Feb 17 16:03:08 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 17 16:03:08 crc restorecon[4681]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 17 16:03:08 crc restorecon[4681]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 17 16:03:08 crc restorecon[4681]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc 
restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 16:03:08 crc restorecon[4681]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 16:03:08 crc restorecon[4681]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 16:03:08 crc restorecon[4681]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 16:03:08 crc 
restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 
16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 16:03:08 crc restorecon[4681]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 16:03:08 crc 
restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 16:03:08 crc restorecon[4681]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 16:03:08 crc restorecon[4681]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 16:03:08 crc restorecon[4681]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 16:03:08 crc 
restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 16:03:08 crc restorecon[4681]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 
crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:08 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 
16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 16:03:09 crc 
restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc 
restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc 
restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:09 crc restorecon[4681]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc 
restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc 
restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc 
restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:09 crc 
restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:09 crc restorecon[4681]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 16:03:09 crc restorecon[4681]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 16:03:09 crc restorecon[4681]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 17 16:03:10 crc kubenswrapper[4874]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 17 16:03:10 crc kubenswrapper[4874]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 17 16:03:10 crc kubenswrapper[4874]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 17 16:03:10 crc kubenswrapper[4874]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 17 16:03:10 crc kubenswrapper[4874]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 17 16:03:10 crc kubenswrapper[4874]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.200464 4874 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.209895 4874 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.209928 4874 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.209938 4874 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.209948 4874 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.209956 4874 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.209965 4874 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.209975 4874 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.209984 4874 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.209992 4874 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 
16:03:10.210000 4874 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210007 4874 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210015 4874 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210023 4874 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210030 4874 feature_gate.go:330] unrecognized feature gate: Example Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210038 4874 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210048 4874 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210058 4874 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210067 4874 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210100 4874 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210108 4874 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210116 4874 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210124 4874 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210135 4874 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210147 4874 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210156 4874 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210165 4874 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210173 4874 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210182 4874 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210189 4874 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210197 4874 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210205 4874 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210219 4874 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210228 4874 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210236 4874 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210244 4874 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210251 4874 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210261 4874 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210268 4874 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210276 4874 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210288 4874 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210298 4874 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210306 4874 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210315 4874 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210323 4874 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210331 4874 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210339 4874 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210348 4874 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210357 4874 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210366 4874 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210375 4874 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210383 4874 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210392 4874 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210401 4874 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210410 4874 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210422 4874 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210431 4874 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210439 4874 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210447 4874 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210456 4874 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210465 4874 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210473 4874 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210481 4874 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210490 4874 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210498 4874 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210507 4874 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210514 4874 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210522 4874 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210530 4874 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210538 4874 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210546 4874 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.210554 4874 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.210710 4874 flags.go:64] FLAG: --address="0.0.0.0"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.210730 4874 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.210743 4874 flags.go:64] FLAG: --anonymous-auth="true"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.210754 4874 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.210765 4874 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.210775 4874 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.210786 4874 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.210797 4874 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.210807 4874 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.210817 4874 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.210826 4874 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.210847 4874 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.210856 4874 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.210865 4874 flags.go:64] FLAG: --cgroup-root=""
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.210874 4874 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.210883 4874 flags.go:64] FLAG: --client-ca-file=""
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.210892 4874 flags.go:64] FLAG: --cloud-config=""
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.210901 4874 flags.go:64] FLAG: --cloud-provider=""
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.210909 4874 flags.go:64] FLAG: --cluster-dns="[]"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.210921 4874 flags.go:64] FLAG: --cluster-domain=""
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.210929 4874 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.210939 4874 flags.go:64] FLAG: --config-dir=""
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.210948 4874 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.210958 4874 flags.go:64] FLAG: --container-log-max-files="5"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.210970 4874 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.210981 4874 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.210990 4874 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211000 4874 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211008 4874 flags.go:64] FLAG: --contention-profiling="false"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211019 4874 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211423 4874 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211442 4874 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211455 4874 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211468 4874 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211477 4874 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211487 4874 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211496 4874 flags.go:64] FLAG: --enable-load-reader="false"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211505 4874 flags.go:64] FLAG: --enable-server="true"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211514 4874 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211526 4874 flags.go:64] FLAG: --event-burst="100"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211536 4874 flags.go:64] FLAG: --event-qps="50"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211546 4874 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211555 4874 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211564 4874 flags.go:64] FLAG: --eviction-hard=""
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211574 4874 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211584 4874 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211593 4874 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211605 4874 flags.go:64] FLAG: --eviction-soft=""
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211614 4874 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211623 4874 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211632 4874 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211641 4874 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211650 4874 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211659 4874 flags.go:64] FLAG: --fail-swap-on="true"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211668 4874 flags.go:64] FLAG: --feature-gates=""
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211679 4874 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211687 4874 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211697 4874 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211706 4874 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211715 4874 flags.go:64] FLAG: --healthz-port="10248"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211725 4874 flags.go:64] FLAG: --help="false"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211733 4874 flags.go:64] FLAG: --hostname-override=""
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211742 4874 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211751 4874 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211761 4874 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211770 4874 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211779 4874 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211787 4874 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211796 4874 flags.go:64] FLAG: --image-service-endpoint=""
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211805 4874 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211815 4874 flags.go:64] FLAG: --kube-api-burst="100"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211824 4874 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211834 4874 flags.go:64] FLAG: --kube-api-qps="50"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211843 4874 flags.go:64] FLAG: --kube-reserved=""
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211852 4874 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211861 4874 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211870 4874 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211880 4874 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211889 4874 flags.go:64] FLAG: --lock-file=""
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211898 4874 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211907 4874 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211916 4874 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211941 4874 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211951 4874 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211960 4874 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211970 4874 flags.go:64] FLAG: --logging-format="text"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211979 4874 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211988 4874 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.211997 4874 flags.go:64] FLAG: --manifest-url=""
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212006 4874 flags.go:64] FLAG: --manifest-url-header=""
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212018 4874 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212027 4874 flags.go:64] FLAG: --max-open-files="1000000"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212038 4874 flags.go:64] FLAG: --max-pods="110"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212047 4874 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212056 4874 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212066 4874 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212104 4874 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212114 4874 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212123 4874 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212131 4874 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212151 4874 flags.go:64] FLAG: --node-status-max-images="50"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212160 4874 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212169 4874 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212178 4874 flags.go:64] FLAG: --pod-cidr=""
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212186 4874 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212199 4874 flags.go:64] FLAG: --pod-manifest-path=""
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212208 4874 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212217 4874 flags.go:64] FLAG: --pods-per-core="0"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212226 4874 flags.go:64] FLAG: --port="10250"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212237 4874 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212246 4874 flags.go:64] FLAG: --provider-id=""
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212255 4874 flags.go:64] FLAG: --qos-reserved=""
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212264 4874 flags.go:64] FLAG: --read-only-port="10255"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212273 4874 flags.go:64] FLAG: --register-node="true"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212282 4874 flags.go:64] FLAG: --register-schedulable="true"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212291 4874 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212305 4874 flags.go:64] FLAG: --registry-burst="10"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212314 4874 flags.go:64] FLAG: --registry-qps="5"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212324 4874 flags.go:64] FLAG: --reserved-cpus=""
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212335 4874 flags.go:64] FLAG: --reserved-memory=""
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212346 4874 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212356 4874 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212365 4874 flags.go:64] FLAG: --rotate-certificates="false"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212374 4874 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212383 4874 flags.go:64] FLAG: --runonce="false"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212396 4874 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212406 4874 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212415 4874 flags.go:64] FLAG: --seccomp-default="false"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212424 4874 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212434 4874 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212443 4874 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212453 4874 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212462 4874 flags.go:64] FLAG: --storage-driver-password="root"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212471 4874 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212480 4874 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212489 4874 flags.go:64] FLAG: --storage-driver-user="root"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212498 4874 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212507 4874 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212516 4874 flags.go:64] FLAG: --system-cgroups=""
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212525 4874 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212539 4874 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212548 4874 flags.go:64] FLAG: --tls-cert-file=""
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212560 4874 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212576 4874 flags.go:64] FLAG: --tls-min-version=""
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212587 4874 flags.go:64] FLAG: --tls-private-key-file=""
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212598 4874 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212610 4874 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212621 4874 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212656 4874 flags.go:64] FLAG: --v="2"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212669 4874 flags.go:64] FLAG: --version="false"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212680 4874 flags.go:64] FLAG: --vmodule=""
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212690 4874 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.212700 4874 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.212914 4874 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.212926 4874 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.212936 4874 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.212944 4874 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.212956 4874 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.212964 4874 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.212972 4874 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.212980 4874 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.212988 4874 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.212996 4874 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213006 4874 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213015 4874 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213023 4874 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213032 4874 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213039 4874 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213047 4874 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213055 4874 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213063 4874 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213096 4874 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213104 4874 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213112 4874 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213120 4874 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213128 4874 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213136 4874 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213144 4874 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213152 4874 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213159 4874 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213167 4874 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213175 4874 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213183 4874 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213191 4874 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213199 4874 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213207 4874 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213214 4874 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213222 4874 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213232 4874 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213246 4874 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213255 4874 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213265 4874 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213273 4874 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213282 4874 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213291 4874 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213301 4874 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213314 4874 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213326 4874 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213338 4874 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213351 4874 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213363 4874 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213374 4874 feature_gate.go:330] unrecognized feature gate: Example
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213384 4874 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213394 4874 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213403 4874 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213411 4874 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213420 4874 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213428 4874 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213436 4874 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213444 4874 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213452 4874 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213460 4874 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213468 4874 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213476 4874 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213484 4874 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213492 4874 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213500 4874 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213508 4874 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213520 4874 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213532 4874 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213544 4874 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213558 4874 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213569 4874 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.213579 4874 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.213591 4874 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.224388 4874 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.224436 4874 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224568 4874 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224582 4874 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224628 4874 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224640 4874 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224654 4874 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224663 4874 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224672 4874 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224681 4874 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224689 4874 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224698 4874 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224706 4874 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224715 4874 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224723 4874 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224730 4874 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224740 4874 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224749 4874 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224757 4874 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224766 4874 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224775 4874 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224784 4874 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224793 4874 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224802 4874 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224810 4874 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224818 4874 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224827 4874 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224835 4874 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224843 4874 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224851 4874 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224859 4874 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224866 4874 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224874 4874 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224882 4874 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224890 4874 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224901 4874 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224911 4874 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224921 4874 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224929 4874 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224954 4874 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224964 4874 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224975 4874 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224987 4874 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.224996 4874 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225005 4874 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225014 4874 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225023 4874 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225032 4874 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225042 4874 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225050 4874 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225059 4874 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225067 4874 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225102 4874 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225111 4874 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225120 4874 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225128 4874 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225137 4874 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225148 4874 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225158 4874 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225166 4874 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225175 4874 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225184 4874 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225193 4874 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225202 4874 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225210 4874 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225218 4874 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225226 4874 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225234 4874 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225242 4874 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225249 4874 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225257 4874 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225264 4874 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225274 4874 feature_gate.go:330] unrecognized feature gate: Example
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.225287 4874 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225522 4874 feature_gate.go:330] unrecognized feature gate: Example
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225533 4874 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225541 4874 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225550 4874 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225561 4874 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225572 4874 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225581 4874 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225590 4874 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225599 4874 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225608 4874 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225616 4874 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225624 4874 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225632 4874 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225640 4874 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225648 4874 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225656 4874 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225664 4874 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225672 4874 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225680 4874 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225688 4874 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225696 4874 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225703 4874 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225711 4874 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225719 4874 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225726 4874 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225734 4874 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225742 4874 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225749 4874 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225757 4874 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225764 4874 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225773 4874 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225780 4874 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225788 4874 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225795 4874 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225806 4874 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225815 4874 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225823 4874 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225831 4874 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225839 4874 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225846 4874 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225857 4874 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225866 4874 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225875 4874 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225884 4874 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225892 4874 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225900 4874 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225909 4874 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225917 4874 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225927 4874 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225935 4874 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225943 4874 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225951 4874 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225959 4874 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225967 4874 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225974 4874 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225982 4874 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225990 4874 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.225998 4874 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.226006 4874 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.226013 4874 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.226023 4874 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.226033 4874 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.226041 4874 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.226050 4874 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.226058 4874 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.226067 4874 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.226098 4874 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.226106 4874 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.226114 4874 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.226123 4874 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.226132 4874 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.226144 4874 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.226413 4874 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.232361 4874 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.232512 4874 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.234649 4874 server.go:997] "Starting client certificate rotation"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.234699 4874 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.236714 4874 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-30 10:38:53.593218704 +0000 UTC
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.236830 4874 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.262893 4874 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 17 16:03:10 crc kubenswrapper[4874]: E0217 16:03:10.266670 4874 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.268257 4874 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.283819 4874 log.go:25] "Validated CRI v1 runtime API"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.324151 4874 log.go:25] "Validated CRI v1 image API"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.326612 4874 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.331256 4874 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-17-15-58-37-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.331298 4874 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.359013 4874 manager.go:217] Machine: {Timestamp:2026-02-17 16:03:10.356490446 +0000 UTC m=+0.650879077 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654116352 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:496eb863-febf-403f-bc40-ce30c0c4d225 BootID:6be8f3a4-e6e3-4cf0-93a0-9444be233e11 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108168 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827056128 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:dc:29:0f Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:dc:29:0f Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:cd:97:86 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:35:a6:44 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:6c:df:e5 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:2f:cf:b8 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:82:96:4b:05:00:ce Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:6e:3e:30:bd:2d:54 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654116352 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.359516 4874 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.359763 4874 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.360486 4874 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.360766 4874 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.360822 4874 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.361162 4874 topology_manager.go:138] "Creating topology manager with none policy"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.361179 4874 container_manager_linux.go:303] "Creating device plugin manager"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.362253 4874 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.362305 4874 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.362563 4874 state_mem.go:36] "Initialized new in-memory state store"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.362689 4874 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.366289 4874 kubelet.go:418] "Attempting to sync node with API server"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.366325 4874 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.366366 4874 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.366385 4874 kubelet.go:324] "Adding apiserver pod source"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.366403 4874 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.371134 4874 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.372043 4874 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused
Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.372048 4874 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused
Feb 17 16:03:10 crc kubenswrapper[4874]: E0217 16:03:10.372224 4874 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError"
Feb 17 16:03:10 crc kubenswrapper[4874]: E0217 16:03:10.372228 4874 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError"
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.372337 4874 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.374887 4874 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.376592 4874 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.376646 4874 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.376666 4874 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.376716 4874 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.376748 4874 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.376765 4874 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.376779 4874 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.376800 4874 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.376815 4874 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.376828 4874 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.376863 4874 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.376877 4874 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.377935 4874 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.378613 4874 server.go:1280] "Started kubelet" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.379682 4874 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.379822 4874 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.380303 4874 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 17 16:03:10 crc systemd[1]: Started Kubernetes Kubelet. Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.380952 4874 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.381168 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.381208 4874 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.381321 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 21:43:01.388481655 +0000 UTC Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.397839 4874 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.397888 4874 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.398176 4874 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 17 16:03:10 crc kubenswrapper[4874]: E0217 16:03:10.398824 4874 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 16:03:10 crc kubenswrapper[4874]: E0217 16:03:10.398828 4874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="200ms" Feb 17 16:03:10 crc kubenswrapper[4874]: E0217 16:03:10.397819 4874 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.73:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18951427fa77608a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 16:03:10.37856577 +0000 UTC m=+0.672954361,LastTimestamp:2026-02-17 16:03:10.37856577 +0000 UTC m=+0.672954361,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.399852 4874 factory.go:55] Registering systemd factory Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.399892 4874 factory.go:221] Registration of the systemd container factory successfully Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.401293 4874 server.go:460] "Adding debug handlers to kubelet server" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.401311 4874 factory.go:153] Registering CRI-O factory Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.401581 4874 factory.go:221] Registration of the crio container factory successfully Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.401704 4874 
factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.401743 4874 factory.go:103] Registering Raw factory Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.401781 4874 manager.go:1196] Started watching for new ooms in manager Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.401367 4874 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Feb 17 16:03:10 crc kubenswrapper[4874]: E0217 16:03:10.402528 4874 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.402903 4874 manager.go:319] Starting recovery of all containers Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.412902 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.415216 4874 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" 
deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.415301 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.415333 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.415354 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.415381 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.415403 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.415425 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.415445 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.415470 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.415490 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.415509 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.415529 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.415548 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.415571 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.415590 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.415612 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.415637 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.415708 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.415735 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" 
volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.415844 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.415879 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.415900 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.415923 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.415943 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.415994 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416016 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416039 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416062 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416154 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416179 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416198 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416255 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416273 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416294 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416315 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416345 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416369 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416389 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416408 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416428 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416447 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416467 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416485 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" 
Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416504 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416524 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416542 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416560 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416580 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416599 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416618 4874 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416637 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416657 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416684 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416706 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416727 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416748 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416768 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416789 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416808 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416827 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416847 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416867 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416886 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416905 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416925 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416943 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416961 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416979 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.416999 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417018 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417036 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417054 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417072 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417131 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417150 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417169 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417188 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417206 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417223 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417242 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417261 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417278 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417295 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417313 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417332 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417353 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" 
seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417375 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417394 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417412 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417432 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417449 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417467 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417485 4874 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417503 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417520 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417541 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417560 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417580 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417598 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417618 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417637 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417656 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417674 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417692 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417720 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417740 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417762 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417783 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417803 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417823 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417842 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" 
volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417864 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417887 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417904 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417923 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417943 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417962 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" 
seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417980 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.417997 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418015 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418034 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418052 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418069 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418123 4874 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418152 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418177 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418202 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418219 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418236 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418253 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418271 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418289 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418307 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418325 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418342 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418359 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418378 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418395 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418412 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418432 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418449 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418468 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418486 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418503 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418521 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418538 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418555 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418572 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418591 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418609 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418626 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418644 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418662 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418678 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418694 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418714 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418732 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418749 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418766 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418792 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418810 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418827 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418845 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418871 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418892 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418909 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418925 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418943 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418963 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418979 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.418997 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.419015 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.419032 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.419049 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.419067 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.419117 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.419146 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.419164 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.419181 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.419200 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.419221 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.419237 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.419254 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.419272 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.419291 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.419361 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.419380 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.419400 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.419417 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.419435 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.419454 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.419471 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.419488 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.419506 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.419523 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.419541 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.419559 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.419577 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.419594 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.419613 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.419631 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.419651 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.419669 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.419687 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.419704 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.419721 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.419738 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.419756 4874 reconstruct.go:97] "Volume reconstruction finished" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.419768 4874 reconciler.go:26] "Reconciler: start to sync state" Feb 17 16:03:10 crc 
kubenswrapper[4874]: I0217 16:03:10.436128 4874 manager.go:324] Recovery completed Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.453865 4874 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.454352 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.455812 4874 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.455887 4874 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.455917 4874 kubelet.go:2335] "Starting kubelet main sync loop" Feb 17 16:03:10 crc kubenswrapper[4874]: E0217 16:03:10.455974 4874 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.458722 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.458764 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.458776 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:10 crc kubenswrapper[4874]: W0217 16:03:10.459168 4874 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Feb 17 16:03:10 crc kubenswrapper[4874]: E0217 16:03:10.459242 4874 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.460859 4874 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.460885 4874 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.460921 4874 state_mem.go:36] "Initialized new in-memory state store" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.484212 4874 policy_none.go:49] "None policy: Start" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.485292 4874 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.485350 4874 state_mem.go:35] "Initializing new in-memory state store" Feb 17 16:03:10 crc kubenswrapper[4874]: E0217 16:03:10.499213 4874 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.552489 4874 manager.go:334] "Starting Device Plugin manager" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.552555 4874 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.552571 4874 server.go:79] "Starting device plugin registration server" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.553030 4874 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.553053 4874 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 
16:03:10.553276 4874 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.553360 4874 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.553370 4874 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.557241 4874 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.557375 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.558573 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.558613 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.558627 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.558794 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.559934 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.560000 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.565347 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.565417 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.565437 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.565647 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.565868 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.565938 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.568170 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.568214 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.568227 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.568337 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.568358 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.568368 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.569873 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.569911 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.569923 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.570058 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:10 crc 
kubenswrapper[4874]: E0217 16:03:10.570209 4874 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.570281 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.570345 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.571323 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.571361 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.571375 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.571553 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.571854 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.571921 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.572268 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.572323 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.572346 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.572385 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.572409 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.572455 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.572752 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.572856 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.573496 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.573534 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.573550 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.575486 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.575529 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.575572 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:10 crc kubenswrapper[4874]: E0217 16:03:10.600412 4874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="400ms" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.625098 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 
16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.625152 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.625188 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.625222 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.625252 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.625282 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.625339 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.625368 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.625439 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.625498 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.625529 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.625558 4874 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.625589 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.625617 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.625641 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.653892 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.655365 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.655447 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.655475 4874 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.655522 4874 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 16:03:10 crc kubenswrapper[4874]: E0217 16:03:10.656257 4874 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.73:6443: connect: connection refused" node="crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.727248 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.727351 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.727421 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.727454 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.727519 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.727528 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.727565 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.727599 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.727631 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:03:10 crc 
kubenswrapper[4874]: I0217 16:03:10.727499 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.727575 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.727551 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.727766 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.727817 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.727845 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.727862 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.727904 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.727914 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.727947 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.727943 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 16:03:10 crc 
kubenswrapper[4874]: I0217 16:03:10.727931 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.727984 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.728014 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.728110 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.728119 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.728166 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.728214 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.728228 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.728254 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.728218 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.856505 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.858015 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.858113 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.858146 4874 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.858186 4874 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 16:03:10 crc kubenswrapper[4874]: E0217 16:03:10.858831 4874 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.73:6443: connect: connection refused" node="crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.917607 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.934519 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.950780 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.977045 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 17 16:03:10 crc kubenswrapper[4874]: I0217 16:03:10.984226 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:03:11 crc kubenswrapper[4874]: E0217 16:03:11.001804 4874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="800ms" Feb 17 16:03:11 crc kubenswrapper[4874]: W0217 16:03:11.015003 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-90bcf734b1ca307393a191fa6d7227da606004dbf39dc0411037e0deb6abcdaf WatchSource:0}: Error finding container 90bcf734b1ca307393a191fa6d7227da606004dbf39dc0411037e0deb6abcdaf: Status 404 returned error can't find the container with id 90bcf734b1ca307393a191fa6d7227da606004dbf39dc0411037e0deb6abcdaf Feb 17 16:03:11 crc kubenswrapper[4874]: W0217 16:03:11.016627 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-ec125e521a980fc1ab2ef962556f10807479b338572a786878005930abad7d5c WatchSource:0}: Error finding container ec125e521a980fc1ab2ef962556f10807479b338572a786878005930abad7d5c: Status 404 returned error can't find the container with id ec125e521a980fc1ab2ef962556f10807479b338572a786878005930abad7d5c Feb 17 16:03:11 crc kubenswrapper[4874]: W0217 16:03:11.028137 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-30f97668dfc213e4f8a23762512609b00cd92f230c047333e43f09155da20119 WatchSource:0}: Error finding container 30f97668dfc213e4f8a23762512609b00cd92f230c047333e43f09155da20119: Status 404 returned error can't find the container with id 
30f97668dfc213e4f8a23762512609b00cd92f230c047333e43f09155da20119 Feb 17 16:03:11 crc kubenswrapper[4874]: W0217 16:03:11.031421 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-49f288fd6999d4749c4e03570c6629e74b08b92d7578d8d977b92a1c97639f37 WatchSource:0}: Error finding container 49f288fd6999d4749c4e03570c6629e74b08b92d7578d8d977b92a1c97639f37: Status 404 returned error can't find the container with id 49f288fd6999d4749c4e03570c6629e74b08b92d7578d8d977b92a1c97639f37 Feb 17 16:03:11 crc kubenswrapper[4874]: I0217 16:03:11.259565 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:11 crc kubenswrapper[4874]: I0217 16:03:11.261601 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:11 crc kubenswrapper[4874]: I0217 16:03:11.261642 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:11 crc kubenswrapper[4874]: I0217 16:03:11.261651 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:11 crc kubenswrapper[4874]: I0217 16:03:11.261678 4874 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 16:03:11 crc kubenswrapper[4874]: E0217 16:03:11.262193 4874 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.73:6443: connect: connection refused" node="crc" Feb 17 16:03:11 crc kubenswrapper[4874]: I0217 16:03:11.381166 4874 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Feb 17 
16:03:11 crc kubenswrapper[4874]: I0217 16:03:11.385512 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 19:23:48.850146798 +0000 UTC Feb 17 16:03:11 crc kubenswrapper[4874]: I0217 16:03:11.463748 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"49f288fd6999d4749c4e03570c6629e74b08b92d7578d8d977b92a1c97639f37"} Feb 17 16:03:11 crc kubenswrapper[4874]: I0217 16:03:11.465425 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"30f97668dfc213e4f8a23762512609b00cd92f230c047333e43f09155da20119"} Feb 17 16:03:11 crc kubenswrapper[4874]: I0217 16:03:11.466600 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"ed1efe8ecebdf548a4e0d14d4abb998d3e08893f4401deaacbd707d3e199be58"} Feb 17 16:03:11 crc kubenswrapper[4874]: I0217 16:03:11.467553 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"90bcf734b1ca307393a191fa6d7227da606004dbf39dc0411037e0deb6abcdaf"} Feb 17 16:03:11 crc kubenswrapper[4874]: I0217 16:03:11.468601 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ec125e521a980fc1ab2ef962556f10807479b338572a786878005930abad7d5c"} Feb 17 16:03:11 crc kubenswrapper[4874]: W0217 16:03:11.507059 4874 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: 
Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Feb 17 16:03:11 crc kubenswrapper[4874]: E0217 16:03:11.507223 4874 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Feb 17 16:03:11 crc kubenswrapper[4874]: W0217 16:03:11.532450 4874 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Feb 17 16:03:11 crc kubenswrapper[4874]: E0217 16:03:11.532587 4874 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Feb 17 16:03:11 crc kubenswrapper[4874]: W0217 16:03:11.628191 4874 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Feb 17 16:03:11 crc kubenswrapper[4874]: E0217 16:03:11.628322 4874 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: 
connect: connection refused" logger="UnhandledError" Feb 17 16:03:11 crc kubenswrapper[4874]: W0217 16:03:11.709259 4874 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Feb 17 16:03:11 crc kubenswrapper[4874]: E0217 16:03:11.709366 4874 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Feb 17 16:03:11 crc kubenswrapper[4874]: E0217 16:03:11.810496 4874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="1.6s" Feb 17 16:03:12 crc kubenswrapper[4874]: I0217 16:03:12.063194 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:12 crc kubenswrapper[4874]: I0217 16:03:12.065069 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:12 crc kubenswrapper[4874]: I0217 16:03:12.065134 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:12 crc kubenswrapper[4874]: I0217 16:03:12.065145 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:12 crc kubenswrapper[4874]: I0217 16:03:12.065171 4874 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 16:03:12 crc kubenswrapper[4874]: E0217 16:03:12.065552 4874 
kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.73:6443: connect: connection refused" node="crc" Feb 17 16:03:12 crc kubenswrapper[4874]: I0217 16:03:12.358304 4874 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 17 16:03:12 crc kubenswrapper[4874]: E0217 16:03:12.359207 4874 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Feb 17 16:03:12 crc kubenswrapper[4874]: I0217 16:03:12.381178 4874 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Feb 17 16:03:12 crc kubenswrapper[4874]: I0217 16:03:12.386479 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 08:15:31.727025223 +0000 UTC Feb 17 16:03:12 crc kubenswrapper[4874]: I0217 16:03:12.473686 4874 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3" exitCode=0 Feb 17 16:03:12 crc kubenswrapper[4874]: I0217 16:03:12.473800 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:12 crc kubenswrapper[4874]: I0217 16:03:12.473789 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3"} Feb 17 16:03:12 crc kubenswrapper[4874]: I0217 16:03:12.474983 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:12 crc kubenswrapper[4874]: I0217 16:03:12.475031 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:12 crc kubenswrapper[4874]: I0217 16:03:12.475053 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:12 crc kubenswrapper[4874]: I0217 16:03:12.476247 4874 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6" exitCode=0 Feb 17 16:03:12 crc kubenswrapper[4874]: I0217 16:03:12.476671 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:12 crc kubenswrapper[4874]: I0217 16:03:12.476799 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6"} Feb 17 16:03:12 crc kubenswrapper[4874]: I0217 16:03:12.477560 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:12 crc kubenswrapper[4874]: I0217 16:03:12.478017 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:12 crc kubenswrapper[4874]: I0217 16:03:12.478054 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:12 crc kubenswrapper[4874]: I0217 16:03:12.478101 4874 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:12 crc kubenswrapper[4874]: I0217 16:03:12.478675 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:12 crc kubenswrapper[4874]: I0217 16:03:12.478700 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:12 crc kubenswrapper[4874]: I0217 16:03:12.478710 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:12 crc kubenswrapper[4874]: I0217 16:03:12.479475 4874 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="b87e38c74da2293111d9768a94bd8916df751cf5817d8bc968ee6d50df071711" exitCode=0 Feb 17 16:03:12 crc kubenswrapper[4874]: I0217 16:03:12.479521 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:12 crc kubenswrapper[4874]: I0217 16:03:12.479567 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"b87e38c74da2293111d9768a94bd8916df751cf5817d8bc968ee6d50df071711"} Feb 17 16:03:12 crc kubenswrapper[4874]: I0217 16:03:12.481487 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:12 crc kubenswrapper[4874]: I0217 16:03:12.481507 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:12 crc kubenswrapper[4874]: I0217 16:03:12.481518 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:12 crc kubenswrapper[4874]: I0217 16:03:12.487553 4874 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" 
containerID="ef2698fe37dccdc4449581e1b96c78b39e9d839d84cc7df3def8c0595ace1e6c" exitCode=0 Feb 17 16:03:12 crc kubenswrapper[4874]: I0217 16:03:12.487724 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"ef2698fe37dccdc4449581e1b96c78b39e9d839d84cc7df3def8c0595ace1e6c"} Feb 17 16:03:12 crc kubenswrapper[4874]: I0217 16:03:12.487914 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:12 crc kubenswrapper[4874]: I0217 16:03:12.489835 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:12 crc kubenswrapper[4874]: I0217 16:03:12.489878 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:12 crc kubenswrapper[4874]: I0217 16:03:12.489897 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:12 crc kubenswrapper[4874]: I0217 16:03:12.491628 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6205655ec01ba9fb036b13027c9e66ae06974a7e66e08d81e67e68fefed03782"} Feb 17 16:03:12 crc kubenswrapper[4874]: I0217 16:03:12.491689 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"03cdd03d2129e6d94c8faed3b95a896c4dfe25c6b6f7e61d1027ae973c341f34"} Feb 17 16:03:13 crc kubenswrapper[4874]: I0217 16:03:13.380665 4874 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Feb 17 16:03:13 crc kubenswrapper[4874]: I0217 16:03:13.386882 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 22:47:00.783202632 +0000 UTC Feb 17 16:03:13 crc kubenswrapper[4874]: E0217 16:03:13.411866 4874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="3.2s" Feb 17 16:03:13 crc kubenswrapper[4874]: I0217 16:03:13.498873 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f7033e40c0b3cd86bccbe24550c8aae3cd925ab3a7577be6d71921494dbb7093"} Feb 17 16:03:13 crc kubenswrapper[4874]: I0217 16:03:13.498913 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"afb9f42ee9f217e826bc9595c94485be1259956fab34c45407b8e977d0e516eb"} Feb 17 16:03:13 crc kubenswrapper[4874]: I0217 16:03:13.498986 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:13 crc kubenswrapper[4874]: I0217 16:03:13.500667 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:13 crc kubenswrapper[4874]: I0217 16:03:13.500704 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:13 crc kubenswrapper[4874]: I0217 16:03:13.500716 4874 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:13 crc kubenswrapper[4874]: I0217 16:03:13.503814 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd"} Feb 17 16:03:13 crc kubenswrapper[4874]: I0217 16:03:13.503834 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a"} Feb 17 16:03:13 crc kubenswrapper[4874]: I0217 16:03:13.503844 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a"} Feb 17 16:03:13 crc kubenswrapper[4874]: I0217 16:03:13.503854 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681"} Feb 17 16:03:13 crc kubenswrapper[4874]: I0217 16:03:13.506838 4874 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f" exitCode=0 Feb 17 16:03:13 crc kubenswrapper[4874]: I0217 16:03:13.506894 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f"} Feb 17 16:03:13 crc kubenswrapper[4874]: I0217 16:03:13.506963 4874 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Feb 17 16:03:13 crc kubenswrapper[4874]: I0217 16:03:13.507794 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:13 crc kubenswrapper[4874]: I0217 16:03:13.507826 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:13 crc kubenswrapper[4874]: I0217 16:03:13.507835 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:13 crc kubenswrapper[4874]: I0217 16:03:13.509040 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"aff60094f799686e9f12b1d1205a7ef68133f64cbb1c81000a87bc68ead3ab93"} Feb 17 16:03:13 crc kubenswrapper[4874]: I0217 16:03:13.509125 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:13 crc kubenswrapper[4874]: I0217 16:03:13.509786 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:13 crc kubenswrapper[4874]: I0217 16:03:13.509829 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:13 crc kubenswrapper[4874]: I0217 16:03:13.509848 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:13 crc kubenswrapper[4874]: I0217 16:03:13.512325 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ca8c1ee2182853f3ba1a4fe6c34a8e6c301a004f857b7f95e0882679502f8b2a"} Feb 17 16:03:13 crc kubenswrapper[4874]: I0217 16:03:13.512352 4874 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3e042242c2aeb7ad75c644052a8fd541837d1238dbd8919ffd9c9176d4d9deb3"} Feb 17 16:03:13 crc kubenswrapper[4874]: I0217 16:03:13.512363 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c2422e5203204daa5fcfbaa85ff563cacb69c9dd0a0dae355132cbe0fefe12a9"} Feb 17 16:03:13 crc kubenswrapper[4874]: I0217 16:03:13.512431 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:13 crc kubenswrapper[4874]: I0217 16:03:13.513400 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:13 crc kubenswrapper[4874]: I0217 16:03:13.513426 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:13 crc kubenswrapper[4874]: I0217 16:03:13.513435 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:13 crc kubenswrapper[4874]: W0217 16:03:13.568604 4874 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Feb 17 16:03:13 crc kubenswrapper[4874]: E0217 16:03:13.568713 4874 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Feb 17 16:03:13 crc 
kubenswrapper[4874]: I0217 16:03:13.665639 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:13 crc kubenswrapper[4874]: I0217 16:03:13.667215 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:13 crc kubenswrapper[4874]: I0217 16:03:13.667266 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:13 crc kubenswrapper[4874]: I0217 16:03:13.667278 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:13 crc kubenswrapper[4874]: I0217 16:03:13.667304 4874 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 16:03:13 crc kubenswrapper[4874]: E0217 16:03:13.667802 4874 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.73:6443: connect: connection refused" node="crc" Feb 17 16:03:13 crc kubenswrapper[4874]: W0217 16:03:13.820302 4874 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Feb 17 16:03:13 crc kubenswrapper[4874]: E0217 16:03:13.820402 4874 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Feb 17 16:03:13 crc kubenswrapper[4874]: W0217 16:03:13.859360 4874 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Feb 17 16:03:13 crc kubenswrapper[4874]: E0217 16:03:13.859479 4874 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Feb 17 16:03:14 crc kubenswrapper[4874]: I0217 16:03:14.386986 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 14:48:52.334098013 +0000 UTC Feb 17 16:03:14 crc kubenswrapper[4874]: I0217 16:03:14.519546 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab"} Feb 17 16:03:14 crc kubenswrapper[4874]: I0217 16:03:14.519648 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:14 crc kubenswrapper[4874]: I0217 16:03:14.521202 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:14 crc kubenswrapper[4874]: I0217 16:03:14.521243 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:14 crc kubenswrapper[4874]: I0217 16:03:14.521260 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:14 crc kubenswrapper[4874]: I0217 16:03:14.523484 4874 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" 
containerID="749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a" exitCode=0 Feb 17 16:03:14 crc kubenswrapper[4874]: I0217 16:03:14.523609 4874 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 16:03:14 crc kubenswrapper[4874]: I0217 16:03:14.523648 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:14 crc kubenswrapper[4874]: I0217 16:03:14.523639 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a"} Feb 17 16:03:14 crc kubenswrapper[4874]: I0217 16:03:14.523670 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:14 crc kubenswrapper[4874]: I0217 16:03:14.523689 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:14 crc kubenswrapper[4874]: I0217 16:03:14.524022 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:14 crc kubenswrapper[4874]: I0217 16:03:14.524928 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:14 crc kubenswrapper[4874]: I0217 16:03:14.524998 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:14 crc kubenswrapper[4874]: I0217 16:03:14.525016 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:14 crc kubenswrapper[4874]: I0217 16:03:14.525164 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:14 crc kubenswrapper[4874]: I0217 16:03:14.525190 4874 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:14 crc kubenswrapper[4874]: I0217 16:03:14.525202 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:14 crc kubenswrapper[4874]: I0217 16:03:14.525246 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:14 crc kubenswrapper[4874]: I0217 16:03:14.525278 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:14 crc kubenswrapper[4874]: I0217 16:03:14.525295 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:14 crc kubenswrapper[4874]: I0217 16:03:14.525513 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:14 crc kubenswrapper[4874]: I0217 16:03:14.525537 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:14 crc kubenswrapper[4874]: I0217 16:03:14.525555 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:14 crc kubenswrapper[4874]: I0217 16:03:14.706104 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:03:14 crc kubenswrapper[4874]: I0217 16:03:14.858460 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 16:03:15 crc kubenswrapper[4874]: I0217 16:03:15.387609 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 02:17:02.133323807 +0000 UTC Feb 17 16:03:15 crc kubenswrapper[4874]: I0217 16:03:15.530624 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d53112886a189833decee24627e6fd183022c19f454a873f86a38fecdd4505f3"} Feb 17 16:03:15 crc kubenswrapper[4874]: I0217 16:03:15.530685 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:15 crc kubenswrapper[4874]: I0217 16:03:15.530691 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"55ff3c28ab17a456eb6c403402a76fcc427fe799f05577f1c66a286392e67763"} Feb 17 16:03:15 crc kubenswrapper[4874]: I0217 16:03:15.530739 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"738bfe704918e39da4af889b57236556ee1f301bfa8327664b5d25601641d052"} Feb 17 16:03:15 crc kubenswrapper[4874]: I0217 16:03:15.530783 4874 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 16:03:15 crc kubenswrapper[4874]: I0217 16:03:15.530846 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:15 crc kubenswrapper[4874]: I0217 16:03:15.531827 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:15 crc kubenswrapper[4874]: I0217 16:03:15.531957 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:15 crc kubenswrapper[4874]: I0217 16:03:15.532010 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:15 crc kubenswrapper[4874]: I0217 16:03:15.532141 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:15 crc kubenswrapper[4874]: I0217 16:03:15.532185 4874 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:15 crc kubenswrapper[4874]: I0217 16:03:15.532199 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:15 crc kubenswrapper[4874]: I0217 16:03:15.537033 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:03:16 crc kubenswrapper[4874]: I0217 16:03:16.388060 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 11:39:51.419883977 +0000 UTC Feb 17 16:03:16 crc kubenswrapper[4874]: I0217 16:03:16.539793 4874 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 16:03:16 crc kubenswrapper[4874]: I0217 16:03:16.539803 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"154bdfc094853c9edfdd65f09ef6767ce1dd6441ee7322b3dba35d115460373b"} Feb 17 16:03:16 crc kubenswrapper[4874]: I0217 16:03:16.539857 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:16 crc kubenswrapper[4874]: I0217 16:03:16.539874 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b2a37784fd80aacfdc6328736f02030af697430a34939d05550652921ce4dbcd"} Feb 17 16:03:16 crc kubenswrapper[4874]: I0217 16:03:16.539894 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:16 crc kubenswrapper[4874]: I0217 16:03:16.542105 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:16 crc kubenswrapper[4874]: I0217 
16:03:16.542143 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:16 crc kubenswrapper[4874]: I0217 16:03:16.542159 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:16 crc kubenswrapper[4874]: I0217 16:03:16.542289 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:16 crc kubenswrapper[4874]: I0217 16:03:16.542338 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:16 crc kubenswrapper[4874]: I0217 16:03:16.542370 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:16 crc kubenswrapper[4874]: I0217 16:03:16.717948 4874 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 17 16:03:16 crc kubenswrapper[4874]: I0217 16:03:16.868204 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:16 crc kubenswrapper[4874]: I0217 16:03:16.870434 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:16 crc kubenswrapper[4874]: I0217 16:03:16.870495 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:16 crc kubenswrapper[4874]: I0217 16:03:16.870514 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:16 crc kubenswrapper[4874]: I0217 16:03:16.870555 4874 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 16:03:17 crc kubenswrapper[4874]: I0217 16:03:17.389063 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 
2026-01-14 17:35:43.381791587 +0000 UTC Feb 17 16:03:17 crc kubenswrapper[4874]: I0217 16:03:17.542491 4874 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 16:03:17 crc kubenswrapper[4874]: I0217 16:03:17.542563 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:17 crc kubenswrapper[4874]: I0217 16:03:17.542653 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:17 crc kubenswrapper[4874]: I0217 16:03:17.543940 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:17 crc kubenswrapper[4874]: I0217 16:03:17.543999 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:17 crc kubenswrapper[4874]: I0217 16:03:17.544011 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:17 crc kubenswrapper[4874]: I0217 16:03:17.544214 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:17 crc kubenswrapper[4874]: I0217 16:03:17.544255 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:17 crc kubenswrapper[4874]: I0217 16:03:17.544272 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:18 crc kubenswrapper[4874]: I0217 16:03:18.390915 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 08:15:37.784854727 +0000 UTC Feb 17 16:03:18 crc kubenswrapper[4874]: I0217 16:03:18.703147 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:03:18 crc 
kubenswrapper[4874]: I0217 16:03:18.703366 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:18 crc kubenswrapper[4874]: I0217 16:03:18.704781 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:18 crc kubenswrapper[4874]: I0217 16:03:18.704834 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:18 crc kubenswrapper[4874]: I0217 16:03:18.704852 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:19 crc kubenswrapper[4874]: I0217 16:03:19.314941 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 16:03:19 crc kubenswrapper[4874]: I0217 16:03:19.315203 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:19 crc kubenswrapper[4874]: I0217 16:03:19.316923 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:19 crc kubenswrapper[4874]: I0217 16:03:19.316985 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:19 crc kubenswrapper[4874]: I0217 16:03:19.317007 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:19 crc kubenswrapper[4874]: I0217 16:03:19.392100 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 00:46:39.975301501 +0000 UTC Feb 17 16:03:20 crc kubenswrapper[4874]: I0217 16:03:20.220662 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 17 16:03:20 crc kubenswrapper[4874]: I0217 
16:03:20.220898 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:20 crc kubenswrapper[4874]: I0217 16:03:20.222599 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:20 crc kubenswrapper[4874]: I0217 16:03:20.222665 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:20 crc kubenswrapper[4874]: I0217 16:03:20.222684 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:20 crc kubenswrapper[4874]: I0217 16:03:20.392729 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 16:36:55.39998577 +0000 UTC Feb 17 16:03:20 crc kubenswrapper[4874]: E0217 16:03:20.571384 4874 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 17 16:03:20 crc kubenswrapper[4874]: I0217 16:03:20.605152 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 17 16:03:20 crc kubenswrapper[4874]: I0217 16:03:20.605374 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:20 crc kubenswrapper[4874]: I0217 16:03:20.606868 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:20 crc kubenswrapper[4874]: I0217 16:03:20.606973 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:20 crc kubenswrapper[4874]: I0217 16:03:20.606991 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:20 crc kubenswrapper[4874]: I0217 16:03:20.653732 4874 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 16:03:20 crc kubenswrapper[4874]: I0217 16:03:20.653957 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:20 crc kubenswrapper[4874]: I0217 16:03:20.655564 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:20 crc kubenswrapper[4874]: I0217 16:03:20.655623 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:20 crc kubenswrapper[4874]: I0217 16:03:20.655642 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:21 crc kubenswrapper[4874]: I0217 16:03:21.393456 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 23:43:36.685192295 +0000 UTC Feb 17 16:03:22 crc kubenswrapper[4874]: I0217 16:03:22.025644 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 16:03:22 crc kubenswrapper[4874]: I0217 16:03:22.025868 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:22 crc kubenswrapper[4874]: I0217 16:03:22.030693 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:22 crc kubenswrapper[4874]: I0217 16:03:22.030751 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:22 crc kubenswrapper[4874]: I0217 16:03:22.030768 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:22 crc kubenswrapper[4874]: I0217 
16:03:22.034795 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 16:03:22 crc kubenswrapper[4874]: I0217 16:03:22.394215 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 17:42:17.467366798 +0000 UTC Feb 17 16:03:22 crc kubenswrapper[4874]: I0217 16:03:22.556201 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:22 crc kubenswrapper[4874]: I0217 16:03:22.557961 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:22 crc kubenswrapper[4874]: I0217 16:03:22.558134 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:22 crc kubenswrapper[4874]: I0217 16:03:22.558164 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:22 crc kubenswrapper[4874]: I0217 16:03:22.561812 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 16:03:23 crc kubenswrapper[4874]: I0217 16:03:23.295385 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 16:03:23 crc kubenswrapper[4874]: I0217 16:03:23.394430 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 17:16:34.352998631 +0000 UTC Feb 17 16:03:23 crc kubenswrapper[4874]: I0217 16:03:23.558838 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:23 crc kubenswrapper[4874]: I0217 16:03:23.560254 4874 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:23 crc kubenswrapper[4874]: I0217 16:03:23.560314 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:23 crc kubenswrapper[4874]: I0217 16:03:23.560329 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:24 crc kubenswrapper[4874]: I0217 16:03:24.382155 4874 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 17 16:03:24 crc kubenswrapper[4874]: W0217 16:03:24.393944 4874 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 17 16:03:24 crc kubenswrapper[4874]: I0217 16:03:24.394072 4874 trace.go:236] Trace[1908778415]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 16:03:14.392) (total time: 10001ms): Feb 17 16:03:24 crc kubenswrapper[4874]: Trace[1908778415]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (16:03:24.393) Feb 17 16:03:24 crc kubenswrapper[4874]: Trace[1908778415]: [10.00182215s] [10.00182215s] END Feb 17 16:03:24 crc kubenswrapper[4874]: E0217 16:03:24.394132 4874 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" 
logger="UnhandledError" Feb 17 16:03:24 crc kubenswrapper[4874]: I0217 16:03:24.394904 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 11:34:25.475094098 +0000 UTC Feb 17 16:03:24 crc kubenswrapper[4874]: I0217 16:03:24.561910 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:24 crc kubenswrapper[4874]: I0217 16:03:24.568963 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:24 crc kubenswrapper[4874]: I0217 16:03:24.568992 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:24 crc kubenswrapper[4874]: I0217 16:03:24.569001 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:24 crc kubenswrapper[4874]: I0217 16:03:24.671047 4874 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54098->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 17 16:03:24 crc kubenswrapper[4874]: I0217 16:03:24.671201 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54098->192.168.126.11:17697: read: connection reset by peer" Feb 17 16:03:25 crc kubenswrapper[4874]: I0217 16:03:25.033428 4874 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with 
statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 17 16:03:25 crc kubenswrapper[4874]: I0217 16:03:25.033524 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 17 16:03:25 crc kubenswrapper[4874]: I0217 16:03:25.040483 4874 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 17 16:03:25 crc kubenswrapper[4874]: I0217 16:03:25.040577 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 17 16:03:25 crc kubenswrapper[4874]: I0217 16:03:25.395431 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 01:36:56.13149303 +0000 UTC Feb 17 16:03:25 crc kubenswrapper[4874]: I0217 16:03:25.548222 4874 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 17 16:03:25 crc kubenswrapper[4874]: [+]log ok Feb 17 16:03:25 crc kubenswrapper[4874]: [+]etcd ok Feb 17 16:03:25 crc kubenswrapper[4874]: 
[+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 17 16:03:25 crc kubenswrapper[4874]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 17 16:03:25 crc kubenswrapper[4874]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 17 16:03:25 crc kubenswrapper[4874]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 17 16:03:25 crc kubenswrapper[4874]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 17 16:03:25 crc kubenswrapper[4874]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 17 16:03:25 crc kubenswrapper[4874]: [+]poststarthook/generic-apiserver-start-informers ok Feb 17 16:03:25 crc kubenswrapper[4874]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 17 16:03:25 crc kubenswrapper[4874]: [+]poststarthook/priority-and-fairness-filter ok Feb 17 16:03:25 crc kubenswrapper[4874]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 17 16:03:25 crc kubenswrapper[4874]: [+]poststarthook/start-apiextensions-informers ok Feb 17 16:03:25 crc kubenswrapper[4874]: [+]poststarthook/start-apiextensions-controllers ok Feb 17 16:03:25 crc kubenswrapper[4874]: [+]poststarthook/crd-informer-synced ok Feb 17 16:03:25 crc kubenswrapper[4874]: [+]poststarthook/start-system-namespaces-controller ok Feb 17 16:03:25 crc kubenswrapper[4874]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 17 16:03:25 crc kubenswrapper[4874]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 17 16:03:25 crc kubenswrapper[4874]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 17 16:03:25 crc kubenswrapper[4874]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 17 16:03:25 crc kubenswrapper[4874]: [+]poststarthook/start-service-ip-repair-controllers ok Feb 17 16:03:25 crc kubenswrapper[4874]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 17 16:03:25 crc kubenswrapper[4874]: 
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Feb 17 16:03:25 crc kubenswrapper[4874]: [+]poststarthook/priority-and-fairness-config-producer ok Feb 17 16:03:25 crc kubenswrapper[4874]: [+]poststarthook/bootstrap-controller ok Feb 17 16:03:25 crc kubenswrapper[4874]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 17 16:03:25 crc kubenswrapper[4874]: [+]poststarthook/start-kube-aggregator-informers ok Feb 17 16:03:25 crc kubenswrapper[4874]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 17 16:03:25 crc kubenswrapper[4874]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 17 16:03:25 crc kubenswrapper[4874]: [+]poststarthook/apiservice-registration-controller ok Feb 17 16:03:25 crc kubenswrapper[4874]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 17 16:03:25 crc kubenswrapper[4874]: [+]poststarthook/apiservice-discovery-controller ok Feb 17 16:03:25 crc kubenswrapper[4874]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 17 16:03:25 crc kubenswrapper[4874]: [+]autoregister-completion ok Feb 17 16:03:25 crc kubenswrapper[4874]: [+]poststarthook/apiservice-openapi-controller ok Feb 17 16:03:25 crc kubenswrapper[4874]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 17 16:03:25 crc kubenswrapper[4874]: livez check failed Feb 17 16:03:25 crc kubenswrapper[4874]: I0217 16:03:25.548288 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 16:03:25 crc kubenswrapper[4874]: I0217 16:03:25.566419 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 17 16:03:25 crc kubenswrapper[4874]: I0217 16:03:25.569267 4874 generic.go:334] 
"Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab" exitCode=255 Feb 17 16:03:25 crc kubenswrapper[4874]: I0217 16:03:25.569378 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab"} Feb 17 16:03:25 crc kubenswrapper[4874]: I0217 16:03:25.569780 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:25 crc kubenswrapper[4874]: I0217 16:03:25.571051 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:25 crc kubenswrapper[4874]: I0217 16:03:25.571098 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:25 crc kubenswrapper[4874]: I0217 16:03:25.571109 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:25 crc kubenswrapper[4874]: I0217 16:03:25.571679 4874 scope.go:117] "RemoveContainer" containerID="41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab" Feb 17 16:03:26 crc kubenswrapper[4874]: I0217 16:03:26.296323 4874 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 16:03:26 crc kubenswrapper[4874]: I0217 16:03:26.296385 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" 
containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 16:03:26 crc kubenswrapper[4874]: I0217 16:03:26.396448 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 04:03:17.298698019 +0000 UTC Feb 17 16:03:26 crc kubenswrapper[4874]: I0217 16:03:26.579324 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 17 16:03:26 crc kubenswrapper[4874]: I0217 16:03:26.582027 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732"} Feb 17 16:03:26 crc kubenswrapper[4874]: I0217 16:03:26.582389 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:26 crc kubenswrapper[4874]: I0217 16:03:26.583647 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:26 crc kubenswrapper[4874]: I0217 16:03:26.583706 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:26 crc kubenswrapper[4874]: I0217 16:03:26.583725 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:27 crc kubenswrapper[4874]: I0217 16:03:27.397007 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 22:14:44.057800233 +0000 UTC Feb 17 16:03:28 crc kubenswrapper[4874]: I0217 
16:03:28.398166 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 09:09:58.80800119 +0000 UTC Feb 17 16:03:28 crc kubenswrapper[4874]: I0217 16:03:28.703827 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:03:28 crc kubenswrapper[4874]: I0217 16:03:28.704064 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:28 crc kubenswrapper[4874]: I0217 16:03:28.705428 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:28 crc kubenswrapper[4874]: I0217 16:03:28.705494 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:28 crc kubenswrapper[4874]: I0217 16:03:28.705515 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:29 crc kubenswrapper[4874]: I0217 16:03:29.399114 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 08:24:53.491647327 +0000 UTC Feb 17 16:03:30 crc kubenswrapper[4874]: E0217 16:03:30.028524 4874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.031392 4874 trace.go:236] Trace[533142501]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 16:03:18.277) (total time: 11753ms): Feb 17 16:03:30 crc kubenswrapper[4874]: Trace[533142501]: ---"Objects listed" error: 11753ms (16:03:30.031) Feb 17 16:03:30 crc kubenswrapper[4874]: Trace[533142501]: 
[11.753774213s] [11.753774213s] END Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.031434 4874 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.031847 4874 trace.go:236] Trace[1802538642]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 16:03:18.471) (total time: 11559ms): Feb 17 16:03:30 crc kubenswrapper[4874]: Trace[1802538642]: ---"Objects listed" error: 11559ms (16:03:30.031) Feb 17 16:03:30 crc kubenswrapper[4874]: Trace[1802538642]: [11.559822802s] [11.559822802s] END Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.031871 4874 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.032836 4874 trace.go:236] Trace[706334760]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 16:03:19.197) (total time: 10834ms): Feb 17 16:03:30 crc kubenswrapper[4874]: Trace[706334760]: ---"Objects listed" error: 10834ms (16:03:30.032) Feb 17 16:03:30 crc kubenswrapper[4874]: Trace[706334760]: [10.834870959s] [10.834870959s] END Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.032855 4874 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.033497 4874 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 17 16:03:30 crc kubenswrapper[4874]: E0217 16:03:30.033725 4874 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.040478 4874 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 17 16:03:30 crc 
kubenswrapper[4874]: I0217 16:03:30.068535 4874 csr.go:261] certificate signing request csr-v8kzh is approved, waiting to be issued Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.083838 4874 csr.go:257] certificate signing request csr-v8kzh is issued Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.233475 4874 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 17 16:03:30 crc kubenswrapper[4874]: W0217 16:03:30.233632 4874 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 17 16:03:30 crc kubenswrapper[4874]: W0217 16:03:30.233651 4874 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 17 16:03:30 crc kubenswrapper[4874]: W0217 16:03:30.233723 4874 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 17 16:03:30 crc kubenswrapper[4874]: E0217 16:03:30.233677 4874 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd/events\": read tcp 38.102.83.73:33716->38.102.83.73:6443: use of closed network connection" event="&Event{ObjectMeta:{etcd-crc.1895142877c73504 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 16:03:12.480949508 +0000 UTC m=+2.775338079,LastTimestamp:2026-02-17 16:03:12.480949508 +0000 UTC m=+2.775338079,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.378388 4874 apiserver.go:52] "Watching apiserver" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.387277 4874 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.387545 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-j77hc","openshift-machine-config-operator/machine-config-daemon-cccdg","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"] Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.387892 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.387914 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 16:03:30 crc kubenswrapper[4874]: E0217 16:03:30.387948 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.388283 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.388348 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.388570 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.388598 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 16:03:30 crc kubenswrapper[4874]: E0217 16:03:30.388606 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.388768 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:30 crc kubenswrapper[4874]: E0217 16:03:30.388800 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.388870 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-j77hc" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.390384 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.391430 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.391546 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.391670 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.391687 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.391776 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.391861 4874 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.391877 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.392055 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.392282 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.392748 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.393040 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.394654 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.394900 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.395449 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.396504 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.398700 4874 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.399287 4874 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 00:17:25.337713466 +0000 UTC Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.401772 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.408642 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.435725 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.435773 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.435868 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.435903 4874 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.435925 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.435947 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.435967 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.435991 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436014 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436038 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436059 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436099 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436141 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436162 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436160 4874 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436183 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436207 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436228 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436250 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436272 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436293 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436285 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436314 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436336 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436358 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod 
\"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436381 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436404 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436425 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436446 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436468 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436489 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436510 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436556 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436579 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436602 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436625 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 
16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436648 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436337 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436671 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436696 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436724 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436746 
4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436769 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436879 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436908 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436931 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436960 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.437019 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.437043 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.437067 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.437107 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.437131 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 17 16:03:30 crc 
kubenswrapper[4874]: I0217 16:03:30.437154 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.437178 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.437202 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.437227 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.437250 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.437278 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.437304 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.437326 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.437351 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.437373 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.437393 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 17 16:03:30 crc 
kubenswrapper[4874]: I0217 16:03:30.437415 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.437435 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.437456 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.437477 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.437503 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.437525 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.437544 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.437565 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.437588 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.437611 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.437632 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.437676 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.437698 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.437721 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.437743 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.437765 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.437789 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 
16:03:30.437811 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438019 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438043 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438066 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438107 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438150 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438174 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438199 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438223 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438245 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438274 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 17 16:03:30 crc 
kubenswrapper[4874]: I0217 16:03:30.438299 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438321 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438344 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438368 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438390 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438412 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438434 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438457 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438480 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438502 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438525 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " 
Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438547 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436667 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438578 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438568 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438666 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438698 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438723 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438748 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438771 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438793 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438798 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438814 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438837 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438860 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438884 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438906 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438913 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438902 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438930 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438949 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.439001 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.439034 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.437810 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.439057 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.439320 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.439732 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.439760 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.439786 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.439809 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.439833 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.439855 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.439878 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.439900 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 16:03:30 crc 
kubenswrapper[4874]: I0217 16:03:30.439921 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.439945 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.439966 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.439988 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.440010 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.440033 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: 
\"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.440054 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.440093 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.440117 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.440140 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.440164 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 17 16:03:30 crc 
kubenswrapper[4874]: I0217 16:03:30.440186 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.440208 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.440231 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.440254 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.440277 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.440298 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.440320 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.440360 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.440382 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.440405 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.440428 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 17 16:03:30 
crc kubenswrapper[4874]: I0217 16:03:30.440455 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.440477 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.440500 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.440523 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.440543 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.440567 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.440598 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.440627 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.440655 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.440704 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.440731 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 
17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.440755 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.440778 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.440799 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.440821 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.440843 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.440864 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.440885 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.440906 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.440928 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.440947 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.440982 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.441004 4874 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.441027 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.441049 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.441090 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.441113 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.441134 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod 
\"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.441157 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.441181 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.441203 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.441225 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.441246 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 
16:03:30.441272 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.441294 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.441315 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.441336 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.441356 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.441381 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod 
\"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.441401 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.441423 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.441444 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.441467 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.441490 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.441511 4874 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.441534 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.441558 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.441580 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.441604 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.441656 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" 
(UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.441686 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.441711 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.441742 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.441770 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bclhx\" (UniqueName: \"kubernetes.io/projected/75d87243-c32f-4eb1-9049-24409fc6ea39-kube-api-access-bclhx\") pod \"machine-config-daemon-cccdg\" (UID: \"75d87243-c32f-4eb1-9049-24409fc6ea39\") " pod="openshift-machine-config-operator/machine-config-daemon-cccdg" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.441801 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.487662 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436375 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436659 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436810 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.436949 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.437119 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.437333 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.437536 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.488683 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.437664 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.437777 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.437768 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.437899 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.437915 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438095 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438224 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438283 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438374 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438541 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438525 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.438583 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.441614 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: E0217 16:03:30.441842 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:03:30.941810574 +0000 UTC m=+21.236199175 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:03:30 crc kubenswrapper[4874]: E0217 16:03:30.441908 4874 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.442302 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.442599 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.442823 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.443066 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.443287 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.475148 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.476966 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.477336 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.477388 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.477487 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.477522 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.477634 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.477754 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.477772 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.477941 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.478123 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.478224 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.478294 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.478350 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.478466 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.478480 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.478612 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.478624 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.478677 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.480034 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.480336 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.480414 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.480778 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.480839 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.481204 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.481610 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.481731 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.482374 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.482551 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.482554 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.482618 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.482897 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.483160 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.483428 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.483481 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.483715 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.483777 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.484389 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.484420 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.484738 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.484783 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.485014 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.485405 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.485742 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.485749 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.486059 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.487242 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.489520 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.487284 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.487546 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.489558 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.487610 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.487814 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.487844 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.487909 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.488142 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.488404 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.488641 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.488894 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.488900 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.489098 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.489289 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.487485 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.489696 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.489862 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.490409 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.490848 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.490946 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.490965 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.491317 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.491602 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.491621 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.491851 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.494686 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.494860 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.494918 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.495162 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.495191 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.495352 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.495391 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.495577 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.495632 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.495702 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.495706 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.495754 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.496067 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.495251 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.496200 4874 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.496306 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.496354 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.499617 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.499800 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.499862 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.499893 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.499907 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.500033 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.500041 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.500285 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.500411 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.497046 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.500521 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.500585 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.500599 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.500654 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.500837 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.501191 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.500866 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.501308 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.501357 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.501386 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.501454 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.501686 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.501697 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.501290 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.502255 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.502302 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.502252 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.502598 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.502608 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.502828 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.502888 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.502976 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbrrm\" (UniqueName: \"kubernetes.io/projected/17e6a08f-68c0-4b0a-a396-9dddcc726d37-kube-api-access-lbrrm\") pod \"node-resolver-j77hc\" (UID: \"17e6a08f-68c0-4b0a-a396-9dddcc726d37\") " pod="openshift-dns/node-resolver-j77hc" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.503089 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.503175 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.503248 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.503323 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/75d87243-c32f-4eb1-9049-24409fc6ea39-rootfs\") pod 
\"machine-config-daemon-cccdg\" (UID: \"75d87243-c32f-4eb1-9049-24409fc6ea39\") " pod="openshift-machine-config-operator/machine-config-daemon-cccdg" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.503248 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.503399 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.503478 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 16:03:30 crc kubenswrapper[4874]: E0217 16:03:30.503520 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 16:03:31.003484897 +0000 UTC m=+21.297873588 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.503547 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.503556 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.503697 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.503751 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.501985 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.504020 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.504050 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.505493 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/75d87243-c32f-4eb1-9049-24409fc6ea39-mcd-auth-proxy-config\") pod \"machine-config-daemon-cccdg\" (UID: \"75d87243-c32f-4eb1-9049-24409fc6ea39\") " pod="openshift-machine-config-operator/machine-config-daemon-cccdg" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.505522 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.505543 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/17e6a08f-68c0-4b0a-a396-9dddcc726d37-hosts-file\") pod \"node-resolver-j77hc\" (UID: \"17e6a08f-68c0-4b0a-a396-9dddcc726d37\") " pod="openshift-dns/node-resolver-j77hc" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.505550 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") 
pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.505559 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/75d87243-c32f-4eb1-9049-24409fc6ea39-proxy-tls\") pod \"machine-config-daemon-cccdg\" (UID: \"75d87243-c32f-4eb1-9049-24409fc6ea39\") " pod="openshift-machine-config-operator/machine-config-daemon-cccdg" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.505632 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:30 crc kubenswrapper[4874]: E0217 16:03:30.505895 4874 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.504329 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.504558 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.504658 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.504675 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.504773 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.506219 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.506188 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.505051 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.505358 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.505365 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.505418 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: E0217 16:03:30.506237 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 16:03:31.006222339 +0000 UTC m=+21.300610900 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.506608 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.506303 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.506570 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.507398 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.507464 4874 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.507528 4874 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.507585 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.507635 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.507643 4874 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.507696 4874 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.507707 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.507719 4874 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.507729 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.507738 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.507751 4874 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc 
kubenswrapper[4874]: I0217 16:03:30.507762 4874 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.507772 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.507781 4874 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.507790 4874 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.507801 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.507811 4874 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.507821 4874 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 
16:03:30.507831 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.507843 4874 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.507852 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.507862 4874 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.507872 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.507881 4874 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.507890 4874 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.507901 4874 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.507910 4874 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.507920 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.507930 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.507941 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.507951 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.507961 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.507970 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.507979 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.507988 4874 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.507996 4874 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508003 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508014 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508023 4874 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508032 4874 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" 
Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508042 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508052 4874 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508061 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508118 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508128 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508137 4874 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508147 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 
16:03:30.508157 4874 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508166 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508175 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508184 4874 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508193 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508201 4874 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508209 4874 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508217 4874 reconciler_common.go:293] "Volume detached for volume 
\"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508226 4874 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508234 4874 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508243 4874 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508252 4874 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508260 4874 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508269 4874 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508279 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" 
DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508288 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508296 4874 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508305 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508313 4874 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508322 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508331 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508339 4874 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 
16:03:30.508348 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508356 4874 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508365 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508373 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508383 4874 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508391 4874 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508400 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508408 4874 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508416 4874 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508424 4874 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508435 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508442 4874 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508451 4874 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508460 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508468 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508475 4874 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508501 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508509 4874 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508518 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508526 4874 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508535 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508544 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: 
\"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508552 4874 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508560 4874 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508572 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508581 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508590 4874 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508599 4874 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508607 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 
16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508616 4874 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508625 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508634 4874 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508645 4874 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508653 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508662 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508671 4874 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508680 4874 reconciler_common.go:293] "Volume detached for 
volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508688 4874 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508697 4874 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508705 4874 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508714 4874 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508723 4874 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508733 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508745 4874 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508758 4874 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508767 4874 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508776 4874 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508784 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508794 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508802 4874 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508811 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node 
\"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508820 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508829 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508837 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508845 4874 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508854 4874 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508864 4874 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508873 4874 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 
16:03:30.508881 4874 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508889 4874 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508897 4874 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508905 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508913 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508920 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508929 4874 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508938 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508947 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508955 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508963 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508971 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508980 4874 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508989 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.508998 4874 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc 
kubenswrapper[4874]: I0217 16:03:30.509006 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.509015 4874 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.509023 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.509032 4874 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.509041 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.509050 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.509059 4874 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.509067 4874 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.509088 4874 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.509098 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.509106 4874 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.509115 4874 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.509123 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.509132 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.509139 4874 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.509150 4874 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.509159 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.509167 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.509176 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.509185 4874 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.512733 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.513133 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.513852 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.514646 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.515998 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.516409 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.520310 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.524116 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.524539 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.524513 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: E0217 16:03:30.524709 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 16:03:30 crc kubenswrapper[4874]: E0217 16:03:30.524734 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 16:03:30 crc kubenswrapper[4874]: E0217 16:03:30.524751 4874 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:30 crc kubenswrapper[4874]: E0217 16:03:30.524791 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 16:03:30 crc kubenswrapper[4874]: E0217 16:03:30.524814 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 16:03:30 crc kubenswrapper[4874]: E0217 16:03:30.524829 4874 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:30 crc kubenswrapper[4874]: E0217 16:03:30.524836 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 16:03:31.024812798 +0000 UTC m=+21.319201429 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:30 crc kubenswrapper[4874]: E0217 16:03:30.524881 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 16:03:31.024863459 +0000 UTC m=+21.319252080 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.524919 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.525362 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.525815 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d87243-c32f-4eb1-9049-24409fc6ea39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cccdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.527675 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.528414 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.528528 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.528813 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.528869 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.529046 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.529485 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.533410 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.538702 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.539457 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.539625 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.543230 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.544734 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.544861 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.545400 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.546121 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.548323 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.549377 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.549437 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.551383 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.551838 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.552936 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.555704 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.559719 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.559871 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.562347 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.567962 4874 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-j77hc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17e6a08f-68c0-4b0a-a396-9dddcc726d37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbrrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-j77hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.568330 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.570516 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.572782 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.573902 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.574439 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.575804 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.578338 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.579200 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.579577 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.580924 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.581480 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.585743 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.586317 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.587529 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.591432 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.592260 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.592730 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.594906 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.596380 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.597105 4874 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.597288 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.599147 4874 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.599824 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.601123 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.601589 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.601711 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.605192 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.606706 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.607441 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.608868 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.609763 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/75d87243-c32f-4eb1-9049-24409fc6ea39-proxy-tls\") pod \"machine-config-daemon-cccdg\" (UID: \"75d87243-c32f-4eb1-9049-24409fc6ea39\") " pod="openshift-machine-config-operator/machine-config-daemon-cccdg" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.609802 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: 
I0217 16:03:30.609809 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bclhx\" (UniqueName: \"kubernetes.io/projected/75d87243-c32f-4eb1-9049-24409fc6ea39-kube-api-access-bclhx\") pod \"machine-config-daemon-cccdg\" (UID: \"75d87243-c32f-4eb1-9049-24409fc6ea39\") " pod="openshift-machine-config-operator/machine-config-daemon-cccdg" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.609838 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.609869 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbrrm\" (UniqueName: \"kubernetes.io/projected/17e6a08f-68c0-4b0a-a396-9dddcc726d37-kube-api-access-lbrrm\") pod \"node-resolver-j77hc\" (UID: \"17e6a08f-68c0-4b0a-a396-9dddcc726d37\") " pod="openshift-dns/node-resolver-j77hc" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.609900 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/75d87243-c32f-4eb1-9049-24409fc6ea39-rootfs\") pod \"machine-config-daemon-cccdg\" (UID: \"75d87243-c32f-4eb1-9049-24409fc6ea39\") " pod="openshift-machine-config-operator/machine-config-daemon-cccdg" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.609925 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 16:03:30 crc 
kubenswrapper[4874]: I0217 16:03:30.609942 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/17e6a08f-68c0-4b0a-a396-9dddcc726d37-hosts-file\") pod \"node-resolver-j77hc\" (UID: \"17e6a08f-68c0-4b0a-a396-9dddcc726d37\") " pod="openshift-dns/node-resolver-j77hc" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.609958 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/75d87243-c32f-4eb1-9049-24409fc6ea39-mcd-auth-proxy-config\") pod \"machine-config-daemon-cccdg\" (UID: \"75d87243-c32f-4eb1-9049-24409fc6ea39\") " pod="openshift-machine-config-operator/machine-config-daemon-cccdg" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.609983 4874 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.609994 4874 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.610022 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.610034 4874 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.610042 4874 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.610050 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.610059 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.610067 4874 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.610237 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/75d87243-c32f-4eb1-9049-24409fc6ea39-rootfs\") pod \"machine-config-daemon-cccdg\" (UID: \"75d87243-c32f-4eb1-9049-24409fc6ea39\") " pod="openshift-machine-config-operator/machine-config-daemon-cccdg" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.610520 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.610565 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: 
\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.610603 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/17e6a08f-68c0-4b0a-a396-9dddcc726d37-hosts-file\") pod \"node-resolver-j77hc\" (UID: \"17e6a08f-68c0-4b0a-a396-9dddcc726d37\") " pod="openshift-dns/node-resolver-j77hc" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.610660 4874 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.610677 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.610691 4874 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.610703 4874 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.610715 4874 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.610727 4874 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.610739 4874 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.610750 4874 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.610763 4874 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.610774 4874 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.610788 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.610800 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.610812 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 
crc kubenswrapper[4874]: I0217 16:03:30.610825 4874 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.610837 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.610849 4874 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.610860 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.610872 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.610883 4874 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.611108 4874 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.611123 4874 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" 
(UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.611134 4874 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.611146 4874 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.611322 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.611616 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/75d87243-c32f-4eb1-9049-24409fc6ea39-mcd-auth-proxy-config\") pod \"machine-config-daemon-cccdg\" (UID: \"75d87243-c32f-4eb1-9049-24409fc6ea39\") " pod="openshift-machine-config-operator/machine-config-daemon-cccdg" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.612132 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.612995 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 
16:03:30.614673 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.615133 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/75d87243-c32f-4eb1-9049-24409fc6ea39-proxy-tls\") pod \"machine-config-daemon-cccdg\" (UID: \"75d87243-c32f-4eb1-9049-24409fc6ea39\") " pod="openshift-machine-config-operator/machine-config-daemon-cccdg" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.615544 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.616740 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.617666 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.619132 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.619950 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.620837 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.621302 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.621473 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d87243-c32f-4eb1-9049-24409fc6ea39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cccdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.622248 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.622777 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.623343 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.624711 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.625237 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.634567 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.634978 4874 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.635048 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.635936 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbrrm\" (UniqueName: 
\"kubernetes.io/projected/17e6a08f-68c0-4b0a-a396-9dddcc726d37-kube-api-access-lbrrm\") pod \"node-resolver-j77hc\" (UID: \"17e6a08f-68c0-4b0a-a396-9dddcc726d37\") " pod="openshift-dns/node-resolver-j77hc" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.637386 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.637546 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-65qcw"] Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.642142 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bclhx\" (UniqueName: \"kubernetes.io/projected/75d87243-c32f-4eb1-9049-24409fc6ea39-kube-api-access-bclhx\") pod \"machine-config-daemon-cccdg\" (UID: \"75d87243-c32f-4eb1-9049-24409fc6ea39\") " pod="openshift-machine-config-operator/machine-config-daemon-cccdg" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.642486 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-hswwv"] Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.642699 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.643431 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-7xphw"] Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.643557 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-hswwv" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.643889 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-2vkxj"] Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.644471 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.644647 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.644654 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-7xphw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.644722 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.644668 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.644802 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.645047 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.645291 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.645939 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.646481 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.646622 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.646841 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.647929 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.648674 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.648943 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.650030 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.650358 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.650598 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.650704 4874 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.650782 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.650933 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.653313 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.658496 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.666140 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-j77hc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17e6a08f-68c0-4b0a-a396-9dddcc726d37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbrrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-j77hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.677680 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.690681 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.701787 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.706199 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.712554 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-slash\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.712661 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-var-lib-openvswitch\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.712741 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-host-var-lib-cni-bin\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.712824 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-run-netns\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.712889 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-host-var-lib-kubelet\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.712951 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-cni-netd\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.713012 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xrf7\" (UniqueName: \"kubernetes.io/projected/10a4777a-2390-401b-86b0-87d298e9f883-kube-api-access-7xrf7\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.713101 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnxcq\" (UniqueName: \"kubernetes.io/projected/9bcec56b-03b2-401b-8a73-6d62f42ba22c-kube-api-access-xnxcq\") pod \"multus-additional-cni-plugins-hswwv\" (UID: \"9bcec56b-03b2-401b-8a73-6d62f42ba22c\") " pod="openshift-multus/multus-additional-cni-plugins-hswwv" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.713171 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/10a4777a-2390-401b-86b0-87d298e9f883-env-overrides\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.713239 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9bcec56b-03b2-401b-8a73-6d62f42ba22c-system-cni-dir\") pod \"multus-additional-cni-plugins-hswwv\" (UID: \"9bcec56b-03b2-401b-8a73-6d62f42ba22c\") " pod="openshift-multus/multus-additional-cni-plugins-hswwv" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.713304 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-os-release\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.713376 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-host-var-lib-cni-multus\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.713460 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-etc-kubernetes\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.713538 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7nmg\" (UniqueName: \"kubernetes.io/projected/8aedd049-0029-44f7-869f-4a3ccdce8413-kube-api-access-m7nmg\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.713610 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.713678 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-host-run-k8s-cni-cncf-io\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.713747 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-run-ovn-kubernetes\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.713812 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/10a4777a-2390-401b-86b0-87d298e9f883-ovn-node-metrics-cert\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.713878 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-run-openvswitch\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.713974 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-log-socket\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.714045 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9bcec56b-03b2-401b-8a73-6d62f42ba22c-os-release\") pod \"multus-additional-cni-plugins-hswwv\" (UID: \"9bcec56b-03b2-401b-8a73-6d62f42ba22c\") " pod="openshift-multus/multus-additional-cni-plugins-hswwv" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.713874 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d87243-c32f-4eb1-9049-24409fc6ea39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cccdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.714136 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-multus-cni-dir\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.714343 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-cni-bin\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.714411 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/10a4777a-2390-401b-86b0-87d298e9f883-ovnkube-config\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.714476 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-node-log\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.714559 4874 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-system-cni-dir\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.714647 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8aedd049-0029-44f7-869f-4a3ccdce8413-cni-binary-copy\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.714728 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-multus-socket-dir-parent\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.714806 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9bcec56b-03b2-401b-8a73-6d62f42ba22c-cnibin\") pod \"multus-additional-cni-plugins-hswwv\" (UID: \"9bcec56b-03b2-401b-8a73-6d62f42ba22c\") " pod="openshift-multus/multus-additional-cni-plugins-hswwv" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.714883 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9bcec56b-03b2-401b-8a73-6d62f42ba22c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-hswwv\" (UID: \"9bcec56b-03b2-401b-8a73-6d62f42ba22c\") " pod="openshift-multus/multus-additional-cni-plugins-hswwv" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 
16:03:30.714946 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9bcec56b-03b2-401b-8a73-6d62f42ba22c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-hswwv\" (UID: \"9bcec56b-03b2-401b-8a73-6d62f42ba22c\") " pod="openshift-multus/multus-additional-cni-plugins-hswwv" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.715013 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/8aedd049-0029-44f7-869f-4a3ccdce8413-multus-daemon-config\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.715135 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-systemd-units\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.715241 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-cnibin\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.715322 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9bcec56b-03b2-401b-8a73-6d62f42ba22c-cni-binary-copy\") pod \"multus-additional-cni-plugins-hswwv\" (UID: \"9bcec56b-03b2-401b-8a73-6d62f42ba22c\") " pod="openshift-multus/multus-additional-cni-plugins-hswwv" Feb 17 
16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.715384 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-hostroot\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.715472 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-etc-openvswitch\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.715535 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-run-ovn\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.715601 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/54846556-797a-4e8d-ab51-aef5343b1fc8-host\") pod \"node-ca-7xphw\" (UID: \"54846556-797a-4e8d-ab51-aef5343b1fc8\") " pod="openshift-image-registry/node-ca-7xphw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.715663 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdtfv\" (UniqueName: \"kubernetes.io/projected/54846556-797a-4e8d-ab51-aef5343b1fc8-kube-api-access-tdtfv\") pod \"node-ca-7xphw\" (UID: \"54846556-797a-4e8d-ab51-aef5343b1fc8\") " pod="openshift-image-registry/node-ca-7xphw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 
16:03:30.715747 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-host-run-multus-certs\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.715835 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/10a4777a-2390-401b-86b0-87d298e9f883-ovnkube-script-lib\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.715914 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/54846556-797a-4e8d-ab51-aef5343b1fc8-serviceca\") pod \"node-ca-7xphw\" (UID: \"54846556-797a-4e8d-ab51-aef5343b1fc8\") " pod="openshift-image-registry/node-ca-7xphw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.716019 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-run-systemd\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.716159 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-kubelet\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 
16:03:30.716258 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-host-run-netns\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.716308 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-multus-conf-dir\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.718182 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 16:03:30 crc kubenswrapper[4874]: W0217 16:03:30.718977 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-4999a517a22c740cdf23e83fc68f42d4fdf3b4d1c99fd58dcdbd5c9b7861a7d8 WatchSource:0}: Error finding container 4999a517a22c740cdf23e83fc68f42d4fdf3b4d1c99fd58dcdbd5c9b7861a7d8: Status 404 returned error can't find the container with id 4999a517a22c740cdf23e83fc68f42d4fdf3b4d1c99fd58dcdbd5c9b7861a7d8 Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.724211 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.729756 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.732374 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.743561 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcec56b-03b2-401b-8a73-6d62f42ba22c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hswwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:30 crc kubenswrapper[4874]: W0217 16:03:30.751290 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75d87243_c32f_4eb1_9049_24409fc6ea39.slice/crio-1d82b360de8897093559cfafaa2c4568921b6f446bf952d04938e788ce0b5943 WatchSource:0}: Error finding container 1d82b360de8897093559cfafaa2c4568921b6f446bf952d04938e788ce0b5943: Status 404 returned error can't find the container with id 1d82b360de8897093559cfafaa2c4568921b6f446bf952d04938e788ce0b5943 Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.751427 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7xphw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54846556-797a-4e8d-ab51-aef5343b1fc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7xphw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.766627 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.767158 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.767222 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.793797 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a4777a-2390-401b-86b0-87d298e9f883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-65qcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.805440 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.805461 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2vkxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aedd049-0029-44f7-869f-4a3ccdce8413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2vkxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.813344 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-j77hc" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.816803 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9bcec56b-03b2-401b-8a73-6d62f42ba22c-os-release\") pod \"multus-additional-cni-plugins-hswwv\" (UID: \"9bcec56b-03b2-401b-8a73-6d62f42ba22c\") " pod="openshift-multus/multus-additional-cni-plugins-hswwv" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.816854 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-multus-cni-dir\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.816886 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-node-log\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc 
kubenswrapper[4874]: I0217 16:03:30.816911 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-cni-bin\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.816931 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/10a4777a-2390-401b-86b0-87d298e9f883-ovnkube-config\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.816948 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-system-cni-dir\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.816965 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8aedd049-0029-44f7-869f-4a3ccdce8413-cni-binary-copy\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.816985 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-multus-socket-dir-parent\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817004 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9bcec56b-03b2-401b-8a73-6d62f42ba22c-cnibin\") pod \"multus-additional-cni-plugins-hswwv\" (UID: \"9bcec56b-03b2-401b-8a73-6d62f42ba22c\") " pod="openshift-multus/multus-additional-cni-plugins-hswwv" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817023 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9bcec56b-03b2-401b-8a73-6d62f42ba22c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-hswwv\" (UID: \"9bcec56b-03b2-401b-8a73-6d62f42ba22c\") " pod="openshift-multus/multus-additional-cni-plugins-hswwv" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817044 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9bcec56b-03b2-401b-8a73-6d62f42ba22c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-hswwv\" (UID: \"9bcec56b-03b2-401b-8a73-6d62f42ba22c\") " pod="openshift-multus/multus-additional-cni-plugins-hswwv" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817090 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/8aedd049-0029-44f7-869f-4a3ccdce8413-multus-daemon-config\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817113 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-systemd-units\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817130 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cnibin\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-cnibin\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817157 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-etc-openvswitch\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817178 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-run-ovn\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817198 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/54846556-797a-4e8d-ab51-aef5343b1fc8-host\") pod \"node-ca-7xphw\" (UID: \"54846556-797a-4e8d-ab51-aef5343b1fc8\") " pod="openshift-image-registry/node-ca-7xphw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817203 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9bcec56b-03b2-401b-8a73-6d62f42ba22c-os-release\") pod \"multus-additional-cni-plugins-hswwv\" (UID: \"9bcec56b-03b2-401b-8a73-6d62f42ba22c\") " pod="openshift-multus/multus-additional-cni-plugins-hswwv" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817217 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9bcec56b-03b2-401b-8a73-6d62f42ba22c-cni-binary-copy\") pod 
\"multus-additional-cni-plugins-hswwv\" (UID: \"9bcec56b-03b2-401b-8a73-6d62f42ba22c\") " pod="openshift-multus/multus-additional-cni-plugins-hswwv" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817264 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-hostroot\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817293 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdtfv\" (UniqueName: \"kubernetes.io/projected/54846556-797a-4e8d-ab51-aef5343b1fc8-kube-api-access-tdtfv\") pod \"node-ca-7xphw\" (UID: \"54846556-797a-4e8d-ab51-aef5343b1fc8\") " pod="openshift-image-registry/node-ca-7xphw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817311 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-host-run-multus-certs\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817343 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-run-systemd\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817361 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/10a4777a-2390-401b-86b0-87d298e9f883-ovnkube-script-lib\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817378 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/54846556-797a-4e8d-ab51-aef5343b1fc8-serviceca\") pod \"node-ca-7xphw\" (UID: \"54846556-797a-4e8d-ab51-aef5343b1fc8\") " pod="openshift-image-registry/node-ca-7xphw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817409 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-kubelet\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817424 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-host-run-netns\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817439 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-multus-conf-dir\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817458 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-slash\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817474 4874 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-var-lib-openvswitch\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817488 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-host-var-lib-cni-bin\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817515 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-run-netns\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817533 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-cni-netd\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817549 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xrf7\" (UniqueName: \"kubernetes.io/projected/10a4777a-2390-401b-86b0-87d298e9f883-kube-api-access-7xrf7\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817567 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xnxcq\" (UniqueName: \"kubernetes.io/projected/9bcec56b-03b2-401b-8a73-6d62f42ba22c-kube-api-access-xnxcq\") pod \"multus-additional-cni-plugins-hswwv\" (UID: \"9bcec56b-03b2-401b-8a73-6d62f42ba22c\") " pod="openshift-multus/multus-additional-cni-plugins-hswwv" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817588 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-host-var-lib-kubelet\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817608 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/10a4777a-2390-401b-86b0-87d298e9f883-env-overrides\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817636 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9bcec56b-03b2-401b-8a73-6d62f42ba22c-system-cni-dir\") pod \"multus-additional-cni-plugins-hswwv\" (UID: \"9bcec56b-03b2-401b-8a73-6d62f42ba22c\") " pod="openshift-multus/multus-additional-cni-plugins-hswwv" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817664 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-os-release\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817710 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7nmg\" (UniqueName: 
\"kubernetes.io/projected/8aedd049-0029-44f7-869f-4a3ccdce8413-kube-api-access-m7nmg\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817731 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817750 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-host-run-k8s-cni-cncf-io\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817765 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-host-var-lib-cni-multus\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817780 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-etc-kubernetes\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817799 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-run-openvswitch\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817815 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-log-socket\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817823 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/10a4777a-2390-401b-86b0-87d298e9f883-ovnkube-config\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817840 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-run-ovn-kubernetes\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817857 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/10a4777a-2390-401b-86b0-87d298e9f883-ovn-node-metrics-cert\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817962 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/9bcec56b-03b2-401b-8a73-6d62f42ba22c-cni-binary-copy\") pod \"multus-additional-cni-plugins-hswwv\" (UID: \"9bcec56b-03b2-401b-8a73-6d62f42ba22c\") " pod="openshift-multus/multus-additional-cni-plugins-hswwv" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817989 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-node-log\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.817965 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-multus-cni-dir\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.818017 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-cni-bin\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.818057 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-system-cni-dir\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.818475 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-cni-netd\") pod \"ovnkube-node-65qcw\" (UID: 
\"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.818557 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-cnibin\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.818497 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-etc-openvswitch\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.818526 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-systemd-units\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.818615 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-etc-kubernetes\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.818644 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc 
kubenswrapper[4874]: I0217 16:03:30.818671 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-host-run-k8s-cni-cncf-io\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.818695 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-host-var-lib-cni-multus\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.818684 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-kubelet\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.818719 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-host-run-netns\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.818745 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-host-run-multus-certs\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.818767 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-hostroot\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.818767 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-run-systemd\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.818813 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-slash\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.818905 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-multus-conf-dir\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.818946 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-log-socket\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.818981 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-run-openvswitch\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.819019 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-host-var-lib-kubelet\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.819244 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/8aedd049-0029-44f7-869f-4a3ccdce8413-multus-daemon-config\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.819303 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-multus-socket-dir-parent\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.818106 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.819464 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-run-ovn-kubernetes\") pod \"ovnkube-node-65qcw\" (UID: 
\"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.819491 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-run-ovn\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.819512 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/54846556-797a-4e8d-ab51-aef5343b1fc8-host\") pod \"node-ca-7xphw\" (UID: \"54846556-797a-4e8d-ab51-aef5343b1fc8\") " pod="openshift-image-registry/node-ca-7xphw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.819566 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-var-lib-openvswitch\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.819605 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-run-netns\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.819640 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9bcec56b-03b2-401b-8a73-6d62f42ba22c-system-cni-dir\") pod \"multus-additional-cni-plugins-hswwv\" (UID: \"9bcec56b-03b2-401b-8a73-6d62f42ba22c\") " pod="openshift-multus/multus-additional-cni-plugins-hswwv" Feb 17 16:03:30 crc 
kubenswrapper[4874]: I0217 16:03:30.819667 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-host-var-lib-cni-bin\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.818171 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9bcec56b-03b2-401b-8a73-6d62f42ba22c-cnibin\") pod \"multus-additional-cni-plugins-hswwv\" (UID: \"9bcec56b-03b2-401b-8a73-6d62f42ba22c\") " pod="openshift-multus/multus-additional-cni-plugins-hswwv" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.819901 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8aedd049-0029-44f7-869f-4a3ccdce8413-os-release\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.820487 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/54846556-797a-4e8d-ab51-aef5343b1fc8-serviceca\") pod \"node-ca-7xphw\" (UID: \"54846556-797a-4e8d-ab51-aef5343b1fc8\") " pod="openshift-image-registry/node-ca-7xphw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.822559 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9bcec56b-03b2-401b-8a73-6d62f42ba22c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-hswwv\" (UID: \"9bcec56b-03b2-401b-8a73-6d62f42ba22c\") " pod="openshift-multus/multus-additional-cni-plugins-hswwv" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.822685 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/10a4777a-2390-401b-86b0-87d298e9f883-ovnkube-script-lib\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.822906 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/10a4777a-2390-401b-86b0-87d298e9f883-env-overrides\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.825358 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8aedd049-0029-44f7-869f-4a3ccdce8413-cni-binary-copy\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.825432 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/10a4777a-2390-401b-86b0-87d298e9f883-ovn-node-metrics-cert\") pod \"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.825489 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9bcec56b-03b2-401b-8a73-6d62f42ba22c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-hswwv\" (UID: \"9bcec56b-03b2-401b-8a73-6d62f42ba22c\") " pod="openshift-multus/multus-additional-cni-plugins-hswwv" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.832559 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-j77hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17e6a08f-68c0-4b0a-a396-9dddcc726d37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbrrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-j77hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.838601 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnxcq\" (UniqueName: \"kubernetes.io/projected/9bcec56b-03b2-401b-8a73-6d62f42ba22c-kube-api-access-xnxcq\") pod \"multus-additional-cni-plugins-hswwv\" (UID: \"9bcec56b-03b2-401b-8a73-6d62f42ba22c\") " pod="openshift-multus/multus-additional-cni-plugins-hswwv" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.838649 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xrf7\" (UniqueName: \"kubernetes.io/projected/10a4777a-2390-401b-86b0-87d298e9f883-kube-api-access-7xrf7\") pod 
\"ovnkube-node-65qcw\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.840723 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7nmg\" (UniqueName: \"kubernetes.io/projected/8aedd049-0029-44f7-869f-4a3ccdce8413-kube-api-access-m7nmg\") pod \"multus-2vkxj\" (UID: \"8aedd049-0029-44f7-869f-4a3ccdce8413\") " pod="openshift-multus/multus-2vkxj" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.841228 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdtfv\" (UniqueName: \"kubernetes.io/projected/54846556-797a-4e8d-ab51-aef5343b1fc8-kube-api-access-tdtfv\") pod \"node-ca-7xphw\" (UID: \"54846556-797a-4e8d-ab51-aef5343b1fc8\") " pod="openshift-image-registry/node-ca-7xphw" Feb 17 16:03:30 crc kubenswrapper[4874]: W0217 16:03:30.844513 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17e6a08f_68c0_4b0a_a396_9dddcc726d37.slice/crio-a400b9fed04aa9a16024d74452a108e8bfd868e5664537d020e4df984bc1c4ed WatchSource:0}: Error finding container a400b9fed04aa9a16024d74452a108e8bfd868e5664537d020e4df984bc1c4ed: Status 404 returned error can't find the container with id a400b9fed04aa9a16024d74452a108e8bfd868e5664537d020e4df984bc1c4ed Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.956932 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.965070 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-hswwv" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.974834 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-7xphw" Feb 17 16:03:30 crc kubenswrapper[4874]: I0217 16:03:30.991823 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-2vkxj" Feb 17 16:03:31 crc kubenswrapper[4874]: W0217 16:03:31.013887 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10a4777a_2390_401b_86b0_87d298e9f883.slice/crio-e8a5805694369d8a201e12c8c17cc4f11b2d8cbcb971525d54ebf2a7332be74f WatchSource:0}: Error finding container e8a5805694369d8a201e12c8c17cc4f11b2d8cbcb971525d54ebf2a7332be74f: Status 404 returned error can't find the container with id e8a5805694369d8a201e12c8c17cc4f11b2d8cbcb971525d54ebf2a7332be74f Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.018800 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.018916 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.018958 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:31 crc kubenswrapper[4874]: E0217 16:03:31.019120 4874 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 16:03:31 crc kubenswrapper[4874]: E0217 16:03:31.019193 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 16:03:32.019173273 +0000 UTC m=+22.313561834 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 16:03:31 crc kubenswrapper[4874]: E0217 16:03:31.019260 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:03:32.019215984 +0000 UTC m=+22.313604675 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:03:31 crc kubenswrapper[4874]: E0217 16:03:31.019312 4874 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 16:03:31 crc kubenswrapper[4874]: E0217 16:03:31.019420 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 16:03:32.019398049 +0000 UTC m=+22.313786610 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.087531 4874 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-17 15:58:30 +0000 UTC, rotation deadline is 2026-12-16 20:12:29.420342888 +0000 UTC Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.087608 4874 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7252h8m58.332737655s for next certificate rotation Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.119468 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.119522 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:03:31 crc kubenswrapper[4874]: E0217 16:03:31.119642 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 16:03:31 crc kubenswrapper[4874]: E0217 16:03:31.119665 4874 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 16:03:31 crc kubenswrapper[4874]: E0217 16:03:31.119677 4874 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:31 crc kubenswrapper[4874]: E0217 16:03:31.119723 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 16:03:32.119710897 +0000 UTC m=+22.414099458 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:31 crc kubenswrapper[4874]: E0217 16:03:31.119767 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 16:03:31 crc kubenswrapper[4874]: E0217 16:03:31.119778 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 16:03:31 crc kubenswrapper[4874]: E0217 16:03:31.119785 4874 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:31 crc kubenswrapper[4874]: E0217 16:03:31.119813 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 16:03:32.11979672 +0000 UTC m=+22.414185281 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.399611 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 06:22:14.455689054 +0000 UTC Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.598258 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"1663fc7ac1e70a1b17c76b2a9f613520696b593126b2bb9cfdde5d68e431f511"} Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.598301 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"4999a517a22c740cdf23e83fc68f42d4fdf3b4d1c99fd58dcdbd5c9b7861a7d8"} Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.600031 4874 
generic.go:334] "Generic (PLEG): container finished" podID="10a4777a-2390-401b-86b0-87d298e9f883" containerID="5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e" exitCode=0 Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.600148 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" event={"ID":"10a4777a-2390-401b-86b0-87d298e9f883","Type":"ContainerDied","Data":"5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e"} Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.600209 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" event={"ID":"10a4777a-2390-401b-86b0-87d298e9f883","Type":"ContainerStarted","Data":"e8a5805694369d8a201e12c8c17cc4f11b2d8cbcb971525d54ebf2a7332be74f"} Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.600955 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"d7a68dd5c690ffb5a6548d1092a79f2ae5c00a8fcd014d307bd9b9d78c60ab70"} Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.603320 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerStarted","Data":"9c8c5dd3b54804a06909a56d1b152a702e8adbc38f14f0042b488c9529fc4eb4"} Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.603358 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerStarted","Data":"5a56172a01d8a118c7b8ed0bfcef586738d47583c8b6127b67e6f4aaedeba141"} Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.603369 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerStarted","Data":"1d82b360de8897093559cfafaa2c4568921b6f446bf952d04938e788ce0b5943"} Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.604415 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7xphw" event={"ID":"54846556-797a-4e8d-ab51-aef5343b1fc8","Type":"ContainerStarted","Data":"ebb52d07a411f6900f9fef16186758fa5f9b44ed2c54944bf66220dd839b774d"} Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.604455 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7xphw" event={"ID":"54846556-797a-4e8d-ab51-aef5343b1fc8","Type":"ContainerStarted","Data":"103bdc1268620ee48489956b25e62a3939b72b7eb4fd1c9e1569d25c79f56b80"} Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.605303 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-j77hc" event={"ID":"17e6a08f-68c0-4b0a-a396-9dddcc726d37","Type":"ContainerStarted","Data":"d7ecf223725ef2b8a8bcb1d02065b33fbdbcd1bb76b10476594aac228e32321c"} Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.605327 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-j77hc" event={"ID":"17e6a08f-68c0-4b0a-a396-9dddcc726d37","Type":"ContainerStarted","Data":"a400b9fed04aa9a16024d74452a108e8bfd868e5664537d020e4df984bc1c4ed"} Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.606350 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" event={"ID":"9bcec56b-03b2-401b-8a73-6d62f42ba22c","Type":"ContainerStarted","Data":"9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63"} Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.606390 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" 
event={"ID":"9bcec56b-03b2-401b-8a73-6d62f42ba22c","Type":"ContainerStarted","Data":"29593fdd814121687c7ab0eb670561b0e9d5b2d99b5cea18f7a13e4f81a35c9e"} Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.607852 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ae9eb703281270deafd626e5313a045812c23d81fc8adcab91226e656611483a"} Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.607891 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4e922b26792a90616c81196984c75e5245d1458b951ad5b4cd10d2db99b526bb"} Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.607902 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5289935bd7f593c38f12efd8deebd70ab8d35b9e2072f5280d333107f90838c2"} Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.609193 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2vkxj" event={"ID":"8aedd049-0029-44f7-869f-4a3ccdce8413","Type":"ContainerStarted","Data":"0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24"} Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.609219 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2vkxj" event={"ID":"8aedd049-0029-44f7-869f-4a3ccdce8413","Type":"ContainerStarted","Data":"6fd1cb0611a232478da646a3387d8e0ddc5f2b1eb1bbeea4b4d190ee42c086a5"} Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.613323 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a4777a-2390-401b-86b0-87d298e9f883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-65qcw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.626584 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2vkxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aedd049-0029-44f7-869f-4a3ccdce8413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2vkxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.637639 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.647198 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-j77hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17e6a08f-68c0-4b0a-a396-9dddcc726d37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbrrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-j77hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.656441 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56fa93-1e5d-4786-a935-dd3c1c945e91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:24Z\\\",\\\"message\\\":\\\"W0217 16:03:13.672722 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 16:03:13.673109 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771344193 cert, and key in /tmp/serving-cert-3777882028/serving-signer.crt, /tmp/serving-cert-3777882028/serving-signer.key\\\\nI0217 16:03:13.974653 1 observer_polling.go:159] Starting file observer\\\\nW0217 16:03:13.977062 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 16:03:13.977325 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:13.978495 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3777882028/tls.crt::/tmp/serving-cert-3777882028/tls.key\\\\\\\"\\\\nF0217 16:03:24.664050 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.670370 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1663fc7ac1e70a1b17c76b2a9f613520696b593126b2bb9cfdde5d68e431f511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.685516 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.699315 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d87243-c32f-4eb1-9049-24409fc6ea39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cccdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.712803 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.730160 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.750072 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcec56b-03b2-401b-8a73-6d62f42ba22c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hswwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.762368 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7xphw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54846556-797a-4e8d-ab51-aef5343b1fc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7xphw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.790258 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7649be38-9f50-4cce-9d16-e7100627eca5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ff3c28ab17a456eb6c403402a76fcc427fe799f05577f1c66a286392e67763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53112886a189833decee24627e6fd183022c19f454a873f86a38fecdd4505f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a37784fd80aacfdc6328736f02030af697430a34939d05550652921ce4dbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://154bdfc094853c9edfdd65f09ef6767ce1dd6441ee7322b3dba35d115460373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://738bfe704918e39da4af889b57236556ee1f301bfa8327664b5d25601641d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.802657 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.818706 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.831741 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-j77hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17e6a08f-68c0-4b0a-a396-9dddcc726d37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ecf223725ef2b8a8bcb1d02065b33fbdbcd1bb76b10476594aac228e32321c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbrrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-j77hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.845741 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2vkxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aedd049-0029-44f7-869f-4a3ccdce8413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2vkxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.857590 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56fa93-1e5d-4786-a935-dd3c1c945e91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:24Z\\\",\\\"message\\\":\\\"W0217 16:03:13.672722 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 16:03:13.673109 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771344193 cert, and key in /tmp/serving-cert-3777882028/serving-signer.crt, /tmp/serving-cert-3777882028/serving-signer.key\\\\nI0217 16:03:13.974653 1 observer_polling.go:159] Starting file observer\\\\nW0217 16:03:13.977062 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 16:03:13.977325 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:13.978495 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3777882028/tls.crt::/tmp/serving-cert-3777882028/tls.key\\\\\\\"\\\\nF0217 16:03:24.664050 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.872059 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1663fc7ac1e70a1b17c76b2a9f613520696b593126b2bb9cfdde5d68e431f511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.884143 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.903154 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcec56b-03b2-401b-8a73-6d62f42ba22c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins 
bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hswwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.921212 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7xphw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54846556-797a-4e8d-ab51-aef5343b1fc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebb52d07a411f6900f9fef16186758fa5f9b44ed2c54944bf66220dd839b774d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7xphw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.954630 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7649be38-9f50-4cce-9d16-e7100627eca5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ff3c28ab17a456eb6c403402a76fcc427fe799f05577f1c66a286392e67763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53112886a189833decee24627e6fd183022c19f454a873f86a38fecdd4505f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a37784fd80aacfdc6328736f02030af697430a34939d05550652921ce4dbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16
:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://154bdfc094853c9edfdd65f09ef6767ce1dd6441ee7322b3dba35d115460373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://738bfe704918e39da4af889b57236556ee1f301bfa8327664b5d25601641d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:31 crc kubenswrapper[4874]: I0217 16:03:31.983536 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:32 crc kubenswrapper[4874]: I0217 16:03:32.003196 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9eb703281270deafd626e5313a045812c23d81fc8adcab91226e656611483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e922b26792a90616c81196984c75e5245d1458b951ad5b4cd10d2db99b526bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:32 crc kubenswrapper[4874]: I0217 16:03:32.024587 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d87243-c32f-4eb1-9049-24409fc6ea39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8c5dd3b54804a06909a56d1b152a702e8adbc38f14f0042b488c9529fc4eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56172a01d8a118c7b8ed0bfcef586738d47583
c8b6127b67e6f4aaedeba141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cccdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:32 crc kubenswrapper[4874]: I0217 16:03:32.028574 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:03:32 crc kubenswrapper[4874]: E0217 16:03:32.028723 4874 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:03:34.028699698 +0000 UTC m=+24.323088259 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:03:32 crc kubenswrapper[4874]: I0217 16:03:32.028881 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:32 crc kubenswrapper[4874]: E0217 16:03:32.029011 4874 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 16:03:32 crc kubenswrapper[4874]: E0217 16:03:32.029267 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 16:03:34.029259873 +0000 UTC m=+24.323648434 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 16:03:32 crc kubenswrapper[4874]: I0217 16:03:32.029412 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:32 crc kubenswrapper[4874]: E0217 16:03:32.029555 4874 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 16:03:32 crc kubenswrapper[4874]: E0217 16:03:32.029633 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 16:03:34.029616732 +0000 UTC m=+24.324005293 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 16:03:32 crc kubenswrapper[4874]: I0217 16:03:32.041528 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:32 crc kubenswrapper[4874]: I0217 16:03:32.064468 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a4777a-2390-401b-86b0-87d298e9f883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-65qcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:32 crc kubenswrapper[4874]: I0217 16:03:32.130718 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:03:32 crc kubenswrapper[4874]: I0217 16:03:32.130801 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:32 crc kubenswrapper[4874]: 
E0217 16:03:32.130932 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 16:03:32 crc kubenswrapper[4874]: E0217 16:03:32.130950 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 16:03:32 crc kubenswrapper[4874]: E0217 16:03:32.130962 4874 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:32 crc kubenswrapper[4874]: E0217 16:03:32.131011 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 16:03:34.130995699 +0000 UTC m=+24.425384270 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:32 crc kubenswrapper[4874]: E0217 16:03:32.131066 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 16:03:32 crc kubenswrapper[4874]: E0217 16:03:32.131105 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 16:03:32 crc kubenswrapper[4874]: E0217 16:03:32.131115 4874 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:32 crc kubenswrapper[4874]: E0217 16:03:32.131140 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 16:03:34.131130923 +0000 UTC m=+24.425519484 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:32 crc kubenswrapper[4874]: I0217 16:03:32.400559 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 01:39:05.614696282 +0000 UTC Feb 17 16:03:32 crc kubenswrapper[4874]: I0217 16:03:32.456960 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:03:32 crc kubenswrapper[4874]: E0217 16:03:32.457122 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:03:32 crc kubenswrapper[4874]: I0217 16:03:32.456978 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:32 crc kubenswrapper[4874]: E0217 16:03:32.457197 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:03:32 crc kubenswrapper[4874]: I0217 16:03:32.457222 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:32 crc kubenswrapper[4874]: E0217 16:03:32.457403 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:03:32 crc kubenswrapper[4874]: I0217 16:03:32.462190 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 17 16:03:32 crc kubenswrapper[4874]: I0217 16:03:32.463298 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 17 16:03:32 crc kubenswrapper[4874]: I0217 16:03:32.614871 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" event={"ID":"10a4777a-2390-401b-86b0-87d298e9f883","Type":"ContainerStarted","Data":"3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f"} Feb 17 16:03:32 crc kubenswrapper[4874]: I0217 16:03:32.614920 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" event={"ID":"10a4777a-2390-401b-86b0-87d298e9f883","Type":"ContainerStarted","Data":"4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131"} Feb 17 16:03:32 crc kubenswrapper[4874]: I0217 
16:03:32.614936 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" event={"ID":"10a4777a-2390-401b-86b0-87d298e9f883","Type":"ContainerStarted","Data":"126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242"} Feb 17 16:03:32 crc kubenswrapper[4874]: I0217 16:03:32.614949 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" event={"ID":"10a4777a-2390-401b-86b0-87d298e9f883","Type":"ContainerStarted","Data":"f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1"} Feb 17 16:03:32 crc kubenswrapper[4874]: I0217 16:03:32.614963 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" event={"ID":"10a4777a-2390-401b-86b0-87d298e9f883","Type":"ContainerStarted","Data":"cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0"} Feb 17 16:03:32 crc kubenswrapper[4874]: I0217 16:03:32.616333 4874 generic.go:334] "Generic (PLEG): container finished" podID="9bcec56b-03b2-401b-8a73-6d62f42ba22c" containerID="9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63" exitCode=0 Feb 17 16:03:32 crc kubenswrapper[4874]: I0217 16:03:32.616408 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" event={"ID":"9bcec56b-03b2-401b-8a73-6d62f42ba22c","Type":"ContainerDied","Data":"9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63"} Feb 17 16:03:32 crc kubenswrapper[4874]: I0217 16:03:32.660595 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a4777a-2390-401b-86b0-87d298e9f883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-65qcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:32 crc kubenswrapper[4874]: I0217 16:03:32.685945 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:32 crc kubenswrapper[4874]: I0217 16:03:32.696732 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-j77hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17e6a08f-68c0-4b0a-a396-9dddcc726d37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ecf223725ef2b8a8bcb1d02065b33fbdbcd1bb76b10476594aac228e32321c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbrrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-j77hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:32 crc kubenswrapper[4874]: I0217 16:03:32.715927 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2vkxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aedd049-0029-44f7-869f-4a3ccdce8413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2vkxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:32 crc kubenswrapper[4874]: I0217 16:03:32.730005 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56fa93-1e5d-4786-a935-dd3c1c945e91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:24Z\\\",\\\"message\\\":\\\"W0217 16:03:13.672722 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 16:03:13.673109 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771344193 cert, and key in /tmp/serving-cert-3777882028/serving-signer.crt, /tmp/serving-cert-3777882028/serving-signer.key\\\\nI0217 16:03:13.974653 1 observer_polling.go:159] Starting file observer\\\\nW0217 16:03:13.977062 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 16:03:13.977325 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:13.978495 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3777882028/tls.crt::/tmp/serving-cert-3777882028/tls.key\\\\\\\"\\\\nF0217 16:03:24.664050 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:32 crc kubenswrapper[4874]: I0217 16:03:32.749195 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1663fc7ac1e70a1b17c76b2a9f613520696b593126b2bb9cfdde5d68e431f511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:32 crc kubenswrapper[4874]: I0217 16:03:32.759872 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:32 crc kubenswrapper[4874]: I0217 16:03:32.773175 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:32 crc kubenswrapper[4874]: I0217 16:03:32.790999 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcec56b-03b2-401b-8a73-6d62f42ba22c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hswwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:32 crc kubenswrapper[4874]: I0217 16:03:32.800955 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7xphw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54846556-797a-4e8d-ab51-aef5343b1fc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebb52d07a411f6900f9fef16186758fa5f9b44ed2c54944bf66220dd839b774d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7xphw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:32 crc kubenswrapper[4874]: I0217 16:03:32.822623 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7649be38-9f50-4cce-9d16-e7100627eca5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ff3c28ab17a456eb6c403402a76fcc427fe799f05577f1c66a286392e67763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53112886a189833decee24627e6fd183022c19f454a873f86a38fecdd4505f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a37784fd80aacfdc6328736f02030af697430a34939d05550652921ce4dbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16
:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://154bdfc094853c9edfdd65f09ef6767ce1dd6441ee7322b3dba35d115460373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://738bfe704918e39da4af889b57236556ee1f301bfa8327664b5d25601641d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:32 crc kubenswrapper[4874]: I0217 16:03:32.837498 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:32 crc kubenswrapper[4874]: I0217 16:03:32.849741 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9eb703281270deafd626e5313a045812c23d81fc8adcab91226e656611483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e922b26792a90616c81196984c75e5245d1458b951ad5b4cd10d2db99b526bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:32 crc kubenswrapper[4874]: I0217 16:03:32.861710 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d87243-c32f-4eb1-9049-24409fc6ea39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8c5dd3b54804a06909a56d1b152a702e8adbc38f14f0042b488c9529fc4eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56172a01d8a118c7b8ed0bfcef586738d47583
c8b6127b67e6f4aaedeba141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cccdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.301004 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.306856 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.311740 4874 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.321913 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.335195 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-j77hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17e6a08f-68c0-4b0a-a396-9dddcc726d37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ecf223725ef2b8a8bcb1d02065b33fbdbcd1bb76b10476594aac228e32321c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbrrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-j77hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.350282 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2vkxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aedd049-0029-44f7-869f-4a3ccdce8413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2vkxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.370208 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56fa93-1e5d-4786-a935-dd3c1c945e91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:24Z\\\",\\\"message\\\":\\\"W0217 16:03:13.672722 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 16:03:13.673109 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771344193 cert, and key in /tmp/serving-cert-3777882028/serving-signer.crt, /tmp/serving-cert-3777882028/serving-signer.key\\\\nI0217 16:03:13.974653 1 observer_polling.go:159] Starting file observer\\\\nW0217 16:03:13.977062 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 16:03:13.977325 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:13.978495 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3777882028/tls.crt::/tmp/serving-cert-3777882028/tls.key\\\\\\\"\\\\nF0217 16:03:24.664050 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.384667 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1663fc7ac1e70a1b17c76b2a9f613520696b593126b2bb9cfdde5d68e431f511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.399641 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.401588 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 07:52:12.985522174 +0000 UTC Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.415940 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcec56b-03b2-401b-8a73-6d62f42ba22c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hswwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.428990 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7xphw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54846556-797a-4e8d-ab51-aef5343b1fc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebb52d07a411f6900f9fef16186758fa5f9b44ed2c54944bf66220dd839b774d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7xphw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.466012 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7649be38-9f50-4cce-9d16-e7100627eca5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ff3c28ab17a456eb6c403402a76fcc427fe799f05577f1c66a286392e67763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53112886a189833decee24627e6fd183022c19f454a873f86a38fecdd4505f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a37784fd80aacfdc6328736f02030af697430a34939d05550652921ce4dbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16
:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://154bdfc094853c9edfdd65f09ef6767ce1dd6441ee7322b3dba35d115460373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://738bfe704918e39da4af889b57236556ee1f301bfa8327664b5d25601641d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.487474 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.502119 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9eb703281270deafd626e5313a045812c23d81fc8adcab91226e656611483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e922b26792a90616c81196984c75e5245d1458b951ad5b4cd10d2db99b526bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.514759 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d87243-c32f-4eb1-9049-24409fc6ea39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8c5dd3b54804a06909a56d1b152a702e8adbc38f14f0042b488c9529fc4eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56172a01d8a118c7b8ed0bfcef586738d47583
c8b6127b67e6f4aaedeba141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cccdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.530688 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.553820 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a4777a-2390-401b-86b0-87d298e9f883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-65qcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.572455 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2vkxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aedd049-0029-44f7-869f-4a3ccdce8413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2vkxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.595557 4874 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.606324 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-j77hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17e6a08f-68c0-4b0a-a396-9dddcc726d37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ecf223725ef2b8a8bcb1d02065b33fbdbcd1bb76b10476594aac228e32321c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbrrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-j77hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.621799 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"382ecb629c2e95661e2315080b5313ce9572468316149e358ac1410f73774047"} Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.625136 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" event={"ID":"10a4777a-2390-401b-86b0-87d298e9f883","Type":"ContainerStarted","Data":"ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07"} Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.627204 4874 generic.go:334] "Generic (PLEG): container finished" podID="9bcec56b-03b2-401b-8a73-6d62f42ba22c" containerID="ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9" exitCode=0 Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.627285 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" event={"ID":"9bcec56b-03b2-401b-8a73-6d62f42ba22c","Type":"ContainerDied","Data":"ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9"} Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.628207 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56fa93-1e5d-4786-a935-dd3c1c945e91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:24Z\\\",\\\"message\\\":\\\"W0217 16:03:13.672722 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 16:03:13.673109 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771344193 cert, and key in /tmp/serving-cert-3777882028/serving-signer.crt, /tmp/serving-cert-3777882028/serving-signer.key\\\\nI0217 16:03:13.974653 1 observer_polling.go:159] Starting file observer\\\\nW0217 16:03:13.977062 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 16:03:13.977325 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:13.978495 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3777882028/tls.crt::/tmp/serving-cert-3777882028/tls.key\\\\\\\"\\\\nF0217 16:03:24.664050 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.642852 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1663fc7ac1e70a1b17c76b2a9f613520696b593126b2bb9cfdde5d68e431f511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.664263 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd07de3-ac86-4e94-81f9-983586d43e3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6205655ec01ba9fb036b13027c9e66ae06974a7e66e08d81e67e68fefed03782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03cdd03d2129e6d94c8faed3b95a896c4dfe25c6b6f7e61d1027ae973c341f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb9f42ee9f217e826bc9595c94485be1259956fab34c45407b8e977d0e516eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7033e40c0b3cd86bccbe24550c8aae3cd925ab3a7577be6d71921494dbb7093\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.680616 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9eb703281270deafd626e5313a045812c23d81fc8adcab91226e656611483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e922b26792a90616c81196984c75e5245d1458b951ad5b4cd10d2db99b526bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.698229 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d87243-c32f-4eb1-9049-24409fc6ea39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8c5dd3b54804a06909a56d1b152a702e8adbc38f14f0042b488c9529fc4eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56172a01d8a118c7b8ed0bfcef586738d47583
c8b6127b67e6f4aaedeba141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cccdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.716930 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.733671 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.748834 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcec56b-03b2-401b-8a73-6d62f42ba22c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hswwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.760478 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7xphw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54846556-797a-4e8d-ab51-aef5343b1fc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebb52d07a411f6900f9fef16186758fa5f9b44ed2c54944bf66220dd839b774d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7xphw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.780947 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7649be38-9f50-4cce-9d16-e7100627eca5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ff3c28ab17a456eb6c403402a76fcc427fe799f05577f1c66a286392e67763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53112886a189833decee24627e6fd183022c19f454a873f86a38fecdd4505f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a37784fd80aacfdc6328736f02030af697430a34939d05550652921ce4dbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16
:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://154bdfc094853c9edfdd65f09ef6767ce1dd6441ee7322b3dba35d115460373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://738bfe704918e39da4af889b57236556ee1f301bfa8327664b5d25601641d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.795245 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.818329 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a4777a-2390-401b-86b0-87d298e9f883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-65qcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.833044 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.845834 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-j77hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17e6a08f-68c0-4b0a-a396-9dddcc726d37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ecf223725ef2b8a8bcb1d02065b33fbdbcd1bb76b10476594aac228e32321c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbrrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-j77hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.860652 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2vkxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aedd049-0029-44f7-869f-4a3ccdce8413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2vkxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.876506 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd07de3-ac86-4e94-81f9-983586d43e3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6205655ec01ba9fb036b13027c9e66ae06974a7e66e08d81e67e68fefed03782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-polic
y-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03cdd03d2129e6d94c8faed3b95a896c4dfe25c6b6f7e61d1027ae973c341f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb9f42ee9f217e826bc9595c94485be1259956fab34c45407b8e977d0e516eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7033e40c0b3cd86bccbe24550c8aae3cd925ab3a7577be6d71921494dbb7093\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.893336 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56fa93-1e5d-4786-a935-dd3c1c945e91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:24Z\\\",\\\"message\\\":\\\"W0217 16:03:13.672722 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 16:03:13.673109 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771344193 cert, and key in /tmp/serving-cert-3777882028/serving-signer.crt, /tmp/serving-cert-3777882028/serving-signer.key\\\\nI0217 16:03:13.974653 1 observer_polling.go:159] Starting file observer\\\\nW0217 16:03:13.977062 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 16:03:13.977325 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:13.978495 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3777882028/tls.crt::/tmp/serving-cert-3777882028/tls.key\\\\\\\"\\\\nF0217 16:03:24.664050 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.921191 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1663fc7ac1e70a1b17c76b2a9f613520696b593126b2bb9cfdde5d68e431f511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4874]: I0217 16:03:33.947970 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7xphw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54846556-797a-4e8d-ab51-aef5343b1fc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebb52d07a411f6900f9fef16186758fa5f9b44ed2c54944bf66220dd839b774d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7xphw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:34 crc kubenswrapper[4874]: I0217 16:03:34.000092 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7649be38-9f50-4cce-9d16-e7100627eca5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ff3c28ab17a456eb6c403402a76fcc427fe799f05577f1c66a286392e67763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53112886a189833decee24627e6fd183022c19f454a873f86a38fecdd4505f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a37784fd80aacfdc6328736f02030af697430a34939d05550652921ce4dbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16
:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://154bdfc094853c9edfdd65f09ef6767ce1dd6441ee7322b3dba35d115460373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://738bfe704918e39da4af889b57236556ee1f301bfa8327664b5d25601641d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:34 crc kubenswrapper[4874]: I0217 16:03:34.012345 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:34Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:34 crc kubenswrapper[4874]: I0217 16:03:34.026395 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9eb703281270deafd626e5313a045812c23d81fc8adcab91226e656611483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e922b26792a90616c81196984c75e5245d1458b951ad5b4cd10d2db99b526bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:34Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:34 crc kubenswrapper[4874]: I0217 16:03:34.041622 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d87243-c32f-4eb1-9049-24409fc6ea39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8c5dd3b54804a06909a56d1b152a702e8adbc38f14f0042b488c9529fc4eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56172a01d8a118c7b8ed0bfcef586738d47583
c8b6127b67e6f4aaedeba141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cccdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:34Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:34 crc kubenswrapper[4874]: I0217 16:03:34.049560 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:03:34 crc kubenswrapper[4874]: I0217 16:03:34.049682 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:34 crc kubenswrapper[4874]: E0217 16:03:34.049727 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:03:38.049698673 +0000 UTC m=+28.344087224 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:03:34 crc kubenswrapper[4874]: E0217 16:03:34.049760 4874 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 16:03:34 crc kubenswrapper[4874]: E0217 16:03:34.049811 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 16:03:38.049796015 +0000 UTC m=+28.344184636 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 16:03:34 crc kubenswrapper[4874]: I0217 16:03:34.049800 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:34 crc kubenswrapper[4874]: E0217 16:03:34.049962 4874 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 16:03:34 crc kubenswrapper[4874]: E0217 16:03:34.050008 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 16:03:38.049997431 +0000 UTC m=+28.344386072 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 16:03:34 crc kubenswrapper[4874]: I0217 16:03:34.055551 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:34Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:34 crc kubenswrapper[4874]: I0217 16:03:34.067720 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382ecb629c2e95661e2315080b5313ce9572468316149e358ac1410f73774047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:03:34Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:34 crc kubenswrapper[4874]: I0217 16:03:34.084333 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcec56b-03b2-401b-8a73-6d62f42ba22c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hswwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:34Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:34 crc kubenswrapper[4874]: I0217 16:03:34.104852 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a4777a-2390-401b-86b0-87d298e9f883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-65qcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:34Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:34 crc kubenswrapper[4874]: I0217 16:03:34.150578 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:34 crc kubenswrapper[4874]: I0217 16:03:34.150633 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:03:34 crc kubenswrapper[4874]: E0217 16:03:34.150768 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 16:03:34 crc kubenswrapper[4874]: E0217 16:03:34.150785 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 16:03:34 crc kubenswrapper[4874]: E0217 16:03:34.150799 4874 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:34 crc kubenswrapper[4874]: E0217 16:03:34.150854 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 16:03:38.150838793 +0000 UTC m=+28.445227364 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:34 crc kubenswrapper[4874]: E0217 16:03:34.150943 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 16:03:34 crc kubenswrapper[4874]: E0217 16:03:34.150999 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 16:03:34 crc kubenswrapper[4874]: E0217 16:03:34.151026 4874 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:34 crc kubenswrapper[4874]: E0217 16:03:34.151181 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 16:03:38.151146822 +0000 UTC m=+28.445535423 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:34 crc kubenswrapper[4874]: I0217 16:03:34.402438 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 11:05:20.976331424 +0000 UTC Feb 17 16:03:34 crc kubenswrapper[4874]: I0217 16:03:34.456680 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:34 crc kubenswrapper[4874]: I0217 16:03:34.456698 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:03:34 crc kubenswrapper[4874]: E0217 16:03:34.456863 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:03:34 crc kubenswrapper[4874]: I0217 16:03:34.456716 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:34 crc kubenswrapper[4874]: E0217 16:03:34.456929 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:03:34 crc kubenswrapper[4874]: E0217 16:03:34.456961 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:03:34 crc kubenswrapper[4874]: I0217 16:03:34.633631 4874 generic.go:334] "Generic (PLEG): container finished" podID="9bcec56b-03b2-401b-8a73-6d62f42ba22c" containerID="5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61" exitCode=0 Feb 17 16:03:34 crc kubenswrapper[4874]: I0217 16:03:34.633681 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" event={"ID":"9bcec56b-03b2-401b-8a73-6d62f42ba22c","Type":"ContainerDied","Data":"5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61"} Feb 17 16:03:34 crc kubenswrapper[4874]: I0217 16:03:34.653178 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9eb703281270deafd626e5313a045812c23d81fc8adcab91226e656611483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e922b26792a90616c81196984c75e5245d1458b951ad5b4cd10d2db99b526bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:34Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:34 
crc kubenswrapper[4874]: I0217 16:03:34.668953 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d87243-c32f-4eb1-9049-24409fc6ea39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8c5dd3b54804a06909a56d1b152a702e8adbc38f14f0042b488c9529fc4eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56172a01d8a118c7b8ed0bfcef586738d47583c8b6127b67e6f4aaedeba141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cccdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:34Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:34 crc kubenswrapper[4874]: I0217 16:03:34.704047 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:34Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:34 crc kubenswrapper[4874]: I0217 16:03:34.771302 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382ecb629c2e95661e2315080b5313ce9572468316149e358ac1410f73774047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:03:34Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:34 crc kubenswrapper[4874]: I0217 16:03:34.792839 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcec56b-03b2-401b-8a73-6d62f42ba22c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hswwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:34Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:34 crc kubenswrapper[4874]: I0217 
16:03:34.806540 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7xphw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54846556-797a-4e8d-ab51-aef5343b1fc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebb52d07a411f6900f9fef16186758fa5f9b44ed2c54944bf66220dd839b774d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtfv\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7xphw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:34Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:34 crc kubenswrapper[4874]: I0217 16:03:34.834230 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7649be38-9f50-4cce-9d16-e7100627eca5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ff3c28ab17a456eb6c403402a76fcc427fe799f05577f1c66a286392e67763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53112886a189833decee24627e6fd183022c19f454a873f86a38fecdd4505f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a37784fd80aacfdc6328736f02030af697430a34939d05550652921ce4dbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://154bdfc094853c9edfdd65f09ef6767ce1dd6441ee7322b3dba35d115460373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://738bfe704918e39da4af889b57236556ee1f301bfa8327664b5d25601641d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"
},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:34Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:34 crc kubenswrapper[4874]: I0217 16:03:34.856471 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:34Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:34 crc kubenswrapper[4874]: I0217 16:03:34.889115 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a4777a-2390-401b-86b0-87d298e9f883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-65qcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:34Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:34 crc kubenswrapper[4874]: I0217 16:03:34.903916 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2vkxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aedd049-0029-44f7-869f-4a3ccdce8413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2vkxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:34Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:34 crc kubenswrapper[4874]: I0217 16:03:34.916629 4874 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:34Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:34 crc kubenswrapper[4874]: I0217 16:03:34.925809 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-j77hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17e6a08f-68c0-4b0a-a396-9dddcc726d37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ecf223725ef2b8a8bcb1d02065b33fbdbcd1bb76b10476594aac228e32321c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbrrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-j77hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:34Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:34 crc kubenswrapper[4874]: I0217 16:03:34.942706 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56fa93-1e5d-4786-a935-dd3c1c945e91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:24Z\\\",\\\"message\\\":\\\"W0217 16:03:13.672722 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 16:03:13.673109 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771344193 cert, and key in /tmp/serving-cert-3777882028/serving-signer.crt, /tmp/serving-cert-3777882028/serving-signer.key\\\\nI0217 16:03:13.974653 1 observer_polling.go:159] Starting file observer\\\\nW0217 16:03:13.977062 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 16:03:13.977325 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:13.978495 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3777882028/tls.crt::/tmp/serving-cert-3777882028/tls.key\\\\\\\"\\\\nF0217 16:03:24.664050 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:34Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:34 crc kubenswrapper[4874]: I0217 16:03:34.957311 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1663fc7ac1e70a1b17c76b2a9f613520696b593126b2bb9cfdde5d68e431f511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:34Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:34 crc kubenswrapper[4874]: I0217 16:03:34.970775 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd07de3-ac86-4e94-81f9-983586d43e3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6205655ec01ba9fb036b13027c9e66ae06974a7e66e08d81e67e68fefed03782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03cdd03d2129e6d94c8faed3b95a896c4dfe25c6b6f7e61d1027ae973c341f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb9f42ee9f217e826bc9595c94485be1259956fab34c45407b8e977d0e516eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7033e40c0b3cd86bccbe24550c8aae3cd925ab3a7577be6d71921494dbb7093\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:34Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:35 crc kubenswrapper[4874]: I0217 16:03:35.403267 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 02:12:55.027698355 +0000 UTC Feb 17 16:03:35 crc kubenswrapper[4874]: I0217 16:03:35.644261 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" event={"ID":"10a4777a-2390-401b-86b0-87d298e9f883","Type":"ContainerStarted","Data":"47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433"} Feb 17 16:03:35 crc kubenswrapper[4874]: I0217 16:03:35.647861 4874 generic.go:334] "Generic (PLEG): container finished" podID="9bcec56b-03b2-401b-8a73-6d62f42ba22c" 
containerID="2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090" exitCode=0 Feb 17 16:03:35 crc kubenswrapper[4874]: I0217 16:03:35.647910 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" event={"ID":"9bcec56b-03b2-401b-8a73-6d62f42ba22c","Type":"ContainerDied","Data":"2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090"} Feb 17 16:03:35 crc kubenswrapper[4874]: I0217 16:03:35.668984 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:35Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:35 crc kubenswrapper[4874]: I0217 16:03:35.698330 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-j77hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17e6a08f-68c0-4b0a-a396-9dddcc726d37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ecf223725ef2b8a8bcb1d02065b33fbdbcd1bb76b10476594aac228e32321c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbrrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-j77hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:35Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:35 crc kubenswrapper[4874]: I0217 16:03:35.720384 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2vkxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aedd049-0029-44f7-869f-4a3ccdce8413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2vkxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:35Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:35 crc kubenswrapper[4874]: I0217 16:03:35.740661 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd07de3-ac86-4e94-81f9-983586d43e3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6205655ec01ba9fb036b13027c9e66ae06974a7e66e08d81e67e68fefed03782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-polic
y-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03cdd03d2129e6d94c8faed3b95a896c4dfe25c6b6f7e61d1027ae973c341f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb9f42ee9f217e826bc9595c94485be1259956fab34c45407b8e977d0e516eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7033e40c0b3cd86bccbe24550c8aae3cd925ab3a7577be6d71921494dbb7093\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:35Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:35 crc kubenswrapper[4874]: I0217 16:03:35.771223 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56fa93-1e5d-4786-a935-dd3c1c945e91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:24Z\\\",\\\"message\\\":\\\"W0217 16:03:13.672722 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 16:03:13.673109 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771344193 cert, and key in /tmp/serving-cert-3777882028/serving-signer.crt, /tmp/serving-cert-3777882028/serving-signer.key\\\\nI0217 16:03:13.974653 1 observer_polling.go:159] Starting file observer\\\\nW0217 16:03:13.977062 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 16:03:13.977325 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:13.978495 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3777882028/tls.crt::/tmp/serving-cert-3777882028/tls.key\\\\\\\"\\\\nF0217 16:03:24.664050 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:35Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:35 crc kubenswrapper[4874]: I0217 16:03:35.787318 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1663fc7ac1e70a1b17c76b2a9f613520696b593126b2bb9cfdde5d68e431f511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:35Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:35 crc kubenswrapper[4874]: I0217 16:03:35.818897 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7649be38-9f50-4cce-9d16-e7100627eca5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ff3c28ab17a456eb6c403402a76fcc427fe799f05577f1c66a286392e67763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53112886a189833decee24627e6fd183022c19f454a873f86a38fecdd4505f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a37784fd80aacfdc6328736f02030af697430a34939d05550652921ce4dbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://154bdfc094853c9edfdd65f09ef6767ce1dd6441ee7322b3dba35d115460373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://738bfe704918e39da4af889b57236556ee1f301bfa8327664b5d25601641d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:35Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:35 crc kubenswrapper[4874]: I0217 16:03:35.836329 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:35Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:35 crc kubenswrapper[4874]: I0217 16:03:35.856309 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9eb703281270deafd626e5313a045812c23d81fc8adcab91226e656611483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e922b26792a90616c81196984c75e5245d1458b951ad5b4cd10d2db99b526bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:35Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:35 crc kubenswrapper[4874]: I0217 16:03:35.873227 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d87243-c32f-4eb1-9049-24409fc6ea39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8c5dd3b54804a06909a56d1b152a702e8adbc38f14f0042b488c9529fc4eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56172a01d8a118c7b8ed0bfcef586738d47583
c8b6127b67e6f4aaedeba141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cccdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:35Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:35 crc kubenswrapper[4874]: I0217 16:03:35.890739 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:35Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:35 crc kubenswrapper[4874]: I0217 16:03:35.909879 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382ecb629c2e95661e2315080b5313ce9572468316149e358ac1410f73774047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:03:35Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:35 crc kubenswrapper[4874]: I0217 16:03:35.928510 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcec56b-03b2-401b-8a73-6d62f42ba22c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-hswwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:35Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:35 crc kubenswrapper[4874]: I0217 16:03:35.940917 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7xphw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54846556-797a-4e8d-ab51-aef5343b1fc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebb52d07a411f6900f9fef16186758fa5f9b44ed2c54944bf66220dd839b774d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7xphw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:35Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:35 crc kubenswrapper[4874]: I0217 16:03:35.967001 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a4777a-2390-401b-86b0-87d298e9f883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-65qcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:35Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.403504 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 03:09:37.282308001 +0000 UTC Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.433940 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.436576 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.436628 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.436647 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.436776 4874 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.446354 4874 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.446670 4874 kubelet_node_status.go:79] 
"Successfully registered node" node="crc" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.448217 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.448264 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.448282 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.448304 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.448322 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:36Z","lastTransitionTime":"2026-02-17T16:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.457134 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.457369 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:36 crc kubenswrapper[4874]: E0217 16:03:36.457570 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.457702 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:03:36 crc kubenswrapper[4874]: E0217 16:03:36.457844 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:03:36 crc kubenswrapper[4874]: E0217 16:03:36.457980 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:03:36 crc kubenswrapper[4874]: E0217 16:03:36.471196 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6be8f3a4-e6e3-4cf0-93a0-9444be233e11\\\",\\\"systemUUID\\\":\\\"496eb863-febf-403f-bc40-ce30c0c4d225\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:36Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.477477 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.477516 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.477533 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.477554 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.477571 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:36Z","lastTransitionTime":"2026-02-17T16:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:36 crc kubenswrapper[4874]: E0217 16:03:36.505231 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6be8f3a4-e6e3-4cf0-93a0-9444be233e11\\\",\\\"systemUUID\\\":\\\"496eb863-febf-403f-bc40-ce30c0c4d225\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:36Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.513403 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.513472 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.513491 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.513520 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.513539 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:36Z","lastTransitionTime":"2026-02-17T16:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:36 crc kubenswrapper[4874]: E0217 16:03:36.538394 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6be8f3a4-e6e3-4cf0-93a0-9444be233e11\\\",\\\"systemUUID\\\":\\\"496eb863-febf-403f-bc40-ce30c0c4d225\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:36Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.544823 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.544895 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.544919 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.544947 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.544964 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:36Z","lastTransitionTime":"2026-02-17T16:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:36 crc kubenswrapper[4874]: E0217 16:03:36.568208 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6be8f3a4-e6e3-4cf0-93a0-9444be233e11\\\",\\\"systemUUID\\\":\\\"496eb863-febf-403f-bc40-ce30c0c4d225\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:36Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.573284 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.573379 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.573404 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.573439 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.573464 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:36Z","lastTransitionTime":"2026-02-17T16:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:36 crc kubenswrapper[4874]: E0217 16:03:36.595287 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6be8f3a4-e6e3-4cf0-93a0-9444be233e11\\\",\\\"systemUUID\\\":\\\"496eb863-febf-403f-bc40-ce30c0c4d225\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:36Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:36 crc kubenswrapper[4874]: E0217 16:03:36.595567 4874 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.598243 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.598310 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.598331 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.598359 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.598378 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:36Z","lastTransitionTime":"2026-02-17T16:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.656274 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" event={"ID":"9bcec56b-03b2-401b-8a73-6d62f42ba22c","Type":"ContainerStarted","Data":"442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0"} Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.672021 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-j77hc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17e6a08f-68c0-4b0a-a396-9dddcc726d37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ecf223725ef2b8a8bcb1d02065b33fbdbcd1bb76b10476594aac228e32321c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbrrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-j77hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:36Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.693584 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2vkxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aedd049-0029-44f7-869f-4a3ccdce8413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2vkxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:36Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.700802 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:36 crc 
kubenswrapper[4874]: I0217 16:03:36.700849 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.700867 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.700890 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.700910 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:36Z","lastTransitionTime":"2026-02-17T16:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.713009 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:36Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.735279 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56fa93-1e5d-4786-a935-dd3c1c945e91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:24Z\\\",\\\"message\\\":\\\"W0217 16:03:13.672722 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 16:03:13.673109 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771344193 cert, and key in /tmp/serving-cert-3777882028/serving-signer.crt, /tmp/serving-cert-3777882028/serving-signer.key\\\\nI0217 16:03:13.974653 1 observer_polling.go:159] Starting file observer\\\\nW0217 16:03:13.977062 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 16:03:13.977325 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:13.978495 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3777882028/tls.crt::/tmp/serving-cert-3777882028/tls.key\\\\\\\"\\\\nF0217 16:03:24.664050 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:36Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.752742 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1663fc7ac1e70a1b17c76b2a9f613520696b593126b2bb9cfdde5d68e431f511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:36Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.774501 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd07de3-ac86-4e94-81f9-983586d43e3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6205655ec01ba9fb036b13027c9e66ae06974a7e66e08d81e67e68fefed03782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03cdd03d2129e6d94c8faed3b95a896c4dfe25c6b6f7e61d1027ae973c341f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb9f42ee9f217e826bc9595c94485be1259956fab34c45407b8e977d0e516eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7033e40c0b3cd86bccbe24550c8aae3cd925ab3a7577be6d71921494dbb7093\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:36Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.791615 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:36Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.803419 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.803457 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.803473 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 
16:03:36.803496 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.803514 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:36Z","lastTransitionTime":"2026-02-17T16:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.807934 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9eb703281270deafd626e5313a045812c23d81fc8adcab91226e656611483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e922b26792a90616c81196984c75e5245d1458b951ad5b4cd10d2db99b526bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:36Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.820576 4874 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d87243-c32f-4eb1-9049-24409fc6ea39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8c5dd3b54804a06909a56d1b152a702e8adbc38f14f0042b488c9529fc4eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56172a01d8a118c7b8ed0bfcef586738d47583c8b6127b67e6f4aaedeba141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cccdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:36Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.833192 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:36Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.845371 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382ecb629c2e95661e2315080b5313ce9572468316149e358ac1410f73774047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:03:36Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.867221 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcec56b-03b2-401b-8a73-6d62f42ba22c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnx
cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hswwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:36Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.880509 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7xphw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54846556-797a-4e8d-ab51-aef5343b1fc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebb52d07a411f6900f9fef16186758fa5f9b44ed2c54944bf66220dd839b774d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520e
d63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7xphw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:36Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.906500 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.906585 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.906602 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.906634 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.906650 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:36Z","lastTransitionTime":"2026-02-17T16:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.912181 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7649be38-9f50-4cce-9d16-e7100627eca5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ff3c28ab17a456eb6c403402a76fcc427fe799f05577f1c66a286392e67763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53112886a189833decee24627e6fd183022c19f454a873f86a38fecdd4505f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a37784fd80aacfdc6328736f02030af697430a34939d05550652921ce4dbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\
"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://154bdfc094853c9edfdd65f09ef6767ce1dd6441ee7322b3dba35d115460373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://738bfe704918e39da4af889b57236556ee1f301bfa8327664b5d25601641d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:36Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:36 crc kubenswrapper[4874]: I0217 16:03:36.942474 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a4777a-2390-401b-86b0-87d298e9f883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-65qcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:36Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.010771 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.010839 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.010863 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.010895 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.010918 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:37Z","lastTransitionTime":"2026-02-17T16:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.113862 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.113908 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.113921 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.113938 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.113950 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:37Z","lastTransitionTime":"2026-02-17T16:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.216801 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.217223 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.217235 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.217252 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.217264 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:37Z","lastTransitionTime":"2026-02-17T16:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.319515 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.319553 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.319562 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.319577 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.319587 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:37Z","lastTransitionTime":"2026-02-17T16:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.404312 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 10:17:27.96780731 +0000 UTC Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.463783 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.463817 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.463826 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.463838 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.463848 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:37Z","lastTransitionTime":"2026-02-17T16:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.566654 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.566686 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.566694 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.566727 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.566739 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:37Z","lastTransitionTime":"2026-02-17T16:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.664702 4874 generic.go:334] "Generic (PLEG): container finished" podID="9bcec56b-03b2-401b-8a73-6d62f42ba22c" containerID="442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0" exitCode=0 Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.664825 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" event={"ID":"9bcec56b-03b2-401b-8a73-6d62f42ba22c","Type":"ContainerDied","Data":"442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0"} Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.679507 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" event={"ID":"10a4777a-2390-401b-86b0-87d298e9f883","Type":"ContainerStarted","Data":"982af57ef77a52c65dcbeedfcbf592a4f0f02f8fa5804529428806f63e8a6ba9"} Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.680610 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.680675 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.689396 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.689447 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.689463 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.689486 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:37 
crc kubenswrapper[4874]: I0217 16:03:37.689499 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:37Z","lastTransitionTime":"2026-02-17T16:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.697715 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a4777a-2390-401b-86b0-87d298e9f883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-65qcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.709679 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.713422 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.718178 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.728388 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-j77hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17e6a08f-68c0-4b0a-a396-9dddcc726d37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ecf223725ef2b8a8bcb1d02065b33fbdbcd1bb76b10476594aac228e32321c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbrrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-j77hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.747677 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2vkxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aedd049-0029-44f7-869f-4a3ccdce8413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2vkxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.764970 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd07de3-ac86-4e94-81f9-983586d43e3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6205655ec01ba9fb036b13027c9e66ae06974a7e66e08d81e67e68fefed03782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-polic
y-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03cdd03d2129e6d94c8faed3b95a896c4dfe25c6b6f7e61d1027ae973c341f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb9f42ee9f217e826bc9595c94485be1259956fab34c45407b8e977d0e516eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7033e40c0b3cd86bccbe24550c8aae3cd925ab3a7577be6d71921494dbb7093\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.784066 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56fa93-1e5d-4786-a935-dd3c1c945e91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:24Z\\\",\\\"message\\\":\\\"W0217 16:03:13.672722 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 16:03:13.673109 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771344193 cert, and key in /tmp/serving-cert-3777882028/serving-signer.crt, /tmp/serving-cert-3777882028/serving-signer.key\\\\nI0217 16:03:13.974653 1 observer_polling.go:159] Starting file observer\\\\nW0217 16:03:13.977062 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 16:03:13.977325 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:13.978495 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3777882028/tls.crt::/tmp/serving-cert-3777882028/tls.key\\\\\\\"\\\\nF0217 16:03:24.664050 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.792906 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.792953 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.792963 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.792980 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.792999 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:37Z","lastTransitionTime":"2026-02-17T16:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.799146 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1663fc7ac1e70a1b17c76b2a9f613520696b593126b2bb9cfdde5d68e431f511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.812132 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7xphw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54846556-797a-4e8d-ab51-aef5343b1fc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebb52d07a411f6900f9fef16186758fa5f9b44ed2c54944bf66220dd839b774d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7xphw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.831834 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7649be38-9f50-4cce-9d16-e7100627eca5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ff3c28ab17a456eb6c403402a76fcc427fe799f05577f1c66a286392e67763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53112886a189833decee24627e6fd183022c19f454a873f86a38fecdd4505f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a37784fd80aacfdc6328736f02030af697430a34939d05550652921ce4dbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://154bdfc094853c9edfdd65f09ef6767ce1dd6441ee7322b3dba35d115460373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://738bfe704918e39da4af889b57236556ee1f301bfa8327664b5d25601641d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.847725 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.865710 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9eb703281270deafd626e5313a045812c23d81fc8adcab91226e656611483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e922b26792a90616c81196984c75e5245d1458b951ad5b4cd10d2db99b526bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.882019 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d87243-c32f-4eb1-9049-24409fc6ea39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8c5dd3b54804a06909a56d1b152a702e8adbc38f14f0042b488c9529fc4eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56172a01d8a118c7b8ed0bfcef586738d47583
c8b6127b67e6f4aaedeba141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cccdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.896258 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.896329 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.896348 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:37 crc 
kubenswrapper[4874]: I0217 16:03:37.896375 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.896395 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:37Z","lastTransitionTime":"2026-02-17T16:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.902382 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.922274 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382ecb629c2e95661e2315080b5313ce9572468316149e358ac1410f73774047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.940513 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcec56b-03b2-401b-8a73-6d62f42ba22c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hswwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.972217 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7649be38-9f50-4cce-9d16-e7100627eca5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ff3c28ab17a456eb6c403402a76fcc427fe799f05577f1c66a286392e67763\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53112886a189833decee24627e6fd183022c19f454a873f86a38fecdd4505f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a37784fd80aacfdc6328736f02030af697430a34939d05550652921ce4dbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://154bdfc094853c9edfdd65f09ef6767ce1dd6441ee7322b3dba35d115460373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://738bfe704918e39da4af889b57236556ee1f301bfa8327664b5d25601641d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}}},{
\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.991414 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.999392 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.999450 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.999470 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.999493 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:37 crc kubenswrapper[4874]: I0217 16:03:37.999513 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:37Z","lastTransitionTime":"2026-02-17T16:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.012209 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9eb703281270deafd626e5313a045812c23d81fc8adcab91226e656611483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://4e922b26792a90616c81196984c75e5245d1458b951ad5b4cd10d2db99b526bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.028301 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d87243-c32f-4eb1-9049-24409fc6ea39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8c5dd3b54804a06909a56d1b152a702e8adbc38f14f0042b488c9529fc4eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56172a01d8a118c7b8ed0bfcef586738d47583
c8b6127b67e6f4aaedeba141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cccdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.049903 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.069140 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382ecb629c2e95661e2315080b5313ce9572468316149e358ac1410f73774047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.092665 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcec56b-03b2-401b-8a73-6d62f42ba22c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hswwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.092779 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.092996 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:38 crc kubenswrapper[4874]: E0217 16:03:38.093066 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 16:03:46.093015426 +0000 UTC m=+36.387404027 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.093242 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:38 crc kubenswrapper[4874]: E0217 16:03:38.093527 4874 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 16:03:38 crc kubenswrapper[4874]: E0217 16:03:38.093106 4874 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 16:03:38 crc kubenswrapper[4874]: E0217 16:03:38.093666 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 16:03:46.093609271 +0000 UTC m=+36.387998032 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 16:03:38 crc kubenswrapper[4874]: E0217 16:03:38.093702 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 16:03:46.093684823 +0000 UTC m=+36.388073654 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.102916 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.102978 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.103001 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.103036 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.103059 4874 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:38Z","lastTransitionTime":"2026-02-17T16:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.108211 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7xphw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54846556-797a-4e8d-ab51-aef5343b1fc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebb52d07a411f6900f9fef16186758fa5f9b44ed2c54944bf66220dd839b774d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7xphw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.140611 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a4777a-2390-401b-86b0-87d298e9f883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982af57ef77a52c65dcbeedfcbf592a4f0f02f8fa5804529428806f63e8a6ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-65qcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.164981 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.182446 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-j77hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17e6a08f-68c0-4b0a-a396-9dddcc726d37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ecf223725ef2b8a8bcb1d02065b33fbdbcd1bb76b10476594aac228e32321c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbrrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-j77hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.194686 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.194804 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:38 crc kubenswrapper[4874]: E0217 16:03:38.194961 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 16:03:38 crc kubenswrapper[4874]: E0217 16:03:38.194961 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 16:03:38 crc kubenswrapper[4874]: E0217 16:03:38.194990 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 16:03:38 crc kubenswrapper[4874]: E0217 16:03:38.195008 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 16:03:38 crc kubenswrapper[4874]: E0217 16:03:38.195017 4874 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:38 crc kubenswrapper[4874]: E0217 16:03:38.195026 4874 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:38 crc kubenswrapper[4874]: E0217 16:03:38.195123 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 16:03:46.19507218 +0000 UTC m=+36.489460771 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:38 crc kubenswrapper[4874]: E0217 16:03:38.195151 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 16:03:46.195140102 +0000 UTC m=+36.489528703 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.207870 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2vkxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aedd049-0029-44f7-869f-4a3ccdce8413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2vkxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.208602 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:38 crc 
kubenswrapper[4874]: I0217 16:03:38.208642 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.208659 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.208682 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.208698 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:38Z","lastTransitionTime":"2026-02-17T16:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.234853 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd07de3-ac86-4e94-81f9-983586d43e3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6205655ec01ba9fb036b13027c9e66ae06974a7e66e08d81e67e68fefed03782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03cdd03d2129e6d94c8faed3b95a896c4dfe25c6b6f7e61d1027ae973c341f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb9f42ee9f217e826bc9595c94485be1259956fab34c45407b8e977d0e516eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7033e40c0b3cd86bccbe24550c8aae3cd925ab3a7577be6d71921494dbb7093\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.255332 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56fa93-1e5d-4786-a935-dd3c1c945e91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:24Z\\\",\\\"message\\\":\\\"W0217 16:03:13.672722 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 16:03:13.673109 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771344193 cert, and key in /tmp/serving-cert-3777882028/serving-signer.crt, /tmp/serving-cert-3777882028/serving-signer.key\\\\nI0217 16:03:13.974653 1 observer_polling.go:159] Starting file observer\\\\nW0217 16:03:13.977062 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 16:03:13.977325 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:13.978495 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3777882028/tls.crt::/tmp/serving-cert-3777882028/tls.key\\\\\\\"\\\\nF0217 16:03:24.664050 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.278757 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1663fc7ac1e70a1b17c76b2a9f613520696b593126b2bb9cfdde5d68e431f511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.311488 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.311716 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.311784 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.311851 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.311930 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:38Z","lastTransitionTime":"2026-02-17T16:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.405439 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 12:08:08.747505008 +0000 UTC Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.415236 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.415309 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.415335 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.415365 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.415390 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:38Z","lastTransitionTime":"2026-02-17T16:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.456695 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.456697 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:38 crc kubenswrapper[4874]: E0217 16:03:38.456886 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:03:38 crc kubenswrapper[4874]: E0217 16:03:38.456958 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.456733 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:03:38 crc kubenswrapper[4874]: E0217 16:03:38.457203 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.519029 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.519140 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.519169 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.519201 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.519223 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:38Z","lastTransitionTime":"2026-02-17T16:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.622501 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.622564 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.622581 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.622604 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.622622 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:38Z","lastTransitionTime":"2026-02-17T16:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.687704 4874 generic.go:334] "Generic (PLEG): container finished" podID="9bcec56b-03b2-401b-8a73-6d62f42ba22c" containerID="fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b" exitCode=0 Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.687896 4874 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.689647 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" event={"ID":"9bcec56b-03b2-401b-8a73-6d62f42ba22c","Type":"ContainerDied","Data":"fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b"} Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.697709 4874 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.708500 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.713969 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1663fc7ac1e70a1b17c76b2a9f613520696b593126b2bb9cfdde5d68e431f511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.725299 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.725358 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.725377 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.725404 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.725428 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:38Z","lastTransitionTime":"2026-02-17T16:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.735555 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd07de3-ac86-4e94-81f9-983586d43e3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6205655ec01ba9fb036b13027c9e66ae06974a7e66e08d81e67e68fefed03782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03cdd03d212
9e6d94c8faed3b95a896c4dfe25c6b6f7e61d1027ae973c341f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb9f42ee9f217e826bc9595c94485be1259956fab34c45407b8e977d0e516eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7033e40c0b3cd86bccbe24550c8aae3cd925ab3a7577be6d71921494dbb7093\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.762848 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56fa93-1e5d-4786-a935-dd3c1c945e91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:24Z\\\",\\\"message\\\":\\\"W0217 16:03:13.672722 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 16:03:13.673109 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771344193 cert, and key in /tmp/serving-cert-3777882028/serving-signer.crt, 
/tmp/serving-cert-3777882028/serving-signer.key\\\\nI0217 16:03:13.974653 1 observer_polling.go:159] Starting file observer\\\\nW0217 16:03:13.977062 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 16:03:13.977325 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:13.978495 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3777882028/tls.crt::/tmp/serving-cert-3777882028/tls.key\\\\\\\"\\\\nF0217 16:03:24.664050 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.786610 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d87243-c32f-4eb1-9049-24409fc6ea39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8c5dd3b54804a06909a56d1b152a702e8adbc38f14f0042b488c9529fc4eb4\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56172a01d8a118c7b8ed0bfcef586738d47583c8b6127b67e6f4aaedeba141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-cccdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.805385 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.820642 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382ecb629c2e95661e2315080b5313ce9572468316149e358ac1410f73774047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.828974 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.829035 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.829053 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.829169 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.829189 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:38Z","lastTransitionTime":"2026-02-17T16:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.853030 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcec56b-03b2-401b-8a73-6d62f42ba22c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hswwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.868438 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7xphw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54846556-797a-4e8d-ab51-aef5343b1fc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebb52d07a411f6900f9fef16186758fa5f9b44ed2c54944bf66220dd839b774d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7xphw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.897782 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7649be38-9f50-4cce-9d16-e7100627eca5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ff3c28ab17a456eb6c403402a76fcc427fe799f05577f1c66a286392e67763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53112886a189833decee24627e6fd183022c19f454a873f86a38fecdd4505f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a37784fd80aacfdc6328736f02030af697430a34939d05550652921ce4dbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16
:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://154bdfc094853c9edfdd65f09ef6767ce1dd6441ee7322b3dba35d115460373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://738bfe704918e39da4af889b57236556ee1f301bfa8327664b5d25601641d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.914541 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.933484 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.933551 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.933569 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.933596 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.933613 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:38Z","lastTransitionTime":"2026-02-17T16:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.934315 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9eb703281270deafd626e5313a045812c23d81fc8adcab91226e656611483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://4e922b26792a90616c81196984c75e5245d1458b951ad5b4cd10d2db99b526bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.963316 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a4777a-2390-401b-86b0-87d298e9f883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982af57ef77a52c65dcbeedfcbf592a4f0f02f8fa5804529428806f63e8a6ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-65qcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4874]: I0217 16:03:38.984273 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.001470 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-j77hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17e6a08f-68c0-4b0a-a396-9dddcc726d37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ecf223725ef2b8a8bcb1d02065b33fbdbcd1bb76b10476594aac228e32321c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbrrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-j77hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.020863 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2vkxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aedd049-0029-44f7-869f-4a3ccdce8413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2vkxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.036573 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.036629 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.036647 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.036687 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.036705 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:39Z","lastTransitionTime":"2026-02-17T16:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.041300 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.054101 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-j77hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17e6a08f-68c0-4b0a-a396-9dddcc726d37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ecf223725ef2b8a8bcb1d02065b33fbdbcd1bb76b10476594aac228e32321c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbrrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-j77hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.071950 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2vkxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aedd049-0029-44f7-869f-4a3ccdce8413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2vkxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.087951 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd07de3-ac86-4e94-81f9-983586d43e3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6205655ec01ba9fb036b13027c9e66ae06974a7e66e08d81e67e68fefed03782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-polic
y-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03cdd03d2129e6d94c8faed3b95a896c4dfe25c6b6f7e61d1027ae973c341f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb9f42ee9f217e826bc9595c94485be1259956fab34c45407b8e977d0e516eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7033e40c0b3cd86bccbe24550c8aae3cd925ab3a7577be6d71921494dbb7093\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.109425 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56fa93-1e5d-4786-a935-dd3c1c945e91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:24Z\\\"
,\\\"message\\\":\\\"W0217 16:03:13.672722 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 16:03:13.673109 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771344193 cert, and key in /tmp/serving-cert-3777882028/serving-signer.crt, /tmp/serving-cert-3777882028/serving-signer.key\\\\nI0217 16:03:13.974653 1 observer_polling.go:159] Starting file observer\\\\nW0217 16:03:13.977062 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 16:03:13.977325 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:13.978495 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3777882028/tls.crt::/tmp/serving-cert-3777882028/tls.key\\\\\\\"\\\\nF0217 16:03:24.664050 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.125616 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1663fc7ac1e70a1b17c76b2a9f613520696b593126b2bb9cfdde5d68e431f511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.139193 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.139228 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.139240 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.139258 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.139272 4874 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:39Z","lastTransitionTime":"2026-02-17T16:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.148948 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcec56b-03b2-401b-8a73-6d62f42ba22c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hswwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.161884 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7xphw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54846556-797a-4e8d-ab51-aef5343b1fc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebb52d07a411f6900f9fef16186758fa5f9b44ed2c54944bf66220dd839b774d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7xphw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.191337 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7649be38-9f50-4cce-9d16-e7100627eca5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ff3c28ab17a456eb6c403402a76fcc427fe799f05577f1c66a286392e67763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53112886a189833decee24627e6fd183022c19f454a873f86a38fecdd4505f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a37784fd80aacfdc6328736f02030af697430a34939d05550652921ce4dbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16
:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://154bdfc094853c9edfdd65f09ef6767ce1dd6441ee7322b3dba35d115460373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://738bfe704918e39da4af889b57236556ee1f301bfa8327664b5d25601641d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.208706 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.224225 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9eb703281270deafd626e5313a045812c23d81fc8adcab91226e656611483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e922b26792a90616c81196984c75e5245d1458b951ad5b4cd10d2db99b526bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.236603 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d87243-c32f-4eb1-9049-24409fc6ea39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8c5dd3b54804a06909a56d1b152a702e8adbc38f14f0042b488c9529fc4eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56172a01d8a118c7b8ed0bfcef586738d47583
c8b6127b67e6f4aaedeba141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cccdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.241307 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.241371 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.241393 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:39 crc 
kubenswrapper[4874]: I0217 16:03:39.241481 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.241502 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:39Z","lastTransitionTime":"2026-02-17T16:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.252475 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.267688 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382ecb629c2e95661e2315080b5313ce9572468316149e358ac1410f73774047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.296814 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a4777a-2390-401b-86b0-87d298e9f883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982af57ef77a52c65dcbeedfcbf592a4f0f02f8fa5804529428806f63e8a6ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-65qcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.344305 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.344360 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.344373 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.344396 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.344410 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:39Z","lastTransitionTime":"2026-02-17T16:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.407153 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 11:39:33.492779077 +0000 UTC Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.448169 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.448225 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.448248 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.448269 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.448282 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:39Z","lastTransitionTime":"2026-02-17T16:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.550779 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.550824 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.550836 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.550856 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.550867 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:39Z","lastTransitionTime":"2026-02-17T16:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.559376 4874 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.654627 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.654690 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.654724 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.654758 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.654776 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:39Z","lastTransitionTime":"2026-02-17T16:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.697892 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" event={"ID":"9bcec56b-03b2-401b-8a73-6d62f42ba22c","Type":"ContainerStarted","Data":"ba16a1032820642fb3fc8c0a00654d5736d760d87ab6c9015d5649da3302cdca"} Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.697988 4874 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.734694 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a4777a-2390-401b-86b0-87d298e9f883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982af57ef77a52c65dcbeedfcbf592a4f0f02f8fa5804529428806f63e8a6ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-65qcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.751614 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.757154 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.757220 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.757237 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.757262 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.757279 4874 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:39Z","lastTransitionTime":"2026-02-17T16:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.768930 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-j77hc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17e6a08f-68c0-4b0a-a396-9dddcc726d37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ecf223725ef2b8a8bcb1d02065b33fbdbcd1bb76b10476594aac228e32321c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbrrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-j77hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.791284 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2vkxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aedd049-0029-44f7-869f-4a3ccdce8413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2vkxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.810667 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd07de3-ac86-4e94-81f9-983586d43e3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6205655ec01ba9fb036b13027c9e66ae06974a7e66e08d81e67e68fefed03782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03cdd03d2129e6d94c8faed3b95a896c4dfe25c6b6f7e61d1027ae973c341f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb9f42ee9f217e826bc9595c94485be1259956fab34c45407b8e977d0e516eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7033e40c0b3cd86bccbe24550c8aae3cd925ab3a7577be6d71921494dbb7093\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.837367 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56fa93-1e5d-4786-a935-dd3c1c945e91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:24Z\\\"
,\\\"message\\\":\\\"W0217 16:03:13.672722 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 16:03:13.673109 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771344193 cert, and key in /tmp/serving-cert-3777882028/serving-signer.crt, /tmp/serving-cert-3777882028/serving-signer.key\\\\nI0217 16:03:13.974653 1 observer_polling.go:159] Starting file observer\\\\nW0217 16:03:13.977062 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 16:03:13.977325 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:13.978495 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3777882028/tls.crt::/tmp/serving-cert-3777882028/tls.key\\\\\\\"\\\\nF0217 16:03:24.664050 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.860101 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1663fc7ac1e70a1b17c76b2a9f613520696b593126b2bb9cfdde5d68e431f511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.860568 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.861181 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.861198 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.861220 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.861234 4874 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:39Z","lastTransitionTime":"2026-02-17T16:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.879491 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.900119 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382ecb629c2e95661e2315080b5313ce9572468316149e358ac1410f73774047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.928735 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcec56b-03b2-401b-8a73-6d62f42ba22c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba16a1032820642fb3fc8c0a00654d5736d760d87ab6c9015d5649da3302cdca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hswwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.948031 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7xphw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54846556-797a-4e8d-ab51-aef5343b1fc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebb52d07a411f6900f9fef16186758fa5f9b44ed2c54944bf66220dd839b774d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7xphw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.964915 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.965270 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.965450 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.965582 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" 
Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.965710 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:39Z","lastTransitionTime":"2026-02-17T16:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:39 crc kubenswrapper[4874]: I0217 16:03:39.999189 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7649be38-9f50-4cce-9d16-e7100627eca5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ff3c28ab17a456eb6c403402a76fcc427fe799f05577f1c66a286392e67763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etc
d\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53112886a189833decee24627e6fd183022c19f454a873f86a38fecdd4505f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a37784fd80aacfdc6328736f02030af697430a34939d05550652921ce4dbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://154bdfc094853c9edfdd65f09ef6767ce1dd6441ee7322b3dba35d115460373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://738bfe704918e39da4af889b57236556ee1f301bfa8327664b5d25601641d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initCont
ainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.021678 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.036651 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9eb703281270deafd626e5313a045812c23d81fc8adcab91226e656611483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e922b26792a90616c81196984c75e5245d1458b951ad5b4cd10d2db99b526bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.049975 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d87243-c32f-4eb1-9049-24409fc6ea39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8c5dd3b54804a06909a56d1b152a702e8adbc38f14f0042b488c9529fc4eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56172a01d8a118c7b8ed0bfcef586738d47583
c8b6127b67e6f4aaedeba141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cccdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.068364 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.068426 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.068443 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:40 crc 
kubenswrapper[4874]: I0217 16:03:40.068470 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.068487 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:40Z","lastTransitionTime":"2026-02-17T16:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.170874 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.170912 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.170923 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.170941 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.170953 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:40Z","lastTransitionTime":"2026-02-17T16:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.273596 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.273667 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.273681 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.273699 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.273713 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:40Z","lastTransitionTime":"2026-02-17T16:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.376289 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.376347 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.376360 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.376382 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.376394 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:40Z","lastTransitionTime":"2026-02-17T16:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.407588 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 11:40:57.688831121 +0000 UTC Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.456725 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.456771 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.456726 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:03:40 crc kubenswrapper[4874]: E0217 16:03:40.456916 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:03:40 crc kubenswrapper[4874]: E0217 16:03:40.457033 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:03:40 crc kubenswrapper[4874]: E0217 16:03:40.457179 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.475957 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.478709 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.478751 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.478768 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.478792 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.478810 4874 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:40Z","lastTransitionTime":"2026-02-17T16:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.490136 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-j77hc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17e6a08f-68c0-4b0a-a396-9dddcc726d37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ecf223725ef2b8a8bcb1d02065b33fbdbcd1bb76b10476594aac228e32321c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbrrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-j77hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.513060 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2vkxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aedd049-0029-44f7-869f-4a3ccdce8413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2vkxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.533316 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd07de3-ac86-4e94-81f9-983586d43e3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6205655ec01ba9fb036b13027c9e66ae06974a7e66e08d81e67e68fefed03782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03cdd03d2129e6d94c8faed3b95a896c4dfe25c6b6f7e61d1027ae973c341f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb9f42ee9f217e826bc9595c94485be1259956fab34c45407b8e977d0e516eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7033e40c0b3cd86bccbe24550c8aae3cd925ab3a7577be6d71921494dbb7093\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.549017 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56fa93-1e5d-4786-a935-dd3c1c945e91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:24Z\\\"
,\\\"message\\\":\\\"W0217 16:03:13.672722 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 16:03:13.673109 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771344193 cert, and key in /tmp/serving-cert-3777882028/serving-signer.crt, /tmp/serving-cert-3777882028/serving-signer.key\\\\nI0217 16:03:13.974653 1 observer_polling.go:159] Starting file observer\\\\nW0217 16:03:13.977062 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 16:03:13.977325 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:13.978495 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3777882028/tls.crt::/tmp/serving-cert-3777882028/tls.key\\\\\\\"\\\\nF0217 16:03:24.664050 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.566453 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1663fc7ac1e70a1b17c76b2a9f613520696b593126b2bb9cfdde5d68e431f511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.582679 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.582718 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.582730 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.582746 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.582758 4874 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:40Z","lastTransitionTime":"2026-02-17T16:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.583324 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcec56b-03b2-401b-8a73-6d62f42ba22c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba16a1032820642fb3fc8c0a00654d5736d760d87ab6c9015d5649da3302cdca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hswwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.596211 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7xphw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54846556-797a-4e8d-ab51-aef5343b1fc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebb52d07a411f6900f9fef16186758fa5f9b44ed2c54944bf66220dd839b774d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7xphw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.615471 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7649be38-9f50-4cce-9d16-e7100627eca5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ff3c28ab17a456eb6c403402a76fcc427fe799f05577f1c66a286392e67763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53112886a189833decee24627e6fd183022c19f454a873f86a38fecdd4505f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a37784fd80aacfdc6328736f02030af697430a34939d05550652921ce4dbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16
:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://154bdfc094853c9edfdd65f09ef6767ce1dd6441ee7322b3dba35d115460373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://738bfe704918e39da4af889b57236556ee1f301bfa8327664b5d25601641d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.633507 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.652580 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9eb703281270deafd626e5313a045812c23d81fc8adcab91226e656611483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e922b26792a90616c81196984c75e5245d1458b951ad5b4cd10d2db99b526bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.669799 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d87243-c32f-4eb1-9049-24409fc6ea39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8c5dd3b54804a06909a56d1b152a702e8adbc38f14f0042b488c9529fc4eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56172a01d8a118c7b8ed0bfcef586738d47583
c8b6127b67e6f4aaedeba141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cccdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.685600 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.685638 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.685648 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:40 crc 
kubenswrapper[4874]: I0217 16:03:40.685664 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.685677 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:40Z","lastTransitionTime":"2026-02-17T16:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.689401 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.705248 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-65qcw_10a4777a-2390-401b-86b0-87d298e9f883/ovnkube-controller/0.log" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.709594 4874 generic.go:334] "Generic (PLEG): container finished" podID="10a4777a-2390-401b-86b0-87d298e9f883" containerID="982af57ef77a52c65dcbeedfcbf592a4f0f02f8fa5804529428806f63e8a6ba9" exitCode=1 Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.709733 
4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" event={"ID":"10a4777a-2390-401b-86b0-87d298e9f883","Type":"ContainerDied","Data":"982af57ef77a52c65dcbeedfcbf592a4f0f02f8fa5804529428806f63e8a6ba9"} Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.710433 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382ecb629c2e95661e2315080b5313ce9572468316149e358ac1410f73774047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.710954 4874 scope.go:117] "RemoveContainer" containerID="982af57ef77a52c65dcbeedfcbf592a4f0f02f8fa5804529428806f63e8a6ba9" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.745804 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a4777a-2390-401b-86b0-87d298e9f883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982af57ef77a52c65dcbeedfcbf592a4f0f02f8fa5804529428806f63e8a6ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-65qcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.775714 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a4777a-2390-401b-86b0-87d298e9f883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982af57ef77a52c65dcbeedfcbf592a4f0f02f8fa5804529428806f63e8a6ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://982af57ef77a52c65dcbeedfcbf592a4f0f02f8fa5804529428806f63e8a6ba9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"message\\\":\\\"versions/factory.go:140\\\\nI0217 16:03:40.149841 6153 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:40.149899 6153 reflector.go:311] Stopping reflector 
*v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:40.149957 6153 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:40.150140 6153 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:40.150362 6153 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:03:40.150663 6153 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:40.151115 6153 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 16:03:40.151140 6153 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 16:03:40.151167 6153 factory.go:656] Stopping watch factory\\\\nI0217 16:03:40.151185 6153 ovnkube.go:599] Stopped ovnkube\\\\nI0217 16:03:40.151214 6153 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b30
9ef0c113a5edd3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-65qcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.788729 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.788759 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.788769 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.788784 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.788794 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:40Z","lastTransitionTime":"2026-02-17T16:03:40Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.794574 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2vkxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aedd049-0029-44f7-869f-4a3ccdce8413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\
\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2vkxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.814348 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.827735 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-j77hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17e6a08f-68c0-4b0a-a396-9dddcc726d37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ecf223725ef2b8a8bcb1d02065b33fbdbcd1bb76b10476594aac228e32321c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbrrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-j77hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.849852 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56fa93-1e5d-4786-a935-dd3c1c945e91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:24Z\\\",\\\"message\\\":\\\"W0217 16:03:13.672722 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 16:03:13.673109 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771344193 cert, and key in /tmp/serving-cert-3777882028/serving-signer.crt, /tmp/serving-cert-3777882028/serving-signer.key\\\\nI0217 16:03:13.974653 1 observer_polling.go:159] Starting file observer\\\\nW0217 16:03:13.977062 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 16:03:13.977325 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:13.978495 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3777882028/tls.crt::/tmp/serving-cert-3777882028/tls.key\\\\\\\"\\\\nF0217 16:03:24.664050 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.869383 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1663fc7ac1e70a1b17c76b2a9f613520696b593126b2bb9cfdde5d68e431f511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.886739 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd07de3-ac86-4e94-81f9-983586d43e3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6205655ec01ba9fb036b13027c9e66ae06974a7e66e08d81e67e68fefed03782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03cdd03d2129e6d94c8faed3b95a896c4dfe25c6b6f7e61d1027ae973c341f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb9f42ee9f217e826bc9595c94485be1259956fab34c45407b8e977d0e516eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7033e40c0b3cd86bccbe24550c8aae3cd925ab3a7577be6d71921494dbb7093\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.891282 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.891326 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.891342 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.891364 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.891380 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:40Z","lastTransitionTime":"2026-02-17T16:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.901192 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9eb703281270deafd626e5313a045812c23d81fc8adcab91226e656611483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e922b26
792a90616c81196984c75e5245d1458b951ad5b4cd10d2db99b526bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.920491 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d87243-c32f-4eb1-9049-24409fc6ea39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8c5dd3b54804a06909a56d1b152a702e8adbc38f14f0042b488c9529fc4eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56172a01d8a118c7b8ed0bfcef586738d47583
c8b6127b67e6f4aaedeba141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cccdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.936294 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.950045 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382ecb629c2e95661e2315080b5313ce9572468316149e358ac1410f73774047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.967040 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcec56b-03b2-401b-8a73-6d62f42ba22c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba16a1032820642fb3fc8c0a00654d5736d760d87ab6c9015d5649da3302cdca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hswwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.984420 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7xphw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54846556-797a-4e8d-ab51-aef5343b1fc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebb52d07a411f6900f9fef16186758fa5f9b44ed2c54944bf66220dd839b774d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7xphw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.993742 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.993800 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.993815 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.993838 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" 
Feb 17 16:03:40 crc kubenswrapper[4874]: I0217 16:03:40.993855 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:40Z","lastTransitionTime":"2026-02-17T16:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.011868 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7649be38-9f50-4cce-9d16-e7100627eca5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ff3c28ab17a456eb6c403402a76fcc427fe799f05577f1c66a286392e67763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etc
d\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53112886a189833decee24627e6fd183022c19f454a873f86a38fecdd4505f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a37784fd80aacfdc6328736f02030af697430a34939d05550652921ce4dbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://154bdfc094853c9edfdd65f09ef6767ce1dd6441ee7322b3dba35d115460373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://738bfe704918e39da4af889b57236556ee1f301bfa8327664b5d25601641d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initCont
ainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.027574 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.096451 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.096516 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.096534 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.096564 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.096583 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:41Z","lastTransitionTime":"2026-02-17T16:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.199482 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.199559 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.199574 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.199601 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.199617 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:41Z","lastTransitionTime":"2026-02-17T16:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.302240 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.302286 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.302300 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.302316 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.302327 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:41Z","lastTransitionTime":"2026-02-17T16:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.405434 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.405511 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.405528 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.405552 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.405566 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:41Z","lastTransitionTime":"2026-02-17T16:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.408467 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 12:23:11.934655188 +0000 UTC Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.509200 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.509272 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.509296 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.509330 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.509366 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:41Z","lastTransitionTime":"2026-02-17T16:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.612583 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.612629 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.612642 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.612658 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.612668 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:41Z","lastTransitionTime":"2026-02-17T16:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.719874 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.719938 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.719955 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.719979 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.719997 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:41Z","lastTransitionTime":"2026-02-17T16:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.721846 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-65qcw_10a4777a-2390-401b-86b0-87d298e9f883/ovnkube-controller/0.log" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.725594 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" event={"ID":"10a4777a-2390-401b-86b0-87d298e9f883","Type":"ContainerStarted","Data":"0003e5b17ab3dcbee6a37d4685d1271400d8a09a01fd4ab431654fc597826eb2"} Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.725775 4874 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.744964 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d87243-c32f-4eb1-9049-24409fc6ea39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8c5dd3b54804a06909a56d1b152a702e8adbc38f14f004
2b488c9529fc4eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56172a01d8a118c7b8ed0bfcef586738d47583c8b6127b67e6f4aaedeba141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z
\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cccdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.762783 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.781419 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382ecb629c2e95661e2315080b5313ce9572468316149e358ac1410f73774047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:03:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.807251 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcec56b-03b2-401b-8a73-6d62f42ba22c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba16a1032820642fb3fc8c0a00654d5736d760d87ab6c9015d5649da3302cdca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hswwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.822523 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.822599 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.822619 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.822644 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.822661 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:41Z","lastTransitionTime":"2026-02-17T16:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.828961 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7xphw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54846556-797a-4e8d-ab51-aef5343b1fc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebb52d07a411f6900f9fef16186758fa5f9b44ed2c54944bf66220dd839b774d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7xphw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.853981 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7649be38-9f50-4cce-9d16-e7100627eca5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ff3c28ab17a456eb6c403402a76fcc427fe799f05577f1c66a286392e67763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53112886a189833decee24627e6fd183022c19f454a873f86a38fecdd4505f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a37784fd80aacfdc6328736f02030af697430a34939d05550652921ce4dbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://154bdfc094853c9edfdd65f09ef6767ce1dd6441ee7322b3dba35d115460373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://738bfe704918e39da4af889b57236556ee1f301bfa8327664b5d25601641d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://74985
9c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.874478 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.892275 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9eb703281270deafd626e5313a045812c23d81fc8adcab91226e656611483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e922b26792a90616c81196984c75e5245d1458b951ad5b4cd10d2db99b526bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.916177 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a4777a-2390-401b-86b0-87d298e9f883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0003e5b17ab3dcbee6a37d4685d1271400d8a09a01fd4ab431654fc597826eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://982af57ef77a52c65dcbeedfcbf592a4f0f02f8fa5804529428806f63e8a6ba9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"message\\\":\\\"versions/factory.go:140\\\\nI0217 16:03:40.149841 6153 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:40.149899 6153 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:40.149957 6153 reflector.go:311] Stopping 
reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:40.150140 6153 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:40.150362 6153 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:03:40.150663 6153 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:40.151115 6153 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 16:03:40.151140 6153 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 16:03:40.151167 6153 factory.go:656] Stopping watch factory\\\\nI0217 16:03:40.151185 6153 ovnkube.go:599] Stopped ovnkube\\\\nI0217 16:03:40.151214 6153 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/
\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-65qcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.925934 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.925983 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.926000 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.926027 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.926044 4874 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:41Z","lastTransitionTime":"2026-02-17T16:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.936522 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.952425 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-j77hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17e6a08f-68c0-4b0a-a396-9dddcc726d37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ecf223725ef2b8a8bcb1d02065b33fbdbcd1bb76b10476594aac228e32321c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbrrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-j77hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.972673 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2vkxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aedd049-0029-44f7-869f-4a3ccdce8413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2vkxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:41 crc kubenswrapper[4874]: I0217 16:03:41.993634 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1663fc7ac1e70a1b17c76b2a9f613520696b593126b2bb9cfdde5d68e431f511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.011650 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd07de3-ac86-4e94-81f9-983586d43e3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6205655ec01ba9fb036b13027c9e66ae06974a7e66e08d81e67e68fefed03782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03cdd03d2129e6d94c8faed3b95a896c4dfe25c6b6f7e61d1027ae973c341f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb9f42ee9f217e826bc9595c94485be1259956fab34c45407b8e977d0e516eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7033e40c0b3cd86bccbe24550c8aae3cd925ab3a7577be6d71921494dbb7093\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.029664 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.029734 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.029752 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.029776 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.029793 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:42Z","lastTransitionTime":"2026-02-17T16:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.037208 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56fa93-1e5d-4786-a935-dd3c1c945e91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountP
ath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube
-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:24Z\\\",\\\"message\\\":\\\"W0217 16:03:13.672722 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 16:03:13.673109 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771344193 cert, and key in /tmp/serving-cert-3777882028/serving-signer.crt, /tmp/serving-cert-3777882028/serving-signer.key\\\\nI0217 16:03:13.974653 1 observer_polling.go:159] Starting file observer\\\\nW0217 16:03:13.977062 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 16:03:13.977325 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:13.978495 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3777882028/tls.crt::/tmp/serving-cert-3777882028/tls.key\\\\\\\"\\\\nF0217 16:03:24.664050 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.133321 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.133367 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.133385 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.133407 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.133424 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:42Z","lastTransitionTime":"2026-02-17T16:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.236544 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.236622 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.236641 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.236671 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.236693 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:42Z","lastTransitionTime":"2026-02-17T16:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.340036 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.340140 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.340158 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.340183 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.340200 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:42Z","lastTransitionTime":"2026-02-17T16:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.346432 4874 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.408978 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 14:35:00.452357347 +0000 UTC Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.443601 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.443667 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.443684 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.443718 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.443736 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:42Z","lastTransitionTime":"2026-02-17T16:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.457011 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.457105 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.457223 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:42 crc kubenswrapper[4874]: E0217 16:03:42.457219 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:03:42 crc kubenswrapper[4874]: E0217 16:03:42.457319 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:03:42 crc kubenswrapper[4874]: E0217 16:03:42.457426 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.545861 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.545934 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.545955 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.545980 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.545999 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:42Z","lastTransitionTime":"2026-02-17T16:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.648754 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.649274 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.649435 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.649582 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.649741 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:42Z","lastTransitionTime":"2026-02-17T16:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.737272 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-65qcw_10a4777a-2390-401b-86b0-87d298e9f883/ovnkube-controller/1.log" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.738785 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-65qcw_10a4777a-2390-401b-86b0-87d298e9f883/ovnkube-controller/0.log" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.742687 4874 generic.go:334] "Generic (PLEG): container finished" podID="10a4777a-2390-401b-86b0-87d298e9f883" containerID="0003e5b17ab3dcbee6a37d4685d1271400d8a09a01fd4ab431654fc597826eb2" exitCode=1 Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.742755 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" event={"ID":"10a4777a-2390-401b-86b0-87d298e9f883","Type":"ContainerDied","Data":"0003e5b17ab3dcbee6a37d4685d1271400d8a09a01fd4ab431654fc597826eb2"} Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.743263 4874 scope.go:117] "RemoveContainer" containerID="982af57ef77a52c65dcbeedfcbf592a4f0f02f8fa5804529428806f63e8a6ba9" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.745855 4874 scope.go:117] "RemoveContainer" containerID="0003e5b17ab3dcbee6a37d4685d1271400d8a09a01fd4ab431654fc597826eb2" Feb 17 16:03:42 crc kubenswrapper[4874]: E0217 16:03:42.747230 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-65qcw_openshift-ovn-kubernetes(10a4777a-2390-401b-86b0-87d298e9f883)\"" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" podUID="10a4777a-2390-401b-86b0-87d298e9f883" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.751829 4874 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.751873 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.751886 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.751905 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.751918 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:42Z","lastTransitionTime":"2026-02-17T16:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.777424 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a4777a-2390-401b-86b0-87d298e9f883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0003e5b17ab3dcbee6a37d4685d1271400d8a09a01fd4ab431654fc597826eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://982af57ef77a52c65dcbeedfcbf592a4f0f02f8fa5804529428806f63e8a6ba9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"message\\\":\\\"versions/factory.go:140\\\\nI0217 16:03:40.149841 6153 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:40.149899 6153 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:40.149957 6153 reflector.go:311] Stopping 
reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:40.150140 6153 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:40.150362 6153 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:03:40.150663 6153 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:40.151115 6153 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 16:03:40.151140 6153 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 16:03:40.151167 6153 factory.go:656] Stopping watch factory\\\\nI0217 16:03:40.151185 6153 ovnkube.go:599] Stopped ovnkube\\\\nI0217 16:03:40.151214 6153 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0003e5b17ab3dcbee6a37d4685d1271400d8a09a01fd4ab431654fc597826eb2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:03:41Z\\\",\\\"message\\\":\\\"Family:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 16:03:41.827604 6302 transact.go:42] Configuring OVN: [{Op:update 
Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/packageserver-service]} name:Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.153:5443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 16:03:41.827679 6302 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared info\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin
\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mo
untPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-65qcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.781345 4874 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dr22"] Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.782045 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dr22" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.788036 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.788340 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.805255 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.822763 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-j77hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17e6a08f-68c0-4b0a-a396-9dddcc726d37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ecf223725ef2b8a8bcb1d02065b33fbdbcd1bb76b10476594aac228e32321c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbrrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-j77hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.843033 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2vkxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aedd049-0029-44f7-869f-4a3ccdce8413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2vkxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.844615 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/30e2d430-8c4b-4246-971e-6ba0ed8a0de9-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5dr22\" (UID: \"30e2d430-8c4b-4246-971e-6ba0ed8a0de9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dr22" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.844697 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kzt5\" (UniqueName: \"kubernetes.io/projected/30e2d430-8c4b-4246-971e-6ba0ed8a0de9-kube-api-access-5kzt5\") pod \"ovnkube-control-plane-749d76644c-5dr22\" (UID: \"30e2d430-8c4b-4246-971e-6ba0ed8a0de9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dr22" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.844761 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/30e2d430-8c4b-4246-971e-6ba0ed8a0de9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5dr22\" (UID: \"30e2d430-8c4b-4246-971e-6ba0ed8a0de9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dr22" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.844801 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/30e2d430-8c4b-4246-971e-6ba0ed8a0de9-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5dr22\" (UID: \"30e2d430-8c4b-4246-971e-6ba0ed8a0de9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dr22" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.854732 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.855044 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.855231 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.855378 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.855513 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:42Z","lastTransitionTime":"2026-02-17T16:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.867249 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd07de3-ac86-4e94-81f9-983586d43e3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6205655ec01ba9fb036b13027c9e66ae06974a7e66e08d81e67e68fefed03782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03cdd03d212
9e6d94c8faed3b95a896c4dfe25c6b6f7e61d1027ae973c341f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb9f42ee9f217e826bc9595c94485be1259956fab34c45407b8e977d0e516eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7033e40c0b3cd86bccbe24550c8aae3cd925ab3a7577be6d71921494dbb7093\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.892709 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56fa93-1e5d-4786-a935-dd3c1c945e91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:24Z\\\"
,\\\"message\\\":\\\"W0217 16:03:13.672722 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 16:03:13.673109 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771344193 cert, and key in /tmp/serving-cert-3777882028/serving-signer.crt, /tmp/serving-cert-3777882028/serving-signer.key\\\\nI0217 16:03:13.974653 1 observer_polling.go:159] Starting file observer\\\\nW0217 16:03:13.977062 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 16:03:13.977325 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:13.978495 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3777882028/tls.crt::/tmp/serving-cert-3777882028/tls.key\\\\\\\"\\\\nF0217 16:03:24.664050 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.913915 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1663fc7ac1e70a1b17c76b2a9f613520696b593126b2bb9cfdde5d68e431f511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.930580 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.946623 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/30e2d430-8c4b-4246-971e-6ba0ed8a0de9-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5dr22\" (UID: \"30e2d430-8c4b-4246-971e-6ba0ed8a0de9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dr22" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.946692 4874 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-5kzt5\" (UniqueName: \"kubernetes.io/projected/30e2d430-8c4b-4246-971e-6ba0ed8a0de9-kube-api-access-5kzt5\") pod \"ovnkube-control-plane-749d76644c-5dr22\" (UID: \"30e2d430-8c4b-4246-971e-6ba0ed8a0de9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dr22" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.946737 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/30e2d430-8c4b-4246-971e-6ba0ed8a0de9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5dr22\" (UID: \"30e2d430-8c4b-4246-971e-6ba0ed8a0de9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dr22" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.946775 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/30e2d430-8c4b-4246-971e-6ba0ed8a0de9-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5dr22\" (UID: \"30e2d430-8c4b-4246-971e-6ba0ed8a0de9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dr22" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.946861 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382ecb629c2e95661e2315080b5313ce9572468316149e358ac1410f73774047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:03:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.947893 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/30e2d430-8c4b-4246-971e-6ba0ed8a0de9-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5dr22\" (UID: \"30e2d430-8c4b-4246-971e-6ba0ed8a0de9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dr22" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.948253 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/30e2d430-8c4b-4246-971e-6ba0ed8a0de9-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5dr22\" (UID: \"30e2d430-8c4b-4246-971e-6ba0ed8a0de9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dr22" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.959720 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.960121 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/30e2d430-8c4b-4246-971e-6ba0ed8a0de9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5dr22\" (UID: \"30e2d430-8c4b-4246-971e-6ba0ed8a0de9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dr22" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.960302 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.960333 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.960365 4874 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.960388 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:42Z","lastTransitionTime":"2026-02-17T16:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.973955 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcec56b-03b2-401b-8a73-6d62f42ba22c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba16a1032820642fb3fc8c0a00654d5736d760d87ab6c9015d5649da3302cdca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"c
ri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"exitCode\\\":0,\\\"fin
ishedAt\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hswwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.974592 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kzt5\" (UniqueName: \"kubernetes.io/projected/30e2d430-8c4b-4246-971e-6ba0ed8a0de9-kube-api-access-5kzt5\") pod \"ovnkube-control-plane-749d76644c-5dr22\" (UID: \"30e2d430-8c4b-4246-971e-6ba0ed8a0de9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dr22" Feb 17 16:03:42 crc kubenswrapper[4874]: I0217 16:03:42.991479 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7xphw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54846556-797a-4e8d-ab51-aef5343b1fc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebb52d07a411f6900f9fef16186758fa5f9b44ed2c54944bf66220dd839b774d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7xphw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.024963 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7649be38-9f50-4cce-9d16-e7100627eca5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ff3c28ab17a456eb6c403402a76fcc427fe799f05577f1c66a286392e67763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53112886a189833decee24627e6fd183022c19f454a873f86a38fecdd4505f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a37784fd80aacfdc6328736f02030af697430a34939d05550652921ce4dbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16
:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://154bdfc094853c9edfdd65f09ef6767ce1dd6441ee7322b3dba35d115460373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://738bfe704918e39da4af889b57236556ee1f301bfa8327664b5d25601641d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.046664 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.063716 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.063787 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.063811 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.063845 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.063872 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:43Z","lastTransitionTime":"2026-02-17T16:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.072144 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9eb703281270deafd626e5313a045812c23d81fc8adcab91226e656611483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://4e922b26792a90616c81196984c75e5245d1458b951ad5b4cd10d2db99b526bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.089448 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d87243-c32f-4eb1-9049-24409fc6ea39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8c5dd3b54804a06909a56d1b152a702e8adbc38f14f0042b488c9529fc4eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56172a01d8a118c7b8ed0bfcef586738d47583
c8b6127b67e6f4aaedeba141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cccdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.109293 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.109674 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dr22" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.124601 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-j77hc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17e6a08f-68c0-4b0a-a396-9dddcc726d37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ecf223725ef2b8a8bcb1d02065b33fbdbcd1bb76b10476594aac228e32321c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacc
ount\\\",\\\"name\\\":\\\"kube-api-access-lbrrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-j77hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4874]: W0217 16:03:43.129493 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30e2d430_8c4b_4246_971e_6ba0ed8a0de9.slice/crio-9a372fc33e9bb463987969886e18a3ed6c43b56fb2f33d7dca15fe90de21f9c4 WatchSource:0}: Error finding container 9a372fc33e9bb463987969886e18a3ed6c43b56fb2f33d7dca15fe90de21f9c4: Status 404 returned error can't find the container with id 9a372fc33e9bb463987969886e18a3ed6c43b56fb2f33d7dca15fe90de21f9c4 Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.144667 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2vkxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aedd049-0029-44f7-869f-4a3ccdce8413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2vkxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.165866 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd07de3-ac86-4e94-81f9-983586d43e3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6205655ec01ba9fb036b13027c9e66ae06974a7e66e08d81e67e68fefed03782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03cdd03d2129e6d94c8faed3b95a896c4dfe25c6b6f7e61d1027ae973c341f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb9f42ee9f217e826bc9595c94485be1259956fab34c45407b8e977d0e516eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7033e40c0b3cd86bccbe24550c8aae3cd925ab3a7577be6d71921494dbb7093\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.167315 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.167358 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.167369 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.167385 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.167398 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:43Z","lastTransitionTime":"2026-02-17T16:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.189240 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56fa93-1e5d-4786-a935-dd3c1c945e91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:24Z\\\",\\\"message\\\":\\\"W0217 16:03:13.672722 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 16:03:13.673109 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771344193 cert, and key in /tmp/serving-cert-3777882028/serving-signer.crt, /tmp/serving-cert-3777882028/serving-signer.key\\\\nI0217 16:03:13.974653 1 observer_polling.go:159] Starting file observer\\\\nW0217 16:03:13.977062 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 16:03:13.977325 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:13.978495 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3777882028/tls.crt::/tmp/serving-cert-3777882028/tls.key\\\\\\\"\\\\nF0217 16:03:24.664050 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.214041 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1663fc7ac1e70a1b17c76b2a9f613520696b593126b2bb9cfdde5d68e431f511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.234957 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.254723 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382ecb629c2e95661e2315080b5313ce9572468316149e358ac1410f73774047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.269612 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.269664 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.269681 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.269706 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.269723 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:43Z","lastTransitionTime":"2026-02-17T16:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.279146 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcec56b-03b2-401b-8a73-6d62f42ba22c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba16a1032820642fb3fc8c0a00654d5736d760d87ab6c9015d5649da3302cdca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hswwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.295646 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7xphw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54846556-797a-4e8d-ab51-aef5343b1fc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebb52d07a411f6900f9fef16186758fa5f9b44ed2c54944bf66220dd839b774d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7xphw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.330459 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7649be38-9f50-4cce-9d16-e7100627eca5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ff3c28ab17a456eb6c403402a76fcc427fe799f05577f1c66a286392e67763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53112886a189833decee24627e6fd183022c19f454a873f86a38fecdd4505f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a37784fd80aacfdc6328736f02030af697430a34939d05550652921ce4dbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://154bdfc094853c9edfdd65f09ef6767ce1dd6441ee7322b3dba35d115460373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://738bfe704918e39da4af889b57236556ee1f301bfa8327664b5d25601641d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.350361 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.369473 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9eb703281270deafd626e5313a045812c23d81fc8adcab91226e656611483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e922b26792a90616c81196984c75e5245d1458b951ad5b4cd10d2db99b526bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.373285 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.373356 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.373378 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.373405 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.373423 4874 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:43Z","lastTransitionTime":"2026-02-17T16:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.387994 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d87243-c32f-4eb1-9049-24409fc6ea39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8c5dd3b54804a06909a56d1b152a702e8adbc38f14f0042b488c9529fc4eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56172a01d8a118c7b8ed0bfcef586738d47583c8b6127b67e6f4aaedeba141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cccdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.409926 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 04:35:17.250559913 +0000 UTC Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.419590 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a4777a-2390-401b-86b0-87d298e9f883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0003e5b17ab3dcbee6a37d4685d1271400d8a09a01fd4ab431654fc597826eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://982af57ef77a52c65dcbeedfcbf592a4f0f02f8fa5804529428806f63e8a6ba9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"message\\\":\\\"versions/factory.go:140\\\\nI0217 16:03:40.149841 6153 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:40.149899 6153 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:40.149957 6153 reflector.go:311] Stopping 
reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:40.150140 6153 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:40.150362 6153 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:03:40.150663 6153 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:40.151115 6153 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 16:03:40.151140 6153 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 16:03:40.151167 6153 factory.go:656] Stopping watch factory\\\\nI0217 16:03:40.151185 6153 ovnkube.go:599] Stopped ovnkube\\\\nI0217 16:03:40.151214 6153 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0003e5b17ab3dcbee6a37d4685d1271400d8a09a01fd4ab431654fc597826eb2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:03:41Z\\\",\\\"message\\\":\\\"Family:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 16:03:41.827604 6302 transact.go:42] Configuring OVN: [{Op:update 
Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/packageserver-service]} name:Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.153:5443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 16:03:41.827679 6302 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared info\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin
\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mo
untPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-65qcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.438273 4874 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dr22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30e2d430-8c4b-4246-971e-6ba0ed8a0de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kzt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kzt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dr22\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.475987 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.476050 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.476067 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.476118 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.476137 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:43Z","lastTransitionTime":"2026-02-17T16:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.580477 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.580539 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.580556 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.580580 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.580602 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:43Z","lastTransitionTime":"2026-02-17T16:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.684421 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.684490 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.684513 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.684541 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.684562 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:43Z","lastTransitionTime":"2026-02-17T16:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.749168 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-65qcw_10a4777a-2390-401b-86b0-87d298e9f883/ovnkube-controller/1.log" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.754864 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dr22" event={"ID":"30e2d430-8c4b-4246-971e-6ba0ed8a0de9","Type":"ContainerStarted","Data":"53a0a815a73e3091b5d95ae0c335f655f1bc92c59537ea72d499f28c84f175c3"} Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.754907 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dr22" event={"ID":"30e2d430-8c4b-4246-971e-6ba0ed8a0de9","Type":"ContainerStarted","Data":"3b2f66644c63a0fa78e3c25736c83b6acb0d652342dc68bd9045ccc8f0b102e4"} Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.754921 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dr22" event={"ID":"30e2d430-8c4b-4246-971e-6ba0ed8a0de9","Type":"ContainerStarted","Data":"9a372fc33e9bb463987969886e18a3ed6c43b56fb2f33d7dca15fe90de21f9c4"} Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.780239 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56fa93-1e5d-4786-a935-dd3c1c945e91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:24Z\\\"
,\\\"message\\\":\\\"W0217 16:03:13.672722 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 16:03:13.673109 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771344193 cert, and key in /tmp/serving-cert-3777882028/serving-signer.crt, /tmp/serving-cert-3777882028/serving-signer.key\\\\nI0217 16:03:13.974653 1 observer_polling.go:159] Starting file observer\\\\nW0217 16:03:13.977062 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 16:03:13.977325 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:13.978495 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3777882028/tls.crt::/tmp/serving-cert-3777882028/tls.key\\\\\\\"\\\\nF0217 16:03:24.664050 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.787109 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.787151 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.787162 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.787179 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.787210 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:43Z","lastTransitionTime":"2026-02-17T16:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.798377 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1663fc7ac1e70a1b17c76b2a9f613520696b593126b2bb9cfdde5d68e431f511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.815406 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd07de3-ac86-4e94-81f9-983586d43e3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6205655ec01ba9fb036b13027c9e66ae06974a7e66e08d81e67e68fefed03782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03cdd03d2129e6d94c8faed3b95a896c4dfe25c6b6f7e61d1027ae973c341f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb9f42ee9f217e826bc9595c94485be1259956fab34c45407b8e977d0e516eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\
\\":\\\"cri-o://f7033e40c0b3cd86bccbe24550c8aae3cd925ab3a7577be6d71921494dbb7093\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.834466 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.854091 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9eb703281270deafd626e5313a045812c23d81fc8adcab91226e656611483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e922b26792a90616c81196984c75e5245d1458b951ad5b4cd10d2db99b526bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.872271 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d87243-c32f-4eb1-9049-24409fc6ea39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8c5dd3b54804a06909a56d1b152a702e8adbc38f14f0042b488c9529fc4eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56172a01d8a118c7b8ed0bfcef586738d47583
c8b6127b67e6f4aaedeba141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cccdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.890789 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.890848 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.890864 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:43 crc 
kubenswrapper[4874]: I0217 16:03:43.890889 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.890908 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:43Z","lastTransitionTime":"2026-02-17T16:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.891382 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.910670 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382ecb629c2e95661e2315080b5313ce9572468316149e358ac1410f73774047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.933547 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcec56b-03b2-401b-8a73-6d62f42ba22c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba16a1032820642fb3fc8c0a00654d5736d760d87ab6c9015d5649da3302cdca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hswwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.948375 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7xphw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54846556-797a-4e8d-ab51-aef5343b1fc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebb52d07a411f6900f9fef16186758fa5f9b44ed2c54944bf66220dd839b774d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7xphw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.980590 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7649be38-9f50-4cce-9d16-e7100627eca5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ff3c28ab17a456eb6c403402a76fcc427fe799f05577f1c66a286392e67763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53112886a189833decee24627e6fd183022c19f454a873f86a38fecdd4505f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a37784fd80aacfdc6328736f02030af697430a34939d05550652921ce4dbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://154bdfc094853c9edfdd65f09ef6767ce1dd6441ee7322b3dba35d115460373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://738bfe704918e39da4af889b57236556ee1f301bfa8327664b5d25601641d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.993640 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.993699 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.993717 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.993745 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:43 crc kubenswrapper[4874]: I0217 16:03:43.993763 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:43Z","lastTransitionTime":"2026-02-17T16:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.009161 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a4777a-2390-401b-86b0-87d298e9f883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0003e5b17ab3dcbee6a37d4685d1271400d8a09a01fd4ab431654fc597826eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://982af57ef77a52c65dcbeedfcbf592a4f0f02f8fa5804529428806f63e8a6ba9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"message\\\":\\\"versions/factory.go:140\\\\nI0217 16:03:40.149841 6153 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:40.149899 6153 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:40.149957 6153 reflector.go:311] Stopping 
reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:40.150140 6153 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:40.150362 6153 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:03:40.150663 6153 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:40.151115 6153 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 16:03:40.151140 6153 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 16:03:40.151167 6153 factory.go:656] Stopping watch factory\\\\nI0217 16:03:40.151185 6153 ovnkube.go:599] Stopped ovnkube\\\\nI0217 16:03:40.151214 6153 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0003e5b17ab3dcbee6a37d4685d1271400d8a09a01fd4ab431654fc597826eb2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:03:41Z\\\",\\\"message\\\":\\\"Family:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 16:03:41.827604 6302 transact.go:42] Configuring OVN: [{Op:update 
Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/packageserver-service]} name:Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.153:5443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 16:03:41.827679 6302 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared info\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin
\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mo
untPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-65qcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.026165 4874 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dr22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30e2d430-8c4b-4246-971e-6ba0ed8a0de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2f66644c63a0fa78e3c25736c83b6acb0d652342dc68bd9045ccc8f0b102e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-5kzt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a0a815a73e3091b5d95ae0c335f655f1bc92c59537ea72d499f28c84f175c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kzt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dr22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.040835 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-j77hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17e6a08f-68c0-4b0a-a396-9dddcc726d37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ecf223725ef2b8a8bcb1d02065b33fbdbcd1bb76b10476594aac228e32321c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbrrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-j77hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.060462 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2vkxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aedd049-0029-44f7-869f-4a3ccdce8413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2vkxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.079468 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.096700 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.096781 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.096797 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.096823 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.096841 4874 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:44Z","lastTransitionTime":"2026-02-17T16:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.200008 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.200059 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.200108 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.200132 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.200150 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:44Z","lastTransitionTime":"2026-02-17T16:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.302503 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.302565 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.302589 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.302619 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.302642 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:44Z","lastTransitionTime":"2026-02-17T16:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.406868 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.406948 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.406971 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.407003 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.407026 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:44Z","lastTransitionTime":"2026-02-17T16:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.411133 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 14:07:41.23116145 +0000 UTC Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.456921 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.456987 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.457001 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:44 crc kubenswrapper[4874]: E0217 16:03:44.457143 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:03:44 crc kubenswrapper[4874]: E0217 16:03:44.457387 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:03:44 crc kubenswrapper[4874]: E0217 16:03:44.457605 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.509624 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.509705 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.509724 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.509748 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.509768 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:44Z","lastTransitionTime":"2026-02-17T16:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.612514 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.612573 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.612590 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.612612 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.612631 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:44Z","lastTransitionTime":"2026-02-17T16:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.715750 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.715816 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.715841 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.715868 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.715884 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:44Z","lastTransitionTime":"2026-02-17T16:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.751497 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-pm48m"] Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.752386 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:03:44 crc kubenswrapper[4874]: E0217 16:03:44.752509 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.790005 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a4777a-2390-401b-86b0-87d298e9f883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0003e5b17ab3dcbee6a37d4685d1271400d8a09a01fd4ab431654fc597826eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://982af57ef77a52c65dcbeedfcbf592a4f0f02f8fa5804529428806f63e8a6ba9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"message\\\":\\\"versions/factory.go:140\\\\nI0217 16:03:40.149841 6153 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:40.149899 6153 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:40.149957 6153 reflector.go:311] Stopping 
reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:40.150140 6153 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:40.150362 6153 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:03:40.150663 6153 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:40.151115 6153 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 16:03:40.151140 6153 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 16:03:40.151167 6153 factory.go:656] Stopping watch factory\\\\nI0217 16:03:40.151185 6153 ovnkube.go:599] Stopped ovnkube\\\\nI0217 16:03:40.151214 6153 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0003e5b17ab3dcbee6a37d4685d1271400d8a09a01fd4ab431654fc597826eb2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:03:41Z\\\",\\\"message\\\":\\\"Family:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 16:03:41.827604 6302 transact.go:42] Configuring OVN: [{Op:update 
Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/packageserver-service]} name:Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.153:5443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 16:03:41.827679 6302 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared info\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin
\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mo
untPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-65qcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.807195 4874 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dr22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30e2d430-8c4b-4246-971e-6ba0ed8a0de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2f66644c63a0fa78e3c25736c83b6acb0d652342dc68bd9045ccc8f0b102e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-5kzt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a0a815a73e3091b5d95ae0c335f655f1bc92c59537ea72d499f28c84f175c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kzt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dr22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.818781 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.818833 4874 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.818850 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.818872 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.818889 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:44Z","lastTransitionTime":"2026-02-17T16:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.826599 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.841853 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-j77hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17e6a08f-68c0-4b0a-a396-9dddcc726d37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ecf223725ef2b8a8bcb1d02065b33fbdbcd1bb76b10476594aac228e32321c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbrrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-j77hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.861469 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2vkxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aedd049-0029-44f7-869f-4a3ccdce8413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2vkxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.866790 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktn2z\" (UniqueName: \"kubernetes.io/projected/672da34f-1e37-4e2c-b467-b5ee40c4a31b-kube-api-access-ktn2z\") pod \"network-metrics-daemon-pm48m\" (UID: \"672da34f-1e37-4e2c-b467-b5ee40c4a31b\") " pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.866895 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/672da34f-1e37-4e2c-b467-b5ee40c4a31b-metrics-certs\") pod \"network-metrics-daemon-pm48m\" (UID: \"672da34f-1e37-4e2c-b467-b5ee40c4a31b\") " pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.881574 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd07de3-ac86-4e94-81f9-983586d43e3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6205655ec01ba9fb036b13027c9e66ae06974a7e66e08d81e67e68fefed03782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03cdd03d2129e6d94c8faed3b95a896c4dfe25c6b6f7e61d1027ae973c341f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb9f42ee9f217e826bc9595c94485be1259956fab34c45407b8e977d0e516eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7033e40c0b3cd86bccbe24550c8aae3cd925ab3a7577be6d71921494dbb7093\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.908117 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56fa93-1e5d-4786-a935-dd3c1c945e91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:24Z\\\"
,\\\"message\\\":\\\"W0217 16:03:13.672722 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 16:03:13.673109 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771344193 cert, and key in /tmp/serving-cert-3777882028/serving-signer.crt, /tmp/serving-cert-3777882028/serving-signer.key\\\\nI0217 16:03:13.974653 1 observer_polling.go:159] Starting file observer\\\\nW0217 16:03:13.977062 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 16:03:13.977325 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:13.978495 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3777882028/tls.crt::/tmp/serving-cert-3777882028/tls.key\\\\\\\"\\\\nF0217 16:03:24.664050 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.922412 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.922456 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.922469 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.922491 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.922505 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:44Z","lastTransitionTime":"2026-02-17T16:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.928696 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1663fc7ac1e70a1b17c76b2a9f613520696b593126b2bb9cfdde5d68e431f511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.952142 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcec56b-03b2-401b-8a73-6d62f42ba22c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba16a1032820642fb3fc8c0a00654d5736d760d87ab6c9015d5649da3302cdca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-a
dditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df3
12ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hswwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.967550 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktn2z\" (UniqueName: \"kubernetes.io/projected/672da34f-1e37-4e2c-b467-b5ee40c4a31b-kube-api-access-ktn2z\") pod \"network-metrics-daemon-pm48m\" (UID: \"672da34f-1e37-4e2c-b467-b5ee40c4a31b\") " pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.967651 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/672da34f-1e37-4e2c-b467-b5ee40c4a31b-metrics-certs\") pod \"network-metrics-daemon-pm48m\" (UID: \"672da34f-1e37-4e2c-b467-b5ee40c4a31b\") " pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:03:44 crc kubenswrapper[4874]: E0217 16:03:44.967838 4874 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 16:03:44 crc kubenswrapper[4874]: E0217 16:03:44.967913 4874 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/672da34f-1e37-4e2c-b467-b5ee40c4a31b-metrics-certs podName:672da34f-1e37-4e2c-b467-b5ee40c4a31b nodeName:}" failed. No retries permitted until 2026-02-17 16:03:45.467891484 +0000 UTC m=+35.762280075 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/672da34f-1e37-4e2c-b467-b5ee40c4a31b-metrics-certs") pod "network-metrics-daemon-pm48m" (UID: "672da34f-1e37-4e2c-b467-b5ee40c4a31b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 16:03:44 crc kubenswrapper[4874]: I0217 16:03:44.968022 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7xphw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54846556-797a-4e8d-ab51-aef5343b1fc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebb52d07a411f6900f9fef16186758fa5f9b44ed2c54944bf66220dd839b774d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7xphw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.006769 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7649be38-9f50-4cce-9d16-e7100627eca5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ff3c28ab17a456eb6c403402a76fcc427fe799f05577f1c66a286392e67763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53112886a189833decee24627e6fd183022c19f454a873f86a38fecdd4505f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a37784fd80aacfdc6328736f02030af697430a34939d05550652921ce4dbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://154bdfc094853c9edfdd65f09ef6767ce1dd6441ee7322b3dba35d115460373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://738bfe704918e39da4af889b57236556ee1f301bfa8327664b5d25601641d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:45Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.007141 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktn2z\" (UniqueName: \"kubernetes.io/projected/672da34f-1e37-4e2c-b467-b5ee40c4a31b-kube-api-access-ktn2z\") pod \"network-metrics-daemon-pm48m\" (UID: \"672da34f-1e37-4e2c-b467-b5ee40c4a31b\") " pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.024945 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:45Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.025183 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.025204 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.025215 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 
16:03:45.025230 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.025243 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:45Z","lastTransitionTime":"2026-02-17T16:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.045445 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9eb703281270deafd626e5313a045812c23d81fc8adcab91226e656611483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e922b26792a90616c81196984c75e5245d1458b951ad5b4cd10d2db99b526bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:45Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.063593 4874 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d87243-c32f-4eb1-9049-24409fc6ea39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8c5dd3b54804a06909a56d1b152a702e8adbc38f14f0042b488c9529fc4eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56172a01d8a118c7b8ed0bfcef586738d47583c8b6127b67e6f4aaedeba141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cccdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:45Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.079134 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:45Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.091193 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382ecb629c2e95661e2315080b5313ce9572468316149e358ac1410f73774047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:03:45Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.103622 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pm48m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672da34f-1e37-4e2c-b467-b5ee40c4a31b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktn2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktn2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pm48m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:45Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:45 crc 
kubenswrapper[4874]: I0217 16:03:45.127895 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.127944 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.127963 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.127985 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.128002 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:45Z","lastTransitionTime":"2026-02-17T16:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.231518 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.231575 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.231591 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.231617 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.231633 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:45Z","lastTransitionTime":"2026-02-17T16:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.335070 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.335189 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.335217 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.335253 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.335278 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:45Z","lastTransitionTime":"2026-02-17T16:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.411262 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 08:19:57.747115375 +0000 UTC Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.438451 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.438606 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.438632 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.438663 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.438687 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:45Z","lastTransitionTime":"2026-02-17T16:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.476502 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/672da34f-1e37-4e2c-b467-b5ee40c4a31b-metrics-certs\") pod \"network-metrics-daemon-pm48m\" (UID: \"672da34f-1e37-4e2c-b467-b5ee40c4a31b\") " pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:03:45 crc kubenswrapper[4874]: E0217 16:03:45.476676 4874 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 16:03:45 crc kubenswrapper[4874]: E0217 16:03:45.476766 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/672da34f-1e37-4e2c-b467-b5ee40c4a31b-metrics-certs podName:672da34f-1e37-4e2c-b467-b5ee40c4a31b nodeName:}" failed. No retries permitted until 2026-02-17 16:03:46.47674119 +0000 UTC m=+36.771129791 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/672da34f-1e37-4e2c-b467-b5ee40c4a31b-metrics-certs") pod "network-metrics-daemon-pm48m" (UID: "672da34f-1e37-4e2c-b467-b5ee40c4a31b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.543726 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.543808 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.543828 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.543850 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.543869 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:45Z","lastTransitionTime":"2026-02-17T16:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.646652 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.646713 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.646730 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.646756 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.646774 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:45Z","lastTransitionTime":"2026-02-17T16:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.750002 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.750163 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.750185 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.750207 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.750225 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:45Z","lastTransitionTime":"2026-02-17T16:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.853003 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.853060 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.853100 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.853126 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.853144 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:45Z","lastTransitionTime":"2026-02-17T16:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.955937 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.955998 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.956015 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.956041 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:45 crc kubenswrapper[4874]: I0217 16:03:45.956058 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:45Z","lastTransitionTime":"2026-02-17T16:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.058683 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.058738 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.058755 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.058778 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.058796 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:46Z","lastTransitionTime":"2026-02-17T16:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.161286 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.161415 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.161442 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.161476 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.161498 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:46Z","lastTransitionTime":"2026-02-17T16:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.183232 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.183311 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.183394 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:46 crc kubenswrapper[4874]: E0217 16:03:46.183492 4874 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 16:03:46 crc kubenswrapper[4874]: E0217 16:03:46.183494 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:04:02.183468461 +0000 UTC m=+52.477857052 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:03:46 crc kubenswrapper[4874]: E0217 16:03:46.183539 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 16:04:02.183528013 +0000 UTC m=+52.477916584 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 16:03:46 crc kubenswrapper[4874]: E0217 16:03:46.183621 4874 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 16:03:46 crc kubenswrapper[4874]: E0217 16:03:46.183750 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 16:04:02.183722488 +0000 UTC m=+52.478111049 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.264565 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.264628 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.264646 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.264674 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.264693 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:46Z","lastTransitionTime":"2026-02-17T16:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.284252 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.284348 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:03:46 crc kubenswrapper[4874]: E0217 16:03:46.284444 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 16:03:46 crc kubenswrapper[4874]: E0217 16:03:46.284477 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 16:03:46 crc kubenswrapper[4874]: E0217 16:03:46.284499 4874 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:46 crc kubenswrapper[4874]: E0217 16:03:46.284578 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 16:04:02.28455301 +0000 UTC m=+52.578941611 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:46 crc kubenswrapper[4874]: E0217 16:03:46.284706 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 16:03:46 crc kubenswrapper[4874]: E0217 16:03:46.284772 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 16:03:46 crc kubenswrapper[4874]: E0217 16:03:46.284795 4874 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:46 crc kubenswrapper[4874]: E0217 16:03:46.284934 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 16:04:02.284895929 +0000 UTC m=+52.579284630 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.367814 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.367882 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.367898 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.367923 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.367943 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:46Z","lastTransitionTime":"2026-02-17T16:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.411678 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 00:36:16.540557476 +0000 UTC Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.456445 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.456496 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:03:46 crc kubenswrapper[4874]: E0217 16:03:46.457060 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.456556 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.456514 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:46 crc kubenswrapper[4874]: E0217 16:03:46.457208 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:03:46 crc kubenswrapper[4874]: E0217 16:03:46.457402 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:03:46 crc kubenswrapper[4874]: E0217 16:03:46.457578 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.471558 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.471625 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.471644 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.471670 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.471690 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:46Z","lastTransitionTime":"2026-02-17T16:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.487243 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/672da34f-1e37-4e2c-b467-b5ee40c4a31b-metrics-certs\") pod \"network-metrics-daemon-pm48m\" (UID: \"672da34f-1e37-4e2c-b467-b5ee40c4a31b\") " pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:03:46 crc kubenswrapper[4874]: E0217 16:03:46.487605 4874 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 16:03:46 crc kubenswrapper[4874]: E0217 16:03:46.487783 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/672da34f-1e37-4e2c-b467-b5ee40c4a31b-metrics-certs podName:672da34f-1e37-4e2c-b467-b5ee40c4a31b nodeName:}" failed. No retries permitted until 2026-02-17 16:03:48.487721134 +0000 UTC m=+38.782109725 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/672da34f-1e37-4e2c-b467-b5ee40c4a31b-metrics-certs") pod "network-metrics-daemon-pm48m" (UID: "672da34f-1e37-4e2c-b467-b5ee40c4a31b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.575634 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.576106 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.576329 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.576540 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.576732 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:46Z","lastTransitionTime":"2026-02-17T16:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.679942 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.679992 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.680009 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.680033 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.680050 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:46Z","lastTransitionTime":"2026-02-17T16:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.783481 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.783532 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.783548 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.783571 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.783589 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:46Z","lastTransitionTime":"2026-02-17T16:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.845254 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.845307 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.845325 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.845350 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.845368 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:46Z","lastTransitionTime":"2026-02-17T16:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:46 crc kubenswrapper[4874]: E0217 16:03:46.868145 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6be8f3a4-e6e3-4cf0-93a0-9444be233e11\\\",\\\"systemUUID\\\":\\\"496eb863-febf-403f-bc40-ce30c0c4d225\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:46Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.874733 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.874785 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.874809 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.874834 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.874853 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:46Z","lastTransitionTime":"2026-02-17T16:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:46 crc kubenswrapper[4874]: E0217 16:03:46.896439 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6be8f3a4-e6e3-4cf0-93a0-9444be233e11\\\",\\\"systemUUID\\\":\\\"496eb863-febf-403f-bc40-ce30c0c4d225\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:46Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.901476 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.901523 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.901540 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.901562 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.901579 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:46Z","lastTransitionTime":"2026-02-17T16:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:46 crc kubenswrapper[4874]: E0217 16:03:46.922306 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6be8f3a4-e6e3-4cf0-93a0-9444be233e11\\\",\\\"systemUUID\\\":\\\"496eb863-febf-403f-bc40-ce30c0c4d225\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:46Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.930381 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.930447 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.930473 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.930517 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.930540 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:46Z","lastTransitionTime":"2026-02-17T16:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:46 crc kubenswrapper[4874]: E0217 16:03:46.957384 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6be8f3a4-e6e3-4cf0-93a0-9444be233e11\\\",\\\"systemUUID\\\":\\\"496eb863-febf-403f-bc40-ce30c0c4d225\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:46Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.962678 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.962734 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.962753 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.962776 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.962800 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:46Z","lastTransitionTime":"2026-02-17T16:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:46 crc kubenswrapper[4874]: E0217 16:03:46.986334 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6be8f3a4-e6e3-4cf0-93a0-9444be233e11\\\",\\\"systemUUID\\\":\\\"496eb863-febf-403f-bc40-ce30c0c4d225\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:46Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:46 crc kubenswrapper[4874]: E0217 16:03:46.986577 4874 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.989124 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.989187 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.989206 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.989232 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:46 crc kubenswrapper[4874]: I0217 16:03:46.989252 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:46Z","lastTransitionTime":"2026-02-17T16:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.092483 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.092543 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.092560 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.092583 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.092601 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:47Z","lastTransitionTime":"2026-02-17T16:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.195474 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.195546 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.195563 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.195588 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.195606 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:47Z","lastTransitionTime":"2026-02-17T16:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.299537 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.299606 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.299624 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.300020 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.300061 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:47Z","lastTransitionTime":"2026-02-17T16:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.403804 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.403902 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.403918 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.403941 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.403959 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:47Z","lastTransitionTime":"2026-02-17T16:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.411990 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 18:49:11.528534941 +0000 UTC Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.506789 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.506849 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.506866 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.506899 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.506921 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:47Z","lastTransitionTime":"2026-02-17T16:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.610502 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.610579 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.610598 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.610623 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.610644 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:47Z","lastTransitionTime":"2026-02-17T16:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.713324 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.713393 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.713411 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.713435 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.713456 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:47Z","lastTransitionTime":"2026-02-17T16:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.816623 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.816702 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.816741 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.816774 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.816795 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:47Z","lastTransitionTime":"2026-02-17T16:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.920016 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.920071 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.920112 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.920130 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:47 crc kubenswrapper[4874]: I0217 16:03:47.920142 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:47Z","lastTransitionTime":"2026-02-17T16:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.023922 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.023976 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.023990 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.024012 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.024028 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:48Z","lastTransitionTime":"2026-02-17T16:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.126982 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.127047 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.127065 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.127118 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.127139 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:48Z","lastTransitionTime":"2026-02-17T16:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.230835 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.230903 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.230921 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.230947 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.230967 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:48Z","lastTransitionTime":"2026-02-17T16:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.334864 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.334927 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.334945 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.334969 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.334987 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:48Z","lastTransitionTime":"2026-02-17T16:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.413128 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 03:42:40.172422718 +0000 UTC Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.438102 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.438158 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.438179 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.438204 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.438222 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:48Z","lastTransitionTime":"2026-02-17T16:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.456941 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.456988 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.457004 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:48 crc kubenswrapper[4874]: E0217 16:03:48.457203 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.457254 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:03:48 crc kubenswrapper[4874]: E0217 16:03:48.457412 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:03:48 crc kubenswrapper[4874]: E0217 16:03:48.457604 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:03:48 crc kubenswrapper[4874]: E0217 16:03:48.457889 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.509795 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/672da34f-1e37-4e2c-b467-b5ee40c4a31b-metrics-certs\") pod \"network-metrics-daemon-pm48m\" (UID: \"672da34f-1e37-4e2c-b467-b5ee40c4a31b\") " pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:03:48 crc kubenswrapper[4874]: E0217 16:03:48.510051 4874 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 16:03:48 crc kubenswrapper[4874]: E0217 16:03:48.510167 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/672da34f-1e37-4e2c-b467-b5ee40c4a31b-metrics-certs podName:672da34f-1e37-4e2c-b467-b5ee40c4a31b nodeName:}" failed. No retries permitted until 2026-02-17 16:03:52.510143686 +0000 UTC m=+42.804532287 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/672da34f-1e37-4e2c-b467-b5ee40c4a31b-metrics-certs") pod "network-metrics-daemon-pm48m" (UID: "672da34f-1e37-4e2c-b467-b5ee40c4a31b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.541153 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.541221 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.541241 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.541268 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.541287 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:48Z","lastTransitionTime":"2026-02-17T16:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.644870 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.644933 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.644950 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.644975 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.644996 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:48Z","lastTransitionTime":"2026-02-17T16:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.748120 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.748206 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.748231 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.748261 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.748286 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:48Z","lastTransitionTime":"2026-02-17T16:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.851599 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.851663 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.851690 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.851713 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.851732 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:48Z","lastTransitionTime":"2026-02-17T16:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.956846 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.956950 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.956975 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.957012 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:48 crc kubenswrapper[4874]: I0217 16:03:48.957048 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:48Z","lastTransitionTime":"2026-02-17T16:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.063527 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.063585 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.063598 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.063618 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.063631 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:49Z","lastTransitionTime":"2026-02-17T16:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.166643 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.167098 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.167316 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.167517 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.167724 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:49Z","lastTransitionTime":"2026-02-17T16:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.171983 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.172908 4874 scope.go:117] "RemoveContainer" containerID="0003e5b17ab3dcbee6a37d4685d1271400d8a09a01fd4ab431654fc597826eb2" Feb 17 16:03:49 crc kubenswrapper[4874]: E0217 16:03:49.173239 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-65qcw_openshift-ovn-kubernetes(10a4777a-2390-401b-86b0-87d298e9f883)\"" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" podUID="10a4777a-2390-401b-86b0-87d298e9f883" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.195726 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382ecb629c2e95661e2315080b5313ce9572468316149e358ac1410f73774047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:03:49Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.219835 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcec56b-03b2-401b-8a73-6d62f42ba22c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba16a1032820642fb3fc8c0a00654d5736d760d87ab6c9015d5649da3302cdca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hswwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:49Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.237831 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7xphw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54846556-797a-4e8d-ab51-aef5343b1fc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebb52d07a411f6900f9fef16186758fa5f9b44ed2c54944bf66220dd839b774d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7xphw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:49Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.267436 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7649be38-9f50-4cce-9d16-e7100627eca5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ff3c28ab17a456eb6c403402a76fcc427fe799f05577f1c66a286392e67763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53112886a189833decee24627e6fd183022c19f454a873f86a38fecdd4505f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a37784fd80aacfdc6328736f02030af697430a34939d05550652921ce4dbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://154bdfc094853c9edfdd65f09ef6767ce1dd6441ee7322b3dba35d115460373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://738bfe704918e39da4af889b57236556ee1f301bfa8327664b5d25601641d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:49Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.270386 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.270440 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.270457 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.270478 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.270495 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:49Z","lastTransitionTime":"2026-02-17T16:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.287640 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:49Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.308278 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9eb703281270deafd626e5313a045812c23d81fc8adcab91226e656611483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e922b26792a90616c81196984c75e5245d1458b951ad5b4cd10d2db99b526bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:49Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.331555 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d87243-c32f-4eb1-9049-24409fc6ea39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8c5dd3b54804a06909a56d1b152a702e8adbc38f14f0042b488c9529fc4eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56172a01d8a118c7b8ed0bfcef586738d47583
c8b6127b67e6f4aaedeba141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cccdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:49Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.368239 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:49Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.373037 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.373092 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.373104 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.373132 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.373145 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:49Z","lastTransitionTime":"2026-02-17T16:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.401926 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pm48m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672da34f-1e37-4e2c-b467-b5ee40c4a31b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktn2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktn2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pm48m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:49Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:49 crc 
kubenswrapper[4874]: I0217 16:03:49.414231 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 02:18:41.11189576 +0000 UTC Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.422773 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a4777a-2390-401b-86b0-87d298e9f883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0003e5b17ab3dcbee6a37d4685d1271400d8a09a01fd4ab431654fc597826eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0003e5b17ab3dcbee6a37d4685d1271400d8a09a01fd4ab431654fc597826eb2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:03:41Z\\\",\\\"message\\\":\\\"Family:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 16:03:41.827604 6302 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/packageserver-service]} name:Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.153:5443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 16:03:41.827679 6302 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared info\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-65qcw_openshift-ovn-kubernetes(10a4777a-2390-401b-86b0-87d298e9f883)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657
906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-65qcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:49Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.433066 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dr22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30e2d430-8c4b-4246-971e-6ba0ed8a0de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2f66644c63a0fa78e3c25736c83b6acb0d652342dc68bd9045ccc8f0b102e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kzt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a0a815a73e3091b5d95ae0c335f655f1bc9
2c59537ea72d499f28c84f175c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kzt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dr22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:49Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.447248 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:49Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.458209 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-j77hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17e6a08f-68c0-4b0a-a396-9dddcc726d37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ecf223725ef2b8a8bcb1d02065b33fbdbcd1bb76b10476594aac228e32321c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbrrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-j77hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:49Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.475566 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.475625 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.475642 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.475683 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.475702 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:49Z","lastTransitionTime":"2026-02-17T16:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.476153 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2vkxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aedd049-0029-44f7-869f-4a3ccdce8413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2vkxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:49Z 
is after 2025-08-24T17:21:41Z" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.491032 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd07de3-ac86-4e94-81f9-983586d43e3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6205655ec01ba9fb036b13027c9e66ae06974a7e66e08d81e67e68fefed03782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03cdd03d2129e6d94
c8faed3b95a896c4dfe25c6b6f7e61d1027ae973c341f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb9f42ee9f217e826bc9595c94485be1259956fab34c45407b8e977d0e516eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7033e40c0b3cd86bccbe24550c8aae3cd925ab3a7577be6d71921494dbb7093\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a5
78bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:49Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.512666 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56fa93-1e5d-4786-a935-dd3c1c945e91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:24Z\\\"
,\\\"message\\\":\\\"W0217 16:03:13.672722 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 16:03:13.673109 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771344193 cert, and key in /tmp/serving-cert-3777882028/serving-signer.crt, /tmp/serving-cert-3777882028/serving-signer.key\\\\nI0217 16:03:13.974653 1 observer_polling.go:159] Starting file observer\\\\nW0217 16:03:13.977062 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 16:03:13.977325 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:13.978495 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3777882028/tls.crt::/tmp/serving-cert-3777882028/tls.key\\\\\\\"\\\\nF0217 16:03:24.664050 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:49Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.533386 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1663fc7ac1e70a1b17c76b2a9f613520696b593126b2bb9cfdde5d68e431f511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:49Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.579229 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.579286 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.579302 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.579326 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.579341 4874 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:49Z","lastTransitionTime":"2026-02-17T16:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.682629 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.682691 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.682714 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.682746 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.682766 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:49Z","lastTransitionTime":"2026-02-17T16:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.785390 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.785449 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.785498 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.785521 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.785539 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:49Z","lastTransitionTime":"2026-02-17T16:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.888484 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.888569 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.888587 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.888613 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.888632 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:49Z","lastTransitionTime":"2026-02-17T16:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.992056 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.992151 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.992173 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.992208 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:49 crc kubenswrapper[4874]: I0217 16:03:49.992230 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:49Z","lastTransitionTime":"2026-02-17T16:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.094650 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.094700 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.094719 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.094742 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.094764 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:50Z","lastTransitionTime":"2026-02-17T16:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.197730 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.197773 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.197788 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.197807 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.197819 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:50Z","lastTransitionTime":"2026-02-17T16:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.301315 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.301369 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.301385 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.301410 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.301426 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:50Z","lastTransitionTime":"2026-02-17T16:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.404555 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.404623 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.404643 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.404671 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.404692 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:50Z","lastTransitionTime":"2026-02-17T16:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.414733 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 15:05:02.909786581 +0000 UTC Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.456432 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.456492 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:03:50 crc kubenswrapper[4874]: E0217 16:03:50.456627 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.456693 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.456863 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:50 crc kubenswrapper[4874]: E0217 16:03:50.456912 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:03:50 crc kubenswrapper[4874]: E0217 16:03:50.457055 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:03:50 crc kubenswrapper[4874]: E0217 16:03:50.457313 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.481578 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d87243-c32f-4eb1-9049-24409fc6ea39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8c5dd3b54804a06909a56d1b152a702e8adbc38f14f0042b488c9529fc4eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea
83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56172a01d8a118c7b8ed0bfcef586738d47583c8b6127b67e6f4aaedeba141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cccdg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:50Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.502493 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:50Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.510626 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.510689 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.510708 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.510737 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.510759 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:50Z","lastTransitionTime":"2026-02-17T16:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.521938 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382ecb629c2e95661e2315080b5313ce9572468316149e358ac1410f73774047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:50Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.545236 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcec56b-03b2-401b-8a73-6d62f42ba22c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba16a1032820642fb3fc8c0a00654d5736d760d87ab6c9015d5649da3302cdca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-
17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hswwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:50Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.562435 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7xphw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54846556-797a-4e8d-ab51-aef5343b1fc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebb52d07a411f6900f9fef16186758fa5f9b44ed2c54944bf66220dd839b774d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7xphw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:50Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.598353 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7649be38-9f50-4cce-9d16-e7100627eca5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ff3c28ab17a456eb6c403402a76fcc427fe799f05577f1c66a286392e67763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53112886a189833decee24627e6fd183022c19f454a873f86a38fecdd4505f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a37784fd80aacfdc6328736f02030af697430a34939d05550652921ce4dbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16
:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://154bdfc094853c9edfdd65f09ef6767ce1dd6441ee7322b3dba35d115460373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://738bfe704918e39da4af889b57236556ee1f301bfa8327664b5d25601641d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:50Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.613461 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.613512 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.613529 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.613554 4874 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.613571 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:50Z","lastTransitionTime":"2026-02-17T16:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.620416 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:50Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.641365 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9eb703281270deafd626e5313a045812c23d81fc8adcab91226e656611483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e922b26792a90616c81196984c75e5245d1458b951ad5b4cd10d2db99b526bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:50Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.659787 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pm48m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"672da34f-1e37-4e2c-b467-b5ee40c4a31b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktn2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktn2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pm48m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:50Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:50 crc 
kubenswrapper[4874]: I0217 16:03:50.694324 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a4777a-2390-401b-86b0-87d298e9f883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0003e5b17ab3dcbee6a37d4685d1271400d8a09a01fd4ab431654fc597826eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0003e5b17ab3dcbee6a37d4685d1271400d8a09a01fd4ab431654fc597826eb2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:03:41Z\\\",\\\"message\\\":\\\"Family:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 16:03:41.827604 6302 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/packageserver-service]} name:Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.153:5443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 16:03:41.827679 6302 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared info\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-65qcw_openshift-ovn-kubernetes(10a4777a-2390-401b-86b0-87d298e9f883)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657
906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-65qcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:50Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.710255 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dr22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30e2d430-8c4b-4246-971e-6ba0ed8a0de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2f66644c63a0fa78e3c25736c83b6acb0d652342dc68bd9045ccc8f0b102e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kzt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a0a815a73e3091b5d95ae0c335f655f1bc9
2c59537ea72d499f28c84f175c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kzt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dr22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:50Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.716564 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.716649 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.716673 4874 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.716711 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.716731 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:50Z","lastTransitionTime":"2026-02-17T16:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.724860 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:50Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.742806 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-j77hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17e6a08f-68c0-4b0a-a396-9dddcc726d37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ecf223725ef2b8a8bcb1d02065b33fbdbcd1bb76b10476594aac228e32321c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbrrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-j77hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:50Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.764982 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2vkxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aedd049-0029-44f7-869f-4a3ccdce8413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2vkxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:50Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.789042 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1663fc7ac1e70a1b17c76b2a9f613520696b593126b2bb9cfdde5d68e431f511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:50Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.811669 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd07de3-ac86-4e94-81f9-983586d43e3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6205655ec01ba9fb036b13027c9e66ae06974a7e66e08d81e67e68fefed03782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03cdd03d2129e6d94c8faed3b95a896c4dfe25c6b6f7e61d1027ae973c341f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb9f42ee9f217e826bc9595c94485be1259956fab34c45407b8e977d0e516eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7033e40c0b3cd86bccbe24550c8aae3cd925ab3a7577be6d71921494dbb7093\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:50Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.820435 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.820495 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.820513 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.820563 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.820582 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:50Z","lastTransitionTime":"2026-02-17T16:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.836786 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56fa93-1e5d-4786-a935-dd3c1c945e91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountP
ath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube
-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:24Z\\\",\\\"message\\\":\\\"W0217 16:03:13.672722 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 16:03:13.673109 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771344193 cert, and key in /tmp/serving-cert-3777882028/serving-signer.crt, /tmp/serving-cert-3777882028/serving-signer.key\\\\nI0217 16:03:13.974653 1 observer_polling.go:159] Starting file observer\\\\nW0217 16:03:13.977062 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 16:03:13.977325 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:13.978495 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3777882028/tls.crt::/tmp/serving-cert-3777882028/tls.key\\\\\\\"\\\\nF0217 16:03:24.664050 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:50Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.924855 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.924931 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.924951 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.924991 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:50 crc kubenswrapper[4874]: I0217 16:03:50.925012 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:50Z","lastTransitionTime":"2026-02-17T16:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.028248 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.028314 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.028337 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.028369 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.028387 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:51Z","lastTransitionTime":"2026-02-17T16:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.131546 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.131587 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.131607 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.131628 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.131644 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:51Z","lastTransitionTime":"2026-02-17T16:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.234912 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.234993 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.235011 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.235048 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.235070 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:51Z","lastTransitionTime":"2026-02-17T16:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.338658 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.338700 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.338710 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.338725 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.338736 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:51Z","lastTransitionTime":"2026-02-17T16:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.415185 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 02:54:54.625499774 +0000 UTC Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.441840 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.441908 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.441931 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.441959 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.441985 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:51Z","lastTransitionTime":"2026-02-17T16:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.545832 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.545914 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.545934 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.545967 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.545994 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:51Z","lastTransitionTime":"2026-02-17T16:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.649392 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.649449 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.649465 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.649489 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.649509 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:51Z","lastTransitionTime":"2026-02-17T16:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.753394 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.753451 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.753468 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.753496 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.753514 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:51Z","lastTransitionTime":"2026-02-17T16:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.856613 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.856886 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.856907 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.856941 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.856964 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:51Z","lastTransitionTime":"2026-02-17T16:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.960276 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.960347 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.960365 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.960392 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:51 crc kubenswrapper[4874]: I0217 16:03:51.960411 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:51Z","lastTransitionTime":"2026-02-17T16:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.063690 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.063741 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.063757 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.063778 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.063835 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:52Z","lastTransitionTime":"2026-02-17T16:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.167006 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.167063 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.167115 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.167146 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.167171 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:52Z","lastTransitionTime":"2026-02-17T16:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.270484 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.270558 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.270581 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.270612 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.270638 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:52Z","lastTransitionTime":"2026-02-17T16:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.374108 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.374178 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.374200 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.374226 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.374242 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:52Z","lastTransitionTime":"2026-02-17T16:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.415905 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 23:37:07.610810325 +0000 UTC Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.456618 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.456700 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.456700 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.456801 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:03:52 crc kubenswrapper[4874]: E0217 16:03:52.456792 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:03:52 crc kubenswrapper[4874]: E0217 16:03:52.456949 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:03:52 crc kubenswrapper[4874]: E0217 16:03:52.457062 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:03:52 crc kubenswrapper[4874]: E0217 16:03:52.457215 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.477756 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.477824 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.477843 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.477866 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.477886 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:52Z","lastTransitionTime":"2026-02-17T16:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.574708 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/672da34f-1e37-4e2c-b467-b5ee40c4a31b-metrics-certs\") pod \"network-metrics-daemon-pm48m\" (UID: \"672da34f-1e37-4e2c-b467-b5ee40c4a31b\") " pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:03:52 crc kubenswrapper[4874]: E0217 16:03:52.575011 4874 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 16:03:52 crc kubenswrapper[4874]: E0217 16:03:52.575194 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/672da34f-1e37-4e2c-b467-b5ee40c4a31b-metrics-certs podName:672da34f-1e37-4e2c-b467-b5ee40c4a31b nodeName:}" failed. No retries permitted until 2026-02-17 16:04:00.57515459 +0000 UTC m=+50.869543221 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/672da34f-1e37-4e2c-b467-b5ee40c4a31b-metrics-certs") pod "network-metrics-daemon-pm48m" (UID: "672da34f-1e37-4e2c-b467-b5ee40c4a31b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.585481 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.585540 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.585560 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.585585 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.585606 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:52Z","lastTransitionTime":"2026-02-17T16:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.693799 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.693865 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.693882 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.693907 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.693924 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:52Z","lastTransitionTime":"2026-02-17T16:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.797359 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.797460 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.797485 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.797518 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.797553 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:52Z","lastTransitionTime":"2026-02-17T16:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.901658 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.901734 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.901753 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.901785 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:52 crc kubenswrapper[4874]: I0217 16:03:52.901807 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:52Z","lastTransitionTime":"2026-02-17T16:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.004742 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.004825 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.004849 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.004881 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.004898 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:53Z","lastTransitionTime":"2026-02-17T16:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.107899 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.107991 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.108016 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.108050 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.108130 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:53Z","lastTransitionTime":"2026-02-17T16:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.211369 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.211430 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.211446 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.211472 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.211492 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:53Z","lastTransitionTime":"2026-02-17T16:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.314618 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.314691 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.314716 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.314749 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.314773 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:53Z","lastTransitionTime":"2026-02-17T16:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.416202 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 08:57:22.698919827 +0000 UTC Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.418259 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.418327 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.418350 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.418380 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.418403 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:53Z","lastTransitionTime":"2026-02-17T16:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.521730 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.521789 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.521812 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.521840 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.521875 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:53Z","lastTransitionTime":"2026-02-17T16:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.625145 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.625219 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.625243 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.625272 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.625295 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:53Z","lastTransitionTime":"2026-02-17T16:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.728022 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.728122 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.728152 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.728182 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.728204 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:53Z","lastTransitionTime":"2026-02-17T16:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.831225 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.831292 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.831317 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.831348 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.831372 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:53Z","lastTransitionTime":"2026-02-17T16:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.934443 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.934501 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.934519 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.934550 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:53 crc kubenswrapper[4874]: I0217 16:03:53.934573 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:53Z","lastTransitionTime":"2026-02-17T16:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.036848 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.036910 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.036927 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.036952 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.036970 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:54Z","lastTransitionTime":"2026-02-17T16:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.140100 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.140144 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.140154 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.140172 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.140183 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:54Z","lastTransitionTime":"2026-02-17T16:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.243723 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.243795 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.243813 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.243842 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.243863 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:54Z","lastTransitionTime":"2026-02-17T16:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.347152 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.347216 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.347230 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.347248 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.347258 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:54Z","lastTransitionTime":"2026-02-17T16:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.416412 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 15:58:43.549190896 +0000 UTC Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.449854 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.449924 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.449944 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.449969 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.449986 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:54Z","lastTransitionTime":"2026-02-17T16:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.457234 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.457297 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.457393 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:54 crc kubenswrapper[4874]: E0217 16:03:54.457557 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:03:54 crc kubenswrapper[4874]: E0217 16:03:54.457849 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:03:54 crc kubenswrapper[4874]: E0217 16:03:54.458003 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.458204 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:03:54 crc kubenswrapper[4874]: E0217 16:03:54.458354 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.554238 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.554308 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.554328 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.554386 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.554408 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:54Z","lastTransitionTime":"2026-02-17T16:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.658592 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.658679 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.658700 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.658741 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.658763 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:54Z","lastTransitionTime":"2026-02-17T16:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.762166 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.762236 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.762285 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.762314 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.762332 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:54Z","lastTransitionTime":"2026-02-17T16:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.867895 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.867974 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.867996 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.868021 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.868040 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:54Z","lastTransitionTime":"2026-02-17T16:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.971592 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.971658 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.971678 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.971711 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:54 crc kubenswrapper[4874]: I0217 16:03:54.971733 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:54Z","lastTransitionTime":"2026-02-17T16:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.075468 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.075557 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.075582 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.075610 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.075629 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:55Z","lastTransitionTime":"2026-02-17T16:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.178789 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.178832 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.178848 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.178874 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.178892 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:55Z","lastTransitionTime":"2026-02-17T16:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.283325 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.283405 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.283425 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.283450 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.283469 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:55Z","lastTransitionTime":"2026-02-17T16:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.387700 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.388167 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.388315 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.388475 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.388603 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:55Z","lastTransitionTime":"2026-02-17T16:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.417413 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 11:08:11.797504167 +0000 UTC Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.492051 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.492147 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.492157 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.492179 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.492193 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:55Z","lastTransitionTime":"2026-02-17T16:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.595187 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.595237 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.595247 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.595266 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.595281 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:55Z","lastTransitionTime":"2026-02-17T16:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.698206 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.698266 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.698279 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.698304 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.698318 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:55Z","lastTransitionTime":"2026-02-17T16:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.801125 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.801161 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.801170 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.801183 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.801192 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:55Z","lastTransitionTime":"2026-02-17T16:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.904781 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.904871 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.904896 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.904930 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:55 crc kubenswrapper[4874]: I0217 16:03:55.904954 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:55Z","lastTransitionTime":"2026-02-17T16:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.008442 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.008524 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.008549 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.008581 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.008602 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:56Z","lastTransitionTime":"2026-02-17T16:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.111823 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.111918 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.111942 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.111969 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.111987 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:56Z","lastTransitionTime":"2026-02-17T16:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.215535 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.215596 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.215613 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.215640 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.215661 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:56Z","lastTransitionTime":"2026-02-17T16:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.318528 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.318592 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.318611 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.318637 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.318653 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:56Z","lastTransitionTime":"2026-02-17T16:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.418348 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 09:25:03.507891318 +0000 UTC Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.421010 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.421058 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.421112 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.421147 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.421169 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:56Z","lastTransitionTime":"2026-02-17T16:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.456318 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.456363 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:56 crc kubenswrapper[4874]: E0217 16:03:56.456474 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.456486 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:03:56 crc kubenswrapper[4874]: E0217 16:03:56.456636 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:03:56 crc kubenswrapper[4874]: E0217 16:03:56.456795 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.457045 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:03:56 crc kubenswrapper[4874]: E0217 16:03:56.457233 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.524021 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.524119 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.524147 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.524175 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.524198 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:56Z","lastTransitionTime":"2026-02-17T16:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.627116 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.627174 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.627199 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.627225 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.627245 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:56Z","lastTransitionTime":"2026-02-17T16:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.730223 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.730287 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.730313 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.730342 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.730364 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:56Z","lastTransitionTime":"2026-02-17T16:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.833481 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.833558 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.833597 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.833625 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.833647 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:56Z","lastTransitionTime":"2026-02-17T16:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.935803 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.935856 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.935864 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.935880 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:56 crc kubenswrapper[4874]: I0217 16:03:56.935889 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:56Z","lastTransitionTime":"2026-02-17T16:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.038928 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.038996 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.039013 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.039041 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.039059 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:57Z","lastTransitionTime":"2026-02-17T16:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.142223 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.142304 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.142319 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.142344 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.142358 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:57Z","lastTransitionTime":"2026-02-17T16:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.245684 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.245731 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.245744 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.245771 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.245788 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:57Z","lastTransitionTime":"2026-02-17T16:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.320887 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.320990 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.321010 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.321040 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.321061 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:57Z","lastTransitionTime":"2026-02-17T16:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:57 crc kubenswrapper[4874]: E0217 16:03:57.345603 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6be8f3a4-e6e3-4cf0-93a0-9444be233e11\\\",\\\"systemUUID\\\":\\\"496eb863-febf-403f-bc40-ce30c0c4d225\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:57Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.355686 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.355737 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.355757 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.355779 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.355794 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:57Z","lastTransitionTime":"2026-02-17T16:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:57 crc kubenswrapper[4874]: E0217 16:03:57.377540 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6be8f3a4-e6e3-4cf0-93a0-9444be233e11\\\",\\\"systemUUID\\\":\\\"496eb863-febf-403f-bc40-ce30c0c4d225\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:57Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.382504 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.382554 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.382571 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.382590 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.382606 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:57Z","lastTransitionTime":"2026-02-17T16:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:57 crc kubenswrapper[4874]: E0217 16:03:57.405933 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6be8f3a4-e6e3-4cf0-93a0-9444be233e11\\\",\\\"systemUUID\\\":\\\"496eb863-febf-403f-bc40-ce30c0c4d225\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:57Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.411537 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.411592 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.411607 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.411631 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.411649 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:57Z","lastTransitionTime":"2026-02-17T16:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.419099 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 20:44:49.618655751 +0000 UTC Feb 17 16:03:57 crc kubenswrapper[4874]: E0217 16:03:57.425942 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6be8f3a4-e6e3-4cf0-93a0-9444be233e11\\\",
\\\"systemUUID\\\":\\\"496eb863-febf-403f-bc40-ce30c0c4d225\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:57Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.430546 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.430581 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.430593 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.430610 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.430624 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:57Z","lastTransitionTime":"2026-02-17T16:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:57 crc kubenswrapper[4874]: E0217 16:03:57.440439 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6be8f3a4-e6e3-4cf0-93a0-9444be233e11\\\",\\\"systemUUID\\\":\\\"496eb863-febf-403f-bc40-ce30c0c4d225\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:57Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:57 crc kubenswrapper[4874]: E0217 16:03:57.440557 4874 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.442139 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.442170 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.442179 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.442192 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.442204 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:57Z","lastTransitionTime":"2026-02-17T16:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.545315 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.545414 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.545444 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.545476 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.545497 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:57Z","lastTransitionTime":"2026-02-17T16:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.649280 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.649349 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.649369 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.649398 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.649418 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:57Z","lastTransitionTime":"2026-02-17T16:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.753515 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.753584 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.753602 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.753624 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.753642 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:57Z","lastTransitionTime":"2026-02-17T16:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.857545 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.857619 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.857640 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.857671 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.857692 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:57Z","lastTransitionTime":"2026-02-17T16:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.961067 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.961141 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.961152 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.961173 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:57 crc kubenswrapper[4874]: I0217 16:03:57.961191 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:57Z","lastTransitionTime":"2026-02-17T16:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.064468 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.064530 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.064546 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.064569 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.064587 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:58Z","lastTransitionTime":"2026-02-17T16:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.168516 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.168583 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.168612 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.168659 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.168687 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:58Z","lastTransitionTime":"2026-02-17T16:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.271984 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.272053 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.272071 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.272131 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.272150 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:58Z","lastTransitionTime":"2026-02-17T16:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.375746 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.375818 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.375882 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.375917 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.375939 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:58Z","lastTransitionTime":"2026-02-17T16:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.420188 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 12:21:29.24761187 +0000 UTC Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.457366 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.457445 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:03:58 crc kubenswrapper[4874]: E0217 16:03:58.457533 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.457582 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.457378 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:58 crc kubenswrapper[4874]: E0217 16:03:58.457732 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:03:58 crc kubenswrapper[4874]: E0217 16:03:58.458008 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:03:58 crc kubenswrapper[4874]: E0217 16:03:58.458152 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.478161 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.478221 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.478239 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.478261 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.478278 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:58Z","lastTransitionTime":"2026-02-17T16:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.581827 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.581882 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.581907 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.581935 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.581957 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:58Z","lastTransitionTime":"2026-02-17T16:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.684827 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.684876 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.684887 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.684915 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.684929 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:58Z","lastTransitionTime":"2026-02-17T16:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.787920 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.787972 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.787990 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.788047 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.788147 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:58Z","lastTransitionTime":"2026-02-17T16:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.891109 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.891188 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.891274 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.891308 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.891331 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:58Z","lastTransitionTime":"2026-02-17T16:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.993595 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.993657 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.993677 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.993700 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:58 crc kubenswrapper[4874]: I0217 16:03:58.993756 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:58Z","lastTransitionTime":"2026-02-17T16:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.097123 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.097193 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.097210 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.097234 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.097252 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:59Z","lastTransitionTime":"2026-02-17T16:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.200398 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.200455 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.200475 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.200499 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.200516 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:59Z","lastTransitionTime":"2026-02-17T16:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.302791 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.302856 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.302873 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.302899 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.302918 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:59Z","lastTransitionTime":"2026-02-17T16:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.406588 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.406650 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.406668 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.406693 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.406713 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:59Z","lastTransitionTime":"2026-02-17T16:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.421202 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 22:44:28.920115439 +0000 UTC Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.509340 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.509414 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.509433 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.509466 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.509487 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:59Z","lastTransitionTime":"2026-02-17T16:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.613715 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.613823 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.613853 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.613917 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.613954 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:59Z","lastTransitionTime":"2026-02-17T16:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.718451 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.718523 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.718540 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.718567 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.718585 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:59Z","lastTransitionTime":"2026-02-17T16:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.821547 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.821619 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.821641 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.821672 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.821694 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:59Z","lastTransitionTime":"2026-02-17T16:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.924606 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.924682 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.924704 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.924732 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:59 crc kubenswrapper[4874]: I0217 16:03:59.924749 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:59Z","lastTransitionTime":"2026-02-17T16:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.027033 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.027104 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.027118 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.027138 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.027151 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:00Z","lastTransitionTime":"2026-02-17T16:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.129952 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.130012 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.130023 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.130040 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.130051 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:00Z","lastTransitionTime":"2026-02-17T16:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.233418 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.233526 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.233543 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.233564 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.233576 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:00Z","lastTransitionTime":"2026-02-17T16:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.336711 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.336743 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.336752 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.336766 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.336776 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:00Z","lastTransitionTime":"2026-02-17T16:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.421770 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 17:53:06.632141612 +0000 UTC Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.440354 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.440468 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.440518 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.440547 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.440596 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:00Z","lastTransitionTime":"2026-02-17T16:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.456741 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.456770 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.456743 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:04:00 crc kubenswrapper[4874]: E0217 16:04:00.456952 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.456971 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:00 crc kubenswrapper[4874]: E0217 16:04:00.457133 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:00 crc kubenswrapper[4874]: E0217 16:04:00.457284 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:00 crc kubenswrapper[4874]: E0217 16:04:00.457368 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.476842 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd07de3-ac86-4e94-81f9-983586d43e3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6205655ec01ba9fb036b13027c9e66ae06974a7e66e08d81e67e68fefed03782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03cdd03d2129e6d94c8faed3b95a896c4dfe25c6b6f7e61d1027ae973c341f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb9f42ee9f217e826bc9595c94485be1259956fab34c45407b8e977d0e516eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7033e40c0b3cd86bccbe24550c8aae3cd925ab3a7577be6d71921494dbb7093\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:00Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.497303 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56fa93-1e5d-4786-a935-dd3c1c945e91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:24Z\\\"
,\\\"message\\\":\\\"W0217 16:03:13.672722 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 16:03:13.673109 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771344193 cert, and key in /tmp/serving-cert-3777882028/serving-signer.crt, /tmp/serving-cert-3777882028/serving-signer.key\\\\nI0217 16:03:13.974653 1 observer_polling.go:159] Starting file observer\\\\nW0217 16:03:13.977062 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 16:03:13.977325 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:13.978495 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3777882028/tls.crt::/tmp/serving-cert-3777882028/tls.key\\\\\\\"\\\\nF0217 16:03:24.664050 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:00Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.519752 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1663fc7ac1e70a1b17c76b2a9f613520696b593126b2bb9cfdde5d68e431f511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:00Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.534534 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382ecb629c2e95661e2315080b5313ce9572468316149e358ac1410f73774047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:04:00Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.543310 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.543349 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.543359 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.543373 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.543383 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:00Z","lastTransitionTime":"2026-02-17T16:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.555752 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcec56b-03b2-401b-8a73-6d62f42ba22c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba16a1032820642fb3fc8c0a00654d5736d760d87ab6c9015d5649da3302cdca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hswwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:00Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.567216 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7xphw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54846556-797a-4e8d-ab51-aef5343b1fc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebb52d07a411f6900f9fef16186758fa5f9b44ed2c54944bf66220dd839b774d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7xphw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:00Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.587621 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7649be38-9f50-4cce-9d16-e7100627eca5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ff3c28ab17a456eb6c403402a76fcc427fe799f05577f1c66a286392e67763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53112886a189833decee24627e6fd183022c19f454a873f86a38fecdd4505f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a37784fd80aacfdc6328736f02030af697430a34939d05550652921ce4dbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://154bdfc094853c9edfdd65f09ef6767ce1dd6441ee7322b3dba35d115460373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://738bfe704918e39da4af889b57236556ee1f301bfa8327664b5d25601641d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:00Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.604295 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:00Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.622474 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9eb703281270deafd626e5313a045812c23d81fc8adcab91226e656611483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e922b26792a90616c81196984c75e5245d1458b951ad5b4cd10d2db99b526bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:00Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.640720 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d87243-c32f-4eb1-9049-24409fc6ea39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8c5dd3b54804a06909a56d1b152a702e8adbc38f14f0042b488c9529fc4eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56172a01d8a118c7b8ed0bfcef586738d47583
c8b6127b67e6f4aaedeba141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cccdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:00Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.646516 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.646601 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.646626 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:00 crc 
kubenswrapper[4874]: I0217 16:04:00.646662 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.646696 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:00Z","lastTransitionTime":"2026-02-17T16:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.659879 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.665598 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:00Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:00 crc kubenswrapper[4874]: E0217 16:04:00.668562 4874 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 16:04:00 crc kubenswrapper[4874]: E0217 16:04:00.668692 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/672da34f-1e37-4e2c-b467-b5ee40c4a31b-metrics-certs podName:672da34f-1e37-4e2c-b467-b5ee40c4a31b nodeName:}" failed. 
No retries permitted until 2026-02-17 16:04:16.668661365 +0000 UTC m=+66.963049976 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/672da34f-1e37-4e2c-b467-b5ee40c4a31b-metrics-certs") pod "network-metrics-daemon-pm48m" (UID: "672da34f-1e37-4e2c-b467-b5ee40c4a31b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.668781 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/672da34f-1e37-4e2c-b467-b5ee40c4a31b-metrics-certs\") pod \"network-metrics-daemon-pm48m\" (UID: \"672da34f-1e37-4e2c-b467-b5ee40c4a31b\") " pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.676108 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.681828 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pm48m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672da34f-1e37-4e2c-b467-b5ee40c4a31b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktn2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktn2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pm48m\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:00Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.704153 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a4777a-2390-401b-86b0-87d298e9f883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0003e5b17ab3dcbee6a37d4685d1271400d8a09a01fd4ab431654fc597826eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0003e5b17ab3dcbee6a37d4685d1271400d8a09a01fd4ab431654fc597826eb2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:03:41Z\\\",\\\"message\\\":\\\"Family:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 16:03:41.827604 6302 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/packageserver-service]} name:Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.153:5443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 16:03:41.827679 6302 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared info\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-65qcw_openshift-ovn-kubernetes(10a4777a-2390-401b-86b0-87d298e9f883)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657
906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-65qcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:00Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.718338 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dr22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30e2d430-8c4b-4246-971e-6ba0ed8a0de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2f66644c63a0fa78e3c25736c83b6acb0d652342dc68bd9045ccc8f0b102e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kzt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a0a815a73e3091b5d95ae0c335f655f1bc9
2c59537ea72d499f28c84f175c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kzt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dr22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:00Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.734529 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:00Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.749456 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.749500 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.749519 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.749555 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.749574 4874 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:00Z","lastTransitionTime":"2026-02-17T16:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.750522 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-j77hc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17e6a08f-68c0-4b0a-a396-9dddcc726d37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ecf223725ef2b8a8bcb1d02065b33fbdbcd1bb76b10476594aac228e32321c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbrrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-j77hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:00Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.776000 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2vkxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aedd049-0029-44f7-869f-4a3ccdce8413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2vkxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:00Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.791838 4874 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:00Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.806526 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9eb703281270deafd626e5313a045812c23d81fc8adcab91226e656611483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e922b26792a90616c81196984c75e5245d1458b951ad5b4cd10d2db99b526bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:00Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.822753 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d87243-c32f-4eb1-9049-24409fc6ea39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8c5dd3b54804a06909a56d1b152a702e8adbc38f14f0042b488c9529fc4eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56172a01d8a118c7b8ed0bfcef586738d47583
c8b6127b67e6f4aaedeba141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cccdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:00Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.842850 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:00Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.851790 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.851851 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.851873 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.851898 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.851917 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:00Z","lastTransitionTime":"2026-02-17T16:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.859519 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382ecb629c2e95661e2315080b5313ce9572468316149e358ac1410f73774047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:00Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.879721 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcec56b-03b2-401b-8a73-6d62f42ba22c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba16a1032820642fb3fc8c0a00654d5736d760d87ab6c9015d5649da3302cdca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-
17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hswwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:00Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.894762 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7xphw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54846556-797a-4e8d-ab51-aef5343b1fc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebb52d07a411f6900f9fef16186758fa5f9b44ed2c54944bf66220dd839b774d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7xphw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:00Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.921462 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7649be38-9f50-4cce-9d16-e7100627eca5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ff3c28ab17a456eb6c403402a76fcc427fe799f05577f1c66a286392e67763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53112886a189833decee24627e6fd183022c19f454a873f86a38fecdd4505f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a37784fd80aacfdc6328736f02030af697430a34939d05550652921ce4dbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16
:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://154bdfc094853c9edfdd65f09ef6767ce1dd6441ee7322b3dba35d115460373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://738bfe704918e39da4af889b57236556ee1f301bfa8327664b5d25601641d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:00Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.935626 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pm48m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"672da34f-1e37-4e2c-b467-b5ee40c4a31b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktn2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktn2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pm48m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:00Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:00 crc 
kubenswrapper[4874]: I0217 16:04:00.954895 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.954954 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.954974 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.955015 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.955035 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:00Z","lastTransitionTime":"2026-02-17T16:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.961803 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a4777a-2390-401b-86b0-87d298e9f883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0003e5b17ab3dcbee6a37d4685d1271400d8a09a01fd4ab431654fc597826eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0003e5b17ab3dcbee6a37d4685d1271400d8a09a01fd4ab431654fc597826eb2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:03:41Z\\\",\\\"message\\\":\\\"Family:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 16:03:41.827604 6302 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/packageserver-service]} name:Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.153:5443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 16:03:41.827679 6302 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared info\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-65qcw_openshift-ovn-kubernetes(10a4777a-2390-401b-86b0-87d298e9f883)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657
906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-65qcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:00Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.980022 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dr22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30e2d430-8c4b-4246-971e-6ba0ed8a0de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2f66644c63a0fa78e3c25736c83b6acb0d652342dc68bd9045ccc8f0b102e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kzt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a0a815a73e3091b5d95ae0c335f655f1bc9
2c59537ea72d499f28c84f175c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kzt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dr22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:00Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:00 crc kubenswrapper[4874]: I0217 16:04:00.997483 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-j77hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17e6a08f-68c0-4b0a-a396-9dddcc726d37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ecf223725ef2b8a8bcb1d02065b33fbdbcd1bb76b10476594aac228e32321c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbrrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-j77hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:00Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.016831 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2vkxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aedd049-0029-44f7-869f-4a3ccdce8413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2vkxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:01Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.033926 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:01Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.050398 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c66ff6e-e110-4154-8fdd-075e5c8c56a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2422e5203204daa5fcfbaa85ff563cacb69c9dd0a0dae355132cbe0fefe12a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e042242c2aeb7ad75c644052a8fd541837d1238dbd8919ffd9c9176d4d9deb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8c1ee2182853f3ba1a4fe6c34a8e6c301a004f857b7f95e0882679502f8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef2698fe37dccdc4449581e1b96c78b39e9d839d84cc7df3def8c0595ace1e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ef2698fe37dccdc4449581e1b96c78b39e9d839d84cc7df3def8c0595ace1e6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:01Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.057928 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.057989 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.058010 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.058037 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.058056 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:01Z","lastTransitionTime":"2026-02-17T16:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.069052 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56fa93-1e5d-4786-a935-dd3c1c945e91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:24Z\\\",\\\"message\\\":\\\"W0217 16:03:13.672722 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 16:03:13.673109 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771344193 cert, and key in /tmp/serving-cert-3777882028/serving-signer.crt, /tmp/serving-cert-3777882028/serving-signer.key\\\\nI0217 16:03:13.974653 1 observer_polling.go:159] Starting file observer\\\\nW0217 16:03:13.977062 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 16:03:13.977325 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:13.978495 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3777882028/tls.crt::/tmp/serving-cert-3777882028/tls.key\\\\\\\"\\\\nF0217 16:03:24.664050 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:01Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.094654 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1663fc7ac1e70a1b17c76b2a9f613520696b593126b2bb9cfdde5d68e431f511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:01Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.111689 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd07de3-ac86-4e94-81f9-983586d43e3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6205655ec01ba9fb036b13027c9e66ae06974a7e66e08d81e67e68fefed03782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03cdd03d2129e6d94c8faed3b95a896c4dfe25c6b6f7e61d1027ae973c341f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb9f42ee9f217e826bc9595c94485be1259956fab34c45407b8e977d0e516eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7033e40c0b3cd86bccbe24550c8aae3cd925ab3a7577be6d71921494dbb7093\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:01Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.161241 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.161311 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.161358 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.161384 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.161402 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:01Z","lastTransitionTime":"2026-02-17T16:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.264956 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.265002 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.265014 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.265030 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.265040 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:01Z","lastTransitionTime":"2026-02-17T16:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.367432 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.367479 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.367494 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.367516 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.367532 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:01Z","lastTransitionTime":"2026-02-17T16:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.422866 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 09:49:13.420097476 +0000 UTC Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.470513 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.470582 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.470598 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.470614 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.470626 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:01Z","lastTransitionTime":"2026-02-17T16:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.573171 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.573226 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.573245 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.573270 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.573288 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:01Z","lastTransitionTime":"2026-02-17T16:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.675926 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.675985 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.675997 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.676015 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.676027 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:01Z","lastTransitionTime":"2026-02-17T16:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.779106 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.779146 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.779158 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.779174 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.779186 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:01Z","lastTransitionTime":"2026-02-17T16:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.882035 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.882128 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.882151 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.882182 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.882203 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:01Z","lastTransitionTime":"2026-02-17T16:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.984608 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.984688 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.984712 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.984743 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:01 crc kubenswrapper[4874]: I0217 16:04:01.984767 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:01Z","lastTransitionTime":"2026-02-17T16:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.087463 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.087519 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.087536 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.087559 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.087579 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:02Z","lastTransitionTime":"2026-02-17T16:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.190059 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.190138 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.190159 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.190216 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.190236 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:02Z","lastTransitionTime":"2026-02-17T16:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.284407 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.284575 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:02 crc kubenswrapper[4874]: E0217 16:04:02.284764 4874 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 16:04:02 crc kubenswrapper[4874]: E0217 16:04:02.285168 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:04:34.284593064 +0000 UTC m=+84.578981665 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.285253 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:02 crc kubenswrapper[4874]: E0217 16:04:02.285274 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 16:04:34.285243971 +0000 UTC m=+84.579632572 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.285324 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:02 crc kubenswrapper[4874]: E0217 16:04:02.285426 4874 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 16:04:02 crc kubenswrapper[4874]: E0217 16:04:02.285479 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 16:04:34.285465067 +0000 UTC m=+84.579853638 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 16:04:02 crc kubenswrapper[4874]: E0217 16:04:02.285484 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 16:04:02 crc kubenswrapper[4874]: E0217 16:04:02.285513 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 16:04:02 crc kubenswrapper[4874]: E0217 16:04:02.285532 4874 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:04:02 crc kubenswrapper[4874]: E0217 16:04:02.285584 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 16:04:34.28556861 +0000 UTC m=+84.579957221 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.293555 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.293594 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.293608 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.293627 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.293642 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:02Z","lastTransitionTime":"2026-02-17T16:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.385983 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:02 crc kubenswrapper[4874]: E0217 16:04:02.386195 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 16:04:02 crc kubenswrapper[4874]: E0217 16:04:02.386215 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 16:04:02 crc kubenswrapper[4874]: E0217 16:04:02.386227 4874 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:04:02 crc kubenswrapper[4874]: E0217 16:04:02.386268 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 16:04:34.386255088 +0000 UTC m=+84.680643659 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.395508 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.395538 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.395549 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.395565 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.395579 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:02Z","lastTransitionTime":"2026-02-17T16:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.423592 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 11:22:43.935733232 +0000 UTC Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.457354 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.457466 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.457461 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.457682 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:02 crc kubenswrapper[4874]: E0217 16:04:02.457662 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:02 crc kubenswrapper[4874]: E0217 16:04:02.457859 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:02 crc kubenswrapper[4874]: E0217 16:04:02.457973 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:04:02 crc kubenswrapper[4874]: E0217 16:04:02.458186 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.460382 4874 scope.go:117] "RemoveContainer" containerID="0003e5b17ab3dcbee6a37d4685d1271400d8a09a01fd4ab431654fc597826eb2" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.499229 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.499449 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.499465 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.499487 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.499502 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:02Z","lastTransitionTime":"2026-02-17T16:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.601596 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.601659 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.601677 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.601703 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.601722 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:02Z","lastTransitionTime":"2026-02-17T16:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.704550 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.704588 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.704597 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.704614 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.704624 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:02Z","lastTransitionTime":"2026-02-17T16:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.808580 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.808623 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.808776 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.808805 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.808934 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:02Z","lastTransitionTime":"2026-02-17T16:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.828783 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-65qcw_10a4777a-2390-401b-86b0-87d298e9f883/ovnkube-controller/1.log" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.832840 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" event={"ID":"10a4777a-2390-401b-86b0-87d298e9f883","Type":"ContainerStarted","Data":"e31ebf1c54fc6c7de8c5d16a7c068d4f2ab0bde28ae20bef38f727972e140c05"} Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.837223 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.846768 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.857545 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-j77hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17e6a08f-68c0-4b0a-a396-9dddcc726d37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ecf223725ef2b8a8bcb1d02065b33fbdbcd1bb76b10476594aac228e32321c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbrrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-j77hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.869190 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2vkxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aedd049-0029-44f7-869f-4a3ccdce8413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2vkxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.880308 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd07de3-ac86-4e94-81f9-983586d43e3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6205655ec01ba9fb036b13027c9e66ae06974a7e66e08d81e67e68fefed03782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-polic
y-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03cdd03d2129e6d94c8faed3b95a896c4dfe25c6b6f7e61d1027ae973c341f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb9f42ee9f217e826bc9595c94485be1259956fab34c45407b8e977d0e516eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7033e40c0b3cd86bccbe24550c8aae3cd925ab3a7577be6d71921494dbb7093\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.895228 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c66ff6e-e110-4154-8fdd-075e5c8c56a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2422e5203204daa5fcfbaa85ff563cacb69c9dd0a0dae355132cbe0fefe12a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e042242c2aeb7ad75c644052a8fd541837d1238dbd8919ffd9c9176d4d9deb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8c1ee2182853f3ba1a4fe6c34a8e6c301a004f857b7f95e0882679502f8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef2698fe37dccdc4449581e1b96c78b39e9d839d84cc7df3def8c0595ace1e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ef2698fe37dccdc4449581e1b96c78b39e9d839d84cc7df3def8c0595ace1e6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.907190 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56fa93-1e5d-4786-a935-dd3c1c945e91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:24Z\\\"
,\\\"message\\\":\\\"W0217 16:03:13.672722 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 16:03:13.673109 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771344193 cert, and key in /tmp/serving-cert-3777882028/serving-signer.crt, /tmp/serving-cert-3777882028/serving-signer.key\\\\nI0217 16:03:13.974653 1 observer_polling.go:159] Starting file observer\\\\nW0217 16:03:13.977062 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 16:03:13.977325 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:13.978495 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3777882028/tls.crt::/tmp/serving-cert-3777882028/tls.key\\\\\\\"\\\\nF0217 16:03:24.664050 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.911935 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.911976 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.911995 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.912020 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.912037 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:02Z","lastTransitionTime":"2026-02-17T16:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.922688 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1663fc7ac1e70a1b17c76b2a9f613520696b593126b2bb9cfdde5d68e431f511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.940385 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7649be38-9f50-4cce-9d16-e7100627eca5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ff3c28ab17a456eb6c403402a76fcc427fe799f05577f1c66a286392e67763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53112886a189833decee24627e6fd183022c19f454a873f86a38fecdd4505f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a37784fd80aacfdc6328736f02030af697430a34939d05550652921ce4dbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/st
atic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://154bdfc094853c9edfdd65f09ef6767ce1dd6441ee7322b3dba35d115460373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://738bfe704918e39da4af889b57236556ee1f301bfa8327664b5d25601641d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.953526 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.966853 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9eb703281270deafd626e5313a045812c23d81fc8adcab91226e656611483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e922b26792a90616c81196984c75e5245d1458b951ad5b4cd10d2db99b526bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.980334 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d87243-c32f-4eb1-9049-24409fc6ea39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8c5dd3b54804a06909a56d1b152a702e8adbc38f14f0042b488c9529fc4eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56172a01d8a118c7b8ed0bfcef586738d47583
c8b6127b67e6f4aaedeba141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cccdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4874]: I0217 16:04:02.990613 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:03 crc kubenswrapper[4874]: I0217 16:04:03.003919 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382ecb629c2e95661e2315080b5313ce9572468316149e358ac1410f73774047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:04:03Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:03 crc kubenswrapper[4874]: I0217 16:04:03.014785 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:03 crc kubenswrapper[4874]: I0217 16:04:03.014876 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:03 crc kubenswrapper[4874]: I0217 16:04:03.014902 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:03 crc kubenswrapper[4874]: I0217 16:04:03.014937 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:03 crc kubenswrapper[4874]: I0217 16:04:03.014965 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:03Z","lastTransitionTime":"2026-02-17T16:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:03 crc kubenswrapper[4874]: I0217 16:04:03.023675 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcec56b-03b2-401b-8a73-6d62f42ba22c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba16a1032820642fb3fc8c0a00654d5736d760d87ab6c9015d5649da3302cdca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hswwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:03Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:03 crc kubenswrapper[4874]: I0217 16:04:03.035349 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7xphw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54846556-797a-4e8d-ab51-aef5343b1fc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebb52d07a411f6900f9fef16186758fa5f9b44ed2c54944bf66220dd839b774d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7xphw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:03Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:03 crc kubenswrapper[4874]: I0217 16:04:03.051653 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pm48m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"672da34f-1e37-4e2c-b467-b5ee40c4a31b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktn2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktn2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pm48m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:03Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:03 crc 
kubenswrapper[4874]: I0217 16:04:03.072964 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a4777a-2390-401b-86b0-87d298e9f883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ebf1c54fc6c7de8c5d16a7c068d4f2ab0bde28ae20bef38f727972e140c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0003e5b17ab3dcbee6a37d4685d1271400d8a09a01fd4ab431654fc597826eb2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:03:41Z\\\",\\\"message\\\":\\\"Family:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 16:03:41.827604 6302 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/packageserver-service]} name:Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.153:5443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 16:03:41.827679 6302 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared 
info\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\
\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-65qcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:03Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:03 crc kubenswrapper[4874]: I0217 16:04:03.087848 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dr22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30e2d430-8c4b-4246-971e-6ba0ed8a0de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2f66644c63a0fa78e3c25736c83b6acb0d652342dc68bd9045ccc8f0b102e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kzt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a0a815a73e3091b5d95ae0c335f655f1bc9
2c59537ea72d499f28c84f175c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kzt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dr22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:03Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:03 crc kubenswrapper[4874]: I0217 16:04:03.117398 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:03 crc kubenswrapper[4874]: I0217 16:04:03.117447 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:03 crc kubenswrapper[4874]: I0217 16:04:03.117456 4874 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:03 crc kubenswrapper[4874]: I0217 16:04:03.117473 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:03 crc kubenswrapper[4874]: I0217 16:04:03.117485 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:03Z","lastTransitionTime":"2026-02-17T16:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:03 crc kubenswrapper[4874]: I0217 16:04:03.219795 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:03 crc kubenswrapper[4874]: I0217 16:04:03.219839 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:03 crc kubenswrapper[4874]: I0217 16:04:03.219852 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:03 crc kubenswrapper[4874]: I0217 16:04:03.219869 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:03 crc kubenswrapper[4874]: I0217 16:04:03.219885 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:03Z","lastTransitionTime":"2026-02-17T16:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:03 crc kubenswrapper[4874]: I0217 16:04:03.322487 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:03 crc kubenswrapper[4874]: I0217 16:04:03.322527 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:03 crc kubenswrapper[4874]: I0217 16:04:03.322539 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:03 crc kubenswrapper[4874]: I0217 16:04:03.322553 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:03 crc kubenswrapper[4874]: I0217 16:04:03.322564 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:03Z","lastTransitionTime":"2026-02-17T16:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:03 crc kubenswrapper[4874]: I0217 16:04:03.958298 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:03 crc kubenswrapper[4874]: I0217 16:04:03.958346 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:03 crc kubenswrapper[4874]: E0217 16:04:03.958532 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:03 crc kubenswrapper[4874]: I0217 16:04:03.958600 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:03 crc kubenswrapper[4874]: I0217 16:04:03.958714 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:04:03 crc kubenswrapper[4874]: E0217 16:04:03.958802 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:03 crc kubenswrapper[4874]: E0217 16:04:03.958902 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:03 crc kubenswrapper[4874]: E0217 16:04:03.959049 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:04:03 crc kubenswrapper[4874]: I0217 16:04:03.959156 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 00:13:18.718273294 +0000 UTC Feb 17 16:04:03 crc kubenswrapper[4874]: I0217 16:04:03.961493 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:03 crc kubenswrapper[4874]: I0217 16:04:03.961519 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:03 crc kubenswrapper[4874]: I0217 16:04:03.961528 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:03 crc kubenswrapper[4874]: I0217 16:04:03.961541 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:03 crc kubenswrapper[4874]: I0217 16:04:03.961552 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:03Z","lastTransitionTime":"2026-02-17T16:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.063899 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.063938 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.063949 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.063963 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.063976 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:04Z","lastTransitionTime":"2026-02-17T16:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.167347 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.167405 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.167424 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.167447 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.167465 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:04Z","lastTransitionTime":"2026-02-17T16:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.270297 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.270369 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.270390 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.270422 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.270443 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:04Z","lastTransitionTime":"2026-02-17T16:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.374311 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.374383 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.374408 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.374441 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.374466 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:04Z","lastTransitionTime":"2026-02-17T16:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.477823 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.477897 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.477918 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.477941 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.477968 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:04Z","lastTransitionTime":"2026-02-17T16:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.582146 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.582185 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.582196 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.582214 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.582225 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:04Z","lastTransitionTime":"2026-02-17T16:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.685037 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.685091 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.685102 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.685150 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.685161 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:04Z","lastTransitionTime":"2026-02-17T16:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.787684 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.787752 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.787763 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.787780 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.787792 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:04Z","lastTransitionTime":"2026-02-17T16:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.889872 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.889934 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.889951 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.889974 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.889991 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:04Z","lastTransitionTime":"2026-02-17T16:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.959846 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 02:15:37.599197688 +0000 UTC Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.967660 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-65qcw_10a4777a-2390-401b-86b0-87d298e9f883/ovnkube-controller/2.log" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.968587 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-65qcw_10a4777a-2390-401b-86b0-87d298e9f883/ovnkube-controller/1.log" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.971603 4874 generic.go:334] "Generic (PLEG): container finished" podID="10a4777a-2390-401b-86b0-87d298e9f883" containerID="e31ebf1c54fc6c7de8c5d16a7c068d4f2ab0bde28ae20bef38f727972e140c05" exitCode=1 Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.971641 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" event={"ID":"10a4777a-2390-401b-86b0-87d298e9f883","Type":"ContainerDied","Data":"e31ebf1c54fc6c7de8c5d16a7c068d4f2ab0bde28ae20bef38f727972e140c05"} Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.971715 4874 scope.go:117] "RemoveContainer" containerID="0003e5b17ab3dcbee6a37d4685d1271400d8a09a01fd4ab431654fc597826eb2" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.972645 4874 scope.go:117] "RemoveContainer" containerID="e31ebf1c54fc6c7de8c5d16a7c068d4f2ab0bde28ae20bef38f727972e140c05" Feb 17 16:04:04 crc kubenswrapper[4874]: E0217 16:04:04.972914 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-65qcw_openshift-ovn-kubernetes(10a4777a-2390-401b-86b0-87d298e9f883)\"" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" podUID="10a4777a-2390-401b-86b0-87d298e9f883" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.993242 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.993318 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.993343 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.993372 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:04 crc kubenswrapper[4874]: I0217 16:04:04.993394 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:04Z","lastTransitionTime":"2026-02-17T16:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.005008 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a4777a-2390-401b-86b0-87d298e9f883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ebf1c54fc6c7de8c5d16a7c068d4f2ab0bde28ae20bef38f727972e140c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0003e5b17ab3dcbee6a37d4685d1271400d8a09a01fd4ab431654fc597826eb2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:03:41Z\\\",\\\"message\\\":\\\"Family:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 16:03:41.827604 6302 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/packageserver-service]} name:Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.153:5443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 16:03:41.827679 6302 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared info\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31ebf1c54fc6c7de8c5d16a7c068d4f2ab0bde28ae20bef38f727972e140c05\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:04Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200766 6567 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200845 6567 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 16:04:04.200872 6567 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 16:04:04.200887 6567 
reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200914 6567 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 16:04:04.200931 6567 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 16:04:04.201178 6567 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200787 6567 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200898 6567 factory.go:656] Stopping watch factory\\\\nI0217 16:04:04.201400 6567 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.201753 6567 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"}
,{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPa
th\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-65qcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:05Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.021476 4874 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dr22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30e2d430-8c4b-4246-971e-6ba0ed8a0de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2f66644c63a0fa78e3c25736c83b6acb0d652342dc68bd9045ccc8f0b102e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-5kzt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a0a815a73e3091b5d95ae0c335f655f1bc92c59537ea72d499f28c84f175c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kzt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dr22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:05Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.041368 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:05Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.058157 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-j77hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17e6a08f-68c0-4b0a-a396-9dddcc726d37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ecf223725ef2b8a8bcb1d02065b33fbdbcd1bb76b10476594aac228e32321c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbrrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-j77hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:05Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.079969 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2vkxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aedd049-0029-44f7-869f-4a3ccdce8413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2vkxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:05Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.096989 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.097135 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.097168 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.097247 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.097279 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:05Z","lastTransitionTime":"2026-02-17T16:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.103823 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd07de3-ac86-4e94-81f9-983586d43e3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6205655ec01ba9fb036b13027c9e66ae06974a7e66e08d81e67e68fefed03782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03cdd03d212
9e6d94c8faed3b95a896c4dfe25c6b6f7e61d1027ae973c341f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb9f42ee9f217e826bc9595c94485be1259956fab34c45407b8e977d0e516eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7033e40c0b3cd86bccbe24550c8aae3cd925ab3a7577be6d71921494dbb7093\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:05Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.119803 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c66ff6e-e110-4154-8fdd-075e5c8c56a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2422e5203204daa5fcfbaa85ff563cacb69c9dd0a0dae355132cbe0fefe12a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e042242c2aeb7ad75c644052a8fd541837d1238dbd8919ffd9c9176d4d9deb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8c1ee2182853f3ba1a4fe6c34a8e6c301a004f857b7f95e0882679502f8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef2698fe37dccdc4449581e1b96c78b39e9d839d84cc7df3def8c0595ace1e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ef2698fe37dccdc4449581e1b96c78b39e9d839d84cc7df3def8c0595ace1e6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:05Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.143048 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56fa93-1e5d-4786-a935-dd3c1c945e91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:24Z\\\"
,\\\"message\\\":\\\"W0217 16:03:13.672722 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 16:03:13.673109 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771344193 cert, and key in /tmp/serving-cert-3777882028/serving-signer.crt, /tmp/serving-cert-3777882028/serving-signer.key\\\\nI0217 16:03:13.974653 1 observer_polling.go:159] Starting file observer\\\\nW0217 16:03:13.977062 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 16:03:13.977325 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:13.978495 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3777882028/tls.crt::/tmp/serving-cert-3777882028/tls.key\\\\\\\"\\\\nF0217 16:03:24.664050 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:05Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.165244 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1663fc7ac1e70a1b17c76b2a9f613520696b593126b2bb9cfdde5d68e431f511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:05Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.197533 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7649be38-9f50-4cce-9d16-e7100627eca5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ff3c28ab17a456eb6c403402a76fcc427fe799f05577f1c66a286392e67763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53112886a189833decee24627e6fd183022c19f454a873f86a38fecdd4505f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a37784fd80aacfdc6328736f02030af697430a34939d05550652921ce4dbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://154bdfc094853c9edfdd65f09ef6767ce1dd6441ee7322b3dba35d115460373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://738bfe704918e39da4af889b57236556ee1f301bfa8327664b5d25601641d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:05Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.199932 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.199993 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.200015 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.200127 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.200207 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:05Z","lastTransitionTime":"2026-02-17T16:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.215581 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:05Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.228880 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9eb703281270deafd626e5313a045812c23d81fc8adcab91226e656611483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e922b26792a90616c81196984c75e5245d1458b951ad5b4cd10d2db99b526bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:05Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.244066 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d87243-c32f-4eb1-9049-24409fc6ea39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8c5dd3b54804a06909a56d1b152a702e8adbc38f14f0042b488c9529fc4eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56172a01d8a118c7b8ed0bfcef586738d47583
c8b6127b67e6f4aaedeba141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cccdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:05Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.262513 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:05Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.276986 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382ecb629c2e95661e2315080b5313ce9572468316149e358ac1410f73774047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:04:05Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.294680 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcec56b-03b2-401b-8a73-6d62f42ba22c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba16a1032820642fb3fc8c0a00654d5736d760d87ab6c9015d5649da3302cdca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hswwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:05Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.303683 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.303732 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.303748 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.303771 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.303787 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:05Z","lastTransitionTime":"2026-02-17T16:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.309623 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7xphw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54846556-797a-4e8d-ab51-aef5343b1fc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebb52d07a411f6900f9fef16186758fa5f9b44ed2c54944bf66220dd839b774d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7xphw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:05Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.324270 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pm48m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672da34f-1e37-4e2c-b467-b5ee40c4a31b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktn2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktn2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pm48m\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:05Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.406166 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.406216 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.406228 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.406246 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.406261 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:05Z","lastTransitionTime":"2026-02-17T16:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.457203 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.457230 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.457230 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.457270 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:05 crc kubenswrapper[4874]: E0217 16:04:05.457436 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:05 crc kubenswrapper[4874]: E0217 16:04:05.457550 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:05 crc kubenswrapper[4874]: E0217 16:04:05.457768 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:04:05 crc kubenswrapper[4874]: E0217 16:04:05.457800 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.509023 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.509116 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.509197 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.509230 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.509254 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:05Z","lastTransitionTime":"2026-02-17T16:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.612056 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.612176 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.612199 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.612228 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.612251 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:05Z","lastTransitionTime":"2026-02-17T16:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.715339 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.715407 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.715435 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.715464 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.715487 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:05Z","lastTransitionTime":"2026-02-17T16:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.819397 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.819499 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.819527 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.819562 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.819598 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:05Z","lastTransitionTime":"2026-02-17T16:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.922940 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.923018 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.923035 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.923057 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.923108 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:05Z","lastTransitionTime":"2026-02-17T16:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.960448 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 11:22:48.522807311 +0000 UTC Feb 17 16:04:05 crc kubenswrapper[4874]: I0217 16:04:05.977912 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-65qcw_10a4777a-2390-401b-86b0-87d298e9f883/ovnkube-controller/2.log" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.025804 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.025871 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.025890 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.025916 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.025933 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:06Z","lastTransitionTime":"2026-02-17T16:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.129311 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.129370 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.129388 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.129410 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.129427 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:06Z","lastTransitionTime":"2026-02-17T16:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.232456 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.232524 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.232548 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.232580 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.232609 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:06Z","lastTransitionTime":"2026-02-17T16:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.335827 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.335877 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.335891 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.335912 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.335924 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:06Z","lastTransitionTime":"2026-02-17T16:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.438848 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.438886 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.438898 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.438916 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.438929 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:06Z","lastTransitionTime":"2026-02-17T16:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.541386 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.541451 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.541469 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.541493 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.541523 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:06Z","lastTransitionTime":"2026-02-17T16:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.644433 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.644468 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.644490 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.644505 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.644517 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:06Z","lastTransitionTime":"2026-02-17T16:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.747286 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.747354 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.747375 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.747401 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.747419 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:06Z","lastTransitionTime":"2026-02-17T16:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.850494 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.850556 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.850580 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.850605 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.850624 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:06Z","lastTransitionTime":"2026-02-17T16:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.953895 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.953936 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.953949 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.953969 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.953982 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:06Z","lastTransitionTime":"2026-02-17T16:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:06 crc kubenswrapper[4874]: I0217 16:04:06.960622 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 20:17:56.656909948 +0000 UTC Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.057122 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.057218 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.057246 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.057280 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.057307 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:07Z","lastTransitionTime":"2026-02-17T16:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.160667 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.160723 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.160738 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.160761 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.160780 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:07Z","lastTransitionTime":"2026-02-17T16:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.263339 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.263387 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.263398 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.263447 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.263461 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:07Z","lastTransitionTime":"2026-02-17T16:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.366411 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.366472 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.366491 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.366516 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.366534 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:07Z","lastTransitionTime":"2026-02-17T16:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.456897 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.456944 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.456965 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.457007 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:04:07 crc kubenswrapper[4874]: E0217 16:04:07.457150 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:07 crc kubenswrapper[4874]: E0217 16:04:07.457357 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:07 crc kubenswrapper[4874]: E0217 16:04:07.457450 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:07 crc kubenswrapper[4874]: E0217 16:04:07.457556 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.469818 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.469879 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.469897 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.469921 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.469940 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:07Z","lastTransitionTime":"2026-02-17T16:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.572815 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.572873 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.572893 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.572915 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.572931 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:07Z","lastTransitionTime":"2026-02-17T16:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.627825 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.627876 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.627892 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.627915 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.627936 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:07Z","lastTransitionTime":"2026-02-17T16:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:07 crc kubenswrapper[4874]: E0217 16:04:07.646229 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6be8f3a4-e6e3-4cf0-93a0-9444be233e11\\\",\\\"systemUUID\\\":\\\"496eb863-febf-403f-bc40-ce30c0c4d225\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:07Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.651163 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.651219 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.651240 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.651262 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.651279 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:07Z","lastTransitionTime":"2026-02-17T16:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:07 crc kubenswrapper[4874]: E0217 16:04:07.672527 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6be8f3a4-e6e3-4cf0-93a0-9444be233e11\\\",\\\"systemUUID\\\":\\\"496eb863-febf-403f-bc40-ce30c0c4d225\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:07Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.677073 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.677145 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.677164 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.677187 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.677203 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:07Z","lastTransitionTime":"2026-02-17T16:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:07 crc kubenswrapper[4874]: E0217 16:04:07.696899 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6be8f3a4-e6e3-4cf0-93a0-9444be233e11\\\",\\\"systemUUID\\\":\\\"496eb863-febf-403f-bc40-ce30c0c4d225\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:07Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.710106 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.710162 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.710175 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.710192 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.710205 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:07Z","lastTransitionTime":"2026-02-17T16:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:07 crc kubenswrapper[4874]: E0217 16:04:07.730212 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6be8f3a4-e6e3-4cf0-93a0-9444be233e11\\\",\\\"systemUUID\\\":\\\"496eb863-febf-403f-bc40-ce30c0c4d225\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:07Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.735170 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.735325 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.735355 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.735385 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.735404 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:07Z","lastTransitionTime":"2026-02-17T16:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:07 crc kubenswrapper[4874]: E0217 16:04:07.768110 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6be8f3a4-e6e3-4cf0-93a0-9444be233e11\\\",\\\"systemUUID\\\":\\\"496eb863-febf-403f-bc40-ce30c0c4d225\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:07Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:07 crc kubenswrapper[4874]: E0217 16:04:07.768521 4874 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.777550 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.777617 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.777644 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.777676 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.777699 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:07Z","lastTransitionTime":"2026-02-17T16:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.880891 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.880944 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.880961 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.880985 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.881005 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:07Z","lastTransitionTime":"2026-02-17T16:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.961596 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 04:55:14.555749408 +0000 UTC Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.984158 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.984218 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.984236 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.984261 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:07 crc kubenswrapper[4874]: I0217 16:04:07.984309 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:07Z","lastTransitionTime":"2026-02-17T16:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.087185 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.087260 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.087277 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.087304 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.087327 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:08Z","lastTransitionTime":"2026-02-17T16:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.190619 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.190665 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.190676 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.190696 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.190707 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:08Z","lastTransitionTime":"2026-02-17T16:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.293693 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.293753 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.293764 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.293786 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.293805 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:08Z","lastTransitionTime":"2026-02-17T16:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.395908 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.395953 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.395963 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.395981 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.395994 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:08Z","lastTransitionTime":"2026-02-17T16:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.499297 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.499371 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.499402 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.499435 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.499459 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:08Z","lastTransitionTime":"2026-02-17T16:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.602213 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.602270 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.602281 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.602305 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.602319 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:08Z","lastTransitionTime":"2026-02-17T16:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.704478 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.704518 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.704528 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.704542 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.704552 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:08Z","lastTransitionTime":"2026-02-17T16:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.807249 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.807310 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.807328 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.807353 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.807370 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:08Z","lastTransitionTime":"2026-02-17T16:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.910390 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.910442 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.910458 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.910480 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.910523 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:08Z","lastTransitionTime":"2026-02-17T16:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:08 crc kubenswrapper[4874]: I0217 16:04:08.962000 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 21:23:18.456566361 +0000 UTC Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.013012 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.013090 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.013101 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.013122 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.013134 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:09Z","lastTransitionTime":"2026-02-17T16:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.116034 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.116107 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.116120 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.116140 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.116157 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:09Z","lastTransitionTime":"2026-02-17T16:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.219405 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.219455 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.219470 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.219493 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.219511 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:09Z","lastTransitionTime":"2026-02-17T16:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.322156 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.322201 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.322215 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.322236 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.322252 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:09Z","lastTransitionTime":"2026-02-17T16:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.425319 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.425401 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.425485 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.425521 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.425545 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:09Z","lastTransitionTime":"2026-02-17T16:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.456980 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.457037 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.456980 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.457143 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:09 crc kubenswrapper[4874]: E0217 16:04:09.457278 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:09 crc kubenswrapper[4874]: E0217 16:04:09.457412 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:09 crc kubenswrapper[4874]: E0217 16:04:09.457667 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:04:09 crc kubenswrapper[4874]: E0217 16:04:09.457791 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.528502 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.528564 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.528585 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.528610 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.528625 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:09Z","lastTransitionTime":"2026-02-17T16:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.632453 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.632506 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.632523 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.632545 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.632558 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:09Z","lastTransitionTime":"2026-02-17T16:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.735344 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.735397 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.735415 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.735440 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.735459 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:09Z","lastTransitionTime":"2026-02-17T16:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.839228 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.839286 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.839303 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.839326 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.839344 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:09Z","lastTransitionTime":"2026-02-17T16:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.943159 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.943258 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.943279 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.943304 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.943322 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:09Z","lastTransitionTime":"2026-02-17T16:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:09 crc kubenswrapper[4874]: I0217 16:04:09.962941 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 03:58:05.715717527 +0000 UTC Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.046695 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.046779 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.046798 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.046821 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.046838 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:10Z","lastTransitionTime":"2026-02-17T16:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.150454 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.150517 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.150535 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.150558 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.150576 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:10Z","lastTransitionTime":"2026-02-17T16:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.253888 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.253964 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.253988 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.254059 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.254131 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:10Z","lastTransitionTime":"2026-02-17T16:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.358609 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.358686 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.358708 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.358737 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.358760 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:10Z","lastTransitionTime":"2026-02-17T16:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.471221 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.471272 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.471300 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.471318 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.471333 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:10Z","lastTransitionTime":"2026-02-17T16:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.487363 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd07de3-ac86-4e94-81f9-983586d43e3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6205655ec01ba9fb036b13027c9e66ae06974a7e66e08d81e67e68fefed03782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03cdd03d212
9e6d94c8faed3b95a896c4dfe25c6b6f7e61d1027ae973c341f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb9f42ee9f217e826bc9595c94485be1259956fab34c45407b8e977d0e516eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7033e40c0b3cd86bccbe24550c8aae3cd925ab3a7577be6d71921494dbb7093\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:10Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.503777 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c66ff6e-e110-4154-8fdd-075e5c8c56a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2422e5203204daa5fcfbaa85ff563cacb69c9dd0a0dae355132cbe0fefe12a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e042242c2aeb7ad75c644052a8fd541837d1238dbd8919ffd9c9176d4d9deb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8c1ee2182853f3ba1a4fe6c34a8e6c301a004f857b7f95e0882679502f8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef2698fe37dccdc4449581e1b96c78b39e9d839d84cc7df3def8c0595ace1e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ef2698fe37dccdc4449581e1b96c78b39e9d839d84cc7df3def8c0595ace1e6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:10Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.525431 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56fa93-1e5d-4786-a935-dd3c1c945e91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:24Z\\\"
,\\\"message\\\":\\\"W0217 16:03:13.672722 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 16:03:13.673109 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771344193 cert, and key in /tmp/serving-cert-3777882028/serving-signer.crt, /tmp/serving-cert-3777882028/serving-signer.key\\\\nI0217 16:03:13.974653 1 observer_polling.go:159] Starting file observer\\\\nW0217 16:03:13.977062 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 16:03:13.977325 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:13.978495 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3777882028/tls.crt::/tmp/serving-cert-3777882028/tls.key\\\\\\\"\\\\nF0217 16:03:24.664050 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:10Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.539890 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1663fc7ac1e70a1b17c76b2a9f613520696b593126b2bb9cfdde5d68e431f511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:10Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.556939 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcec56b-03b2-401b-8a73-6d62f42ba22c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba16a1032820642fb3fc8c0a00654d5736d760d87ab6c9015d5649da3302cdca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24958
91b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hswwv\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:10Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.568262 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7xphw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54846556-797a-4e8d-ab51-aef5343b1fc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebb52d07a411f6900f9fef16186758fa5f9b44ed2c54944bf66220dd839b774d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7xphw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:10Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.573579 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.573644 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.573663 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.573688 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.573708 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:10Z","lastTransitionTime":"2026-02-17T16:04:10Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.590702 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7649be38-9f50-4cce-9d16-e7100627eca5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ff3c28ab17a456eb6c403402a76fcc427fe799f05577f1c66a286392e67763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53112886a189833decee24627e6fd183022c19f454a873f86a38fecdd4505f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a37784fd80aacfdc6328736f02030af697430a34939d05550652921ce4dbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://154bdfc094853c9edfdd65f09ef6767ce1dd6441ee7322b3dba35d115460373b\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://738bfe704918e39da4af889b57236556ee1f301bfa8327664b5d25601641d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f
2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:10Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.604976 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:10Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.616595 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9eb703281270deafd626e5313a045812c23d81fc8adcab91226e656611483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e922b26792a90616c81196984c75e5245d1458b951ad5b4cd10d2db99b526bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:10Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.630033 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d87243-c32f-4eb1-9049-24409fc6ea39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8c5dd3b54804a06909a56d1b152a702e8adbc38f14f0042b488c9529fc4eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56172a01d8a118c7b8ed0bfcef586738d47583
c8b6127b67e6f4aaedeba141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cccdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:10Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.645451 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:10Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.656265 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382ecb629c2e95661e2315080b5313ce9572468316149e358ac1410f73774047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:04:10Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.666771 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pm48m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672da34f-1e37-4e2c-b467-b5ee40c4a31b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktn2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktn2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pm48m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:10Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:10 crc 
kubenswrapper[4874]: I0217 16:04:10.676355 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.676383 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.676391 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.676403 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.676412 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:10Z","lastTransitionTime":"2026-02-17T16:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.686335 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a4777a-2390-401b-86b0-87d298e9f883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ebf1c54fc6c7de8c5d16a7c068d4f2ab0bde28ae20bef38f727972e140c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0003e5b17ab3dcbee6a37d4685d1271400d8a09a01fd4ab431654fc597826eb2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:03:41Z\\\",\\\"message\\\":\\\"Family:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 16:03:41.827604 6302 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/packageserver-service]} name:Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.153:5443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 16:03:41.827679 6302 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared info\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31ebf1c54fc6c7de8c5d16a7c068d4f2ab0bde28ae20bef38f727972e140c05\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:04Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200766 6567 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200845 6567 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 16:04:04.200872 6567 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 16:04:04.200887 6567 
reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200914 6567 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 16:04:04.200931 6567 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 16:04:04.201178 6567 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200787 6567 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200898 6567 factory.go:656] Stopping watch factory\\\\nI0217 16:04:04.201400 6567 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.201753 6567 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"}
,{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPa
th\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-65qcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:10Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.698374 4874 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dr22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30e2d430-8c4b-4246-971e-6ba0ed8a0de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2f66644c63a0fa78e3c25736c83b6acb0d652342dc68bd9045ccc8f0b102e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-5kzt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a0a815a73e3091b5d95ae0c335f655f1bc92c59537ea72d499f28c84f175c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kzt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dr22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:10Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.710754 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:10Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.720884 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-j77hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17e6a08f-68c0-4b0a-a396-9dddcc726d37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ecf223725ef2b8a8bcb1d02065b33fbdbcd1bb76b10476594aac228e32321c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbrrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-j77hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:10Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.734780 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2vkxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aedd049-0029-44f7-869f-4a3ccdce8413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2vkxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:10Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.779322 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.779371 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.779383 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.779400 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.779414 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:10Z","lastTransitionTime":"2026-02-17T16:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.881782 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.881905 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.881938 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.882025 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.882058 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:10Z","lastTransitionTime":"2026-02-17T16:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.963595 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 17:04:55.442722632 +0000 UTC Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.985286 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.985370 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.985389 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.985413 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:10 crc kubenswrapper[4874]: I0217 16:04:10.985432 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:10Z","lastTransitionTime":"2026-02-17T16:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.088502 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.088576 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.088598 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.088625 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.088646 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:11Z","lastTransitionTime":"2026-02-17T16:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.192205 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.192269 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.192286 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.192321 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.192339 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:11Z","lastTransitionTime":"2026-02-17T16:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.295191 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.295253 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.295271 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.295295 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.295314 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:11Z","lastTransitionTime":"2026-02-17T16:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.398358 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.398437 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.398461 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.398488 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.398513 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:11Z","lastTransitionTime":"2026-02-17T16:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.456867 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.456901 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:11 crc kubenswrapper[4874]: E0217 16:04:11.457278 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.457715 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:04:11 crc kubenswrapper[4874]: E0217 16:04:11.457914 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:11 crc kubenswrapper[4874]: E0217 16:04:11.458208 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.458253 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:11 crc kubenswrapper[4874]: E0217 16:04:11.458405 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.502602 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.502687 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.502715 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.502750 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.502775 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:11Z","lastTransitionTime":"2026-02-17T16:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.605482 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.605547 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.605566 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.605593 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.605622 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:11Z","lastTransitionTime":"2026-02-17T16:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.709367 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.709489 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.709515 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.709542 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.709560 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:11Z","lastTransitionTime":"2026-02-17T16:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.812182 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.812563 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.812576 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.812594 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.812606 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:11Z","lastTransitionTime":"2026-02-17T16:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.915930 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.916022 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.916046 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.916070 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.916142 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:11Z","lastTransitionTime":"2026-02-17T16:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:11 crc kubenswrapper[4874]: I0217 16:04:11.964007 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 21:11:02.446067691 +0000 UTC Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.018982 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.019042 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.019060 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.019163 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.019196 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:12Z","lastTransitionTime":"2026-02-17T16:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.126322 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.126360 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.126369 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.126385 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.126395 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:12Z","lastTransitionTime":"2026-02-17T16:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.230406 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.230451 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.230468 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.230490 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.230507 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:12Z","lastTransitionTime":"2026-02-17T16:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.333764 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.333839 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.333857 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.333883 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.333903 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:12Z","lastTransitionTime":"2026-02-17T16:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.436818 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.436877 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.436926 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.437179 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.437198 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:12Z","lastTransitionTime":"2026-02-17T16:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.540475 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.540535 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.540558 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.540587 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.540607 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:12Z","lastTransitionTime":"2026-02-17T16:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.643229 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.643282 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.643299 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.643322 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.643339 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:12Z","lastTransitionTime":"2026-02-17T16:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.745964 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.746020 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.746031 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.746048 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.746059 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:12Z","lastTransitionTime":"2026-02-17T16:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.849470 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.849548 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.849571 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.849605 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.849627 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:12Z","lastTransitionTime":"2026-02-17T16:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.953603 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.953659 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.953676 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.953704 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.953722 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:12Z","lastTransitionTime":"2026-02-17T16:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:12 crc kubenswrapper[4874]: I0217 16:04:12.964952 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 23:40:28.289421326 +0000 UTC
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.056813 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.056878 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.056902 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.056937 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.056961 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:13Z","lastTransitionTime":"2026-02-17T16:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.160347 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.160409 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.160428 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.160452 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.160469 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:13Z","lastTransitionTime":"2026-02-17T16:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.262802 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.262859 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.262877 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.262899 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.262919 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:13Z","lastTransitionTime":"2026-02-17T16:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.366134 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.366228 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.366249 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.366274 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.366291 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:13Z","lastTransitionTime":"2026-02-17T16:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.456600 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.456639 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.456631 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.456615 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 16:04:13 crc kubenswrapper[4874]: E0217 16:04:13.456783 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 16:04:13 crc kubenswrapper[4874]: E0217 16:04:13.456880 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 16:04:13 crc kubenswrapper[4874]: E0217 16:04:13.456972 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 16:04:13 crc kubenswrapper[4874]: E0217 16:04:13.457053 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.469506 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.469583 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.469606 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.469636 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.469659 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:13Z","lastTransitionTime":"2026-02-17T16:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.580545 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.580628 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.580653 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.580682 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.580702 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:13Z","lastTransitionTime":"2026-02-17T16:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.683576 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.683658 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.683683 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.683709 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.683727 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:13Z","lastTransitionTime":"2026-02-17T16:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.786464 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.786527 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.786546 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.786571 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.786594 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:13Z","lastTransitionTime":"2026-02-17T16:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.889514 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.889619 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.889638 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.889700 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.889720 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:13Z","lastTransitionTime":"2026-02-17T16:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.965834 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 23:25:58.939719813 +0000 UTC
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.993779 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.993834 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.993855 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.993888 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:13 crc kubenswrapper[4874]: I0217 16:04:13.993912 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:13Z","lastTransitionTime":"2026-02-17T16:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.097230 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.097287 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.097300 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.097330 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.097345 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:14Z","lastTransitionTime":"2026-02-17T16:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.200070 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.200153 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.200170 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.200192 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.200209 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:14Z","lastTransitionTime":"2026-02-17T16:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.303492 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.303568 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.303594 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.303625 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.303645 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:14Z","lastTransitionTime":"2026-02-17T16:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.407221 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.407287 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.407307 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.407332 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.407351 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:14Z","lastTransitionTime":"2026-02-17T16:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.510326 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.510415 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.510439 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.510478 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.510503 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:14Z","lastTransitionTime":"2026-02-17T16:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.613052 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.613215 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.613471 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.613505 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.613521 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:14Z","lastTransitionTime":"2026-02-17T16:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.716560 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.716629 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.716651 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.716675 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.716701 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:14Z","lastTransitionTime":"2026-02-17T16:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.823916 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.824015 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.824046 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.824154 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.824178 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:14Z","lastTransitionTime":"2026-02-17T16:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.928759 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.928803 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.928812 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.928828 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.928836 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:14Z","lastTransitionTime":"2026-02-17T16:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:14 crc kubenswrapper[4874]: I0217 16:04:14.966463 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 20:00:03.915095118 +0000 UTC
Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.030914 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.030958 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.030966 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.030981 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.030990 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:15Z","lastTransitionTime":"2026-02-17T16:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.134427 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.134519 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.134539 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.134574 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.134601 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:15Z","lastTransitionTime":"2026-02-17T16:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.237961 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.238020 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.238035 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.238065 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.238109 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:15Z","lastTransitionTime":"2026-02-17T16:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.341501 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.341561 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.341578 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.341604 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.341622 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:15Z","lastTransitionTime":"2026-02-17T16:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.444306 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.444358 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.444371 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.444388 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.444400 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:15Z","lastTransitionTime":"2026-02-17T16:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.456941 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.457004 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.457013 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m"
Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.457128 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 16:04:15 crc kubenswrapper[4874]: E0217 16:04:15.457283 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 16:04:15 crc kubenswrapper[4874]: E0217 16:04:15.457403 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 16:04:15 crc kubenswrapper[4874]: E0217 16:04:15.457619 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b"
Feb 17 16:04:15 crc kubenswrapper[4874]: E0217 16:04:15.457767 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.547264 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.547419 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.547441 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.547497 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.547516 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:15Z","lastTransitionTime":"2026-02-17T16:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.650916 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.650983 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.651001 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.651028 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.651048 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:15Z","lastTransitionTime":"2026-02-17T16:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.754612 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.754674 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.754691 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.754715 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.754734 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:15Z","lastTransitionTime":"2026-02-17T16:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.857176 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.857240 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.857258 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.857282 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.857301 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:15Z","lastTransitionTime":"2026-02-17T16:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.960099 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.960147 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.960160 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.960179 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.960192 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:15Z","lastTransitionTime":"2026-02-17T16:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:15 crc kubenswrapper[4874]: I0217 16:04:15.967334 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 02:13:57.49481345 +0000 UTC Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.063155 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.063219 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.063237 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.063260 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.063277 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:16Z","lastTransitionTime":"2026-02-17T16:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.166237 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.166281 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.166290 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.166305 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.166315 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:16Z","lastTransitionTime":"2026-02-17T16:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.268557 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.268626 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.268651 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.268683 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.268705 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:16Z","lastTransitionTime":"2026-02-17T16:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.371007 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.371054 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.371070 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.371112 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.371128 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:16Z","lastTransitionTime":"2026-02-17T16:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.472486 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.472523 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.472535 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.472549 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.472559 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:16Z","lastTransitionTime":"2026-02-17T16:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.575601 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.575666 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.575692 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.575720 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.575740 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:16Z","lastTransitionTime":"2026-02-17T16:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.678334 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.678381 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.678398 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.678420 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.678439 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:16Z","lastTransitionTime":"2026-02-17T16:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.748618 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/672da34f-1e37-4e2c-b467-b5ee40c4a31b-metrics-certs\") pod \"network-metrics-daemon-pm48m\" (UID: \"672da34f-1e37-4e2c-b467-b5ee40c4a31b\") " pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:04:16 crc kubenswrapper[4874]: E0217 16:04:16.748919 4874 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 16:04:16 crc kubenswrapper[4874]: E0217 16:04:16.749072 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/672da34f-1e37-4e2c-b467-b5ee40c4a31b-metrics-certs podName:672da34f-1e37-4e2c-b467-b5ee40c4a31b nodeName:}" failed. No retries permitted until 2026-02-17 16:04:48.749016663 +0000 UTC m=+99.043405264 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/672da34f-1e37-4e2c-b467-b5ee40c4a31b-metrics-certs") pod "network-metrics-daemon-pm48m" (UID: "672da34f-1e37-4e2c-b467-b5ee40c4a31b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.782685 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.782766 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.782783 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.782807 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.782827 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:16Z","lastTransitionTime":"2026-02-17T16:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.885511 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.885566 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.885576 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.885598 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.885609 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:16Z","lastTransitionTime":"2026-02-17T16:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.967748 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 23:08:26.893763281 +0000 UTC Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.988039 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.988146 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.988174 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.988208 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:16 crc kubenswrapper[4874]: I0217 16:04:16.988233 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:16Z","lastTransitionTime":"2026-02-17T16:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.091096 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.091142 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.091150 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.091164 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.091174 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:17Z","lastTransitionTime":"2026-02-17T16:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.194158 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.194212 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.194224 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.194239 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.194256 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:17Z","lastTransitionTime":"2026-02-17T16:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.297694 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.297749 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.297766 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.297790 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.297809 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:17Z","lastTransitionTime":"2026-02-17T16:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.401373 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.401418 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.401434 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.401455 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.401473 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:17Z","lastTransitionTime":"2026-02-17T16:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.457369 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.457487 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:04:17 crc kubenswrapper[4874]: E0217 16:04:17.457587 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.457776 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.457807 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:17 crc kubenswrapper[4874]: E0217 16:04:17.457951 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:17 crc kubenswrapper[4874]: E0217 16:04:17.458160 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:04:17 crc kubenswrapper[4874]: E0217 16:04:17.458237 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.458833 4874 scope.go:117] "RemoveContainer" containerID="e31ebf1c54fc6c7de8c5d16a7c068d4f2ab0bde28ae20bef38f727972e140c05" Feb 17 16:04:17 crc kubenswrapper[4874]: E0217 16:04:17.459122 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-65qcw_openshift-ovn-kubernetes(10a4777a-2390-401b-86b0-87d298e9f883)\"" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" podUID="10a4777a-2390-401b-86b0-87d298e9f883" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.474905 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:17Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.490213 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-j77hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17e6a08f-68c0-4b0a-a396-9dddcc726d37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ecf223725ef2b8a8bcb1d02065b33fbdbcd1bb76b10476594aac228e32321c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbrrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-j77hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:17Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.504127 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.504187 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.504205 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.504227 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.504244 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:17Z","lastTransitionTime":"2026-02-17T16:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.511326 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2vkxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aedd049-0029-44f7-869f-4a3ccdce8413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2vkxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:17Z 
is after 2025-08-24T17:21:41Z" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.531505 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd07de3-ac86-4e94-81f9-983586d43e3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6205655ec01ba9fb036b13027c9e66ae06974a7e66e08d81e67e68fefed03782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03cdd03d2129e6d94
c8faed3b95a896c4dfe25c6b6f7e61d1027ae973c341f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb9f42ee9f217e826bc9595c94485be1259956fab34c45407b8e977d0e516eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7033e40c0b3cd86bccbe24550c8aae3cd925ab3a7577be6d71921494dbb7093\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a5
78bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:17Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.548527 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c66ff6e-e110-4154-8fdd-075e5c8c56a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2422e5203204daa5fcfbaa85ff563cacb69c9dd0a0dae355132cbe0fefe12a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e042242c2aeb7ad75c644052a8fd541837d1238dbd8919ffd9c9176d4d9deb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8c1ee2182853f3ba1a4fe6c34a8e6c301a004f857b7f95e0882679502f8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef2698fe37dccdc4449581e1b96c78b39e9d839d84cc7df3def8c0595ace1e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ef2698fe37dccdc4449581e1b96c78b39e9d839d84cc7df3def8c0595ace1e6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:17Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.570504 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56fa93-1e5d-4786-a935-dd3c1c945e91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:24Z\\\"
,\\\"message\\\":\\\"W0217 16:03:13.672722 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 16:03:13.673109 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771344193 cert, and key in /tmp/serving-cert-3777882028/serving-signer.crt, /tmp/serving-cert-3777882028/serving-signer.key\\\\nI0217 16:03:13.974653 1 observer_polling.go:159] Starting file observer\\\\nW0217 16:03:13.977062 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 16:03:13.977325 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:13.978495 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3777882028/tls.crt::/tmp/serving-cert-3777882028/tls.key\\\\\\\"\\\\nF0217 16:03:24.664050 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:17Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.594210 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1663fc7ac1e70a1b17c76b2a9f613520696b593126b2bb9cfdde5d68e431f511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:17Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.607719 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.607806 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.607827 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.607857 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.607877 4874 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:17Z","lastTransitionTime":"2026-02-17T16:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.613994 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:17Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.628694 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382ecb629c2e95661e2315080b5313ce9572468316149e358ac1410f73774047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:04:17Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.651906 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcec56b-03b2-401b-8a73-6d62f42ba22c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba16a1032820642fb3fc8c0a00654d5736d760d87ab6c9015d5649da3302cdca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hswwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:17Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.663533 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7xphw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54846556-797a-4e8d-ab51-aef5343b1fc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebb52d07a411f6900f9fef16186758fa5f9b44ed2c54944bf66220dd839b774d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7xphw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:17Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.684531 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7649be38-9f50-4cce-9d16-e7100627eca5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ff3c28ab17a456eb6c403402a76fcc427fe799f05577f1c66a286392e67763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53112886a189833decee24627e6fd183022c19f454a873f86a38fecdd4505f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a37784fd80aacfdc6328736f02030af697430a34939d05550652921ce4dbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://154bdfc094853c9edfdd65f09ef6767ce1dd6441ee7322b3dba35d115460373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://738bfe704918e39da4af889b57236556ee1f301bfa8327664b5d25601641d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:17Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.701838 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:17Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.710886 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.710923 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.710934 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 
16:04:17.710948 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.710959 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:17Z","lastTransitionTime":"2026-02-17T16:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.715712 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9eb703281270deafd626e5313a045812c23d81fc8adcab91226e656611483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e922b26792a90616c81196984c75e5245d1458b951ad5b4cd10d2db99b526bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:17Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.728539 4874 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d87243-c32f-4eb1-9049-24409fc6ea39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8c5dd3b54804a06909a56d1b152a702e8adbc38f14f0042b488c9529fc4eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56172a01d8a118c7b8ed0bfcef586738d47583c8b6127b67e6f4aaedeba141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cccdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:17Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.739660 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pm48m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"672da34f-1e37-4e2c-b467-b5ee40c4a31b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktn2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktn2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pm48m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:17Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:17 crc 
kubenswrapper[4874]: I0217 16:04:17.761095 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a4777a-2390-401b-86b0-87d298e9f883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ebf1c54fc6c7de8c5d16a7c068d4f2ab0bde28ae20bef38f727972e140c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31ebf1c54fc6c7de8c5d16a7c068d4f2ab0bde28ae20bef38f727972e140c05\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:04Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200766 6567 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200845 6567 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 16:04:04.200872 6567 
handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 16:04:04.200887 6567 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200914 6567 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 16:04:04.200931 6567 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 16:04:04.201178 6567 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200787 6567 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200898 6567 factory.go:656] Stopping watch factory\\\\nI0217 16:04:04.201400 6567 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.201753 6567 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:04:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-65qcw_openshift-ovn-kubernetes(10a4777a-2390-401b-86b0-87d298e9f883)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657
906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-65qcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:17Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.772052 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dr22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30e2d430-8c4b-4246-971e-6ba0ed8a0de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2f66644c63a0fa78e3c25736c83b6acb0d652342dc68bd9045ccc8f0b102e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kzt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a0a815a73e3091b5d95ae0c335f655f1bc9
2c59537ea72d499f28c84f175c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kzt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dr22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:17Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.813636 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.813673 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.813684 4874 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.813701 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.813713 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:17Z","lastTransitionTime":"2026-02-17T16:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.916297 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.916424 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.916506 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.916602 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.916694 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:17Z","lastTransitionTime":"2026-02-17T16:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.917752 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.917811 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.917826 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.917853 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.917867 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:17Z","lastTransitionTime":"2026-02-17T16:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:17 crc kubenswrapper[4874]: E0217 16:04:17.937436 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6be8f3a4-e6e3-4cf0-93a0-9444be233e11\\\",\\\"systemUUID\\\":\\\"496eb863-febf-403f-bc40-ce30c0c4d225\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:17Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.941576 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.941611 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.941622 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.941639 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.941651 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:17Z","lastTransitionTime":"2026-02-17T16:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:17 crc kubenswrapper[4874]: E0217 16:04:17.959914 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6be8f3a4-e6e3-4cf0-93a0-9444be233e11\\\",\\\"systemUUID\\\":\\\"496eb863-febf-403f-bc40-ce30c0c4d225\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:17Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.963570 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.963604 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.963631 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.963649 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.963661 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:17Z","lastTransitionTime":"2026-02-17T16:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.968412 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 05:35:43.179127305 +0000 UTC Feb 17 16:04:17 crc kubenswrapper[4874]: E0217 16:04:17.985494 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6be8f3a4-e6e3-4cf0-93a0-9444be233e11\\\",
\\\"systemUUID\\\":\\\"496eb863-febf-403f-bc40-ce30c0c4d225\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:17Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.989117 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.989185 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.989204 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.989232 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:17 crc kubenswrapper[4874]: I0217 16:04:17.989251 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:17Z","lastTransitionTime":"2026-02-17T16:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:18 crc kubenswrapper[4874]: E0217 16:04:18.004226 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6be8f3a4-e6e3-4cf0-93a0-9444be233e11\\\",\\\"systemUUID\\\":\\\"496eb863-febf-403f-bc40-ce30c0c4d225\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:18Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.015257 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.015309 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.015327 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.015353 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.015372 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:18Z","lastTransitionTime":"2026-02-17T16:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:18 crc kubenswrapper[4874]: E0217 16:04:18.038617 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6be8f3a4-e6e3-4cf0-93a0-9444be233e11\\\",\\\"systemUUID\\\":\\\"496eb863-febf-403f-bc40-ce30c0c4d225\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:18Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:18 crc kubenswrapper[4874]: E0217 16:04:18.038863 4874 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.040942 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.040998 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.041019 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.041048 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.041069 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:18Z","lastTransitionTime":"2026-02-17T16:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.145708 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.145761 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.145774 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.145792 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.145804 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:18Z","lastTransitionTime":"2026-02-17T16:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.248784 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.248844 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.248857 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.248880 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.248895 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:18Z","lastTransitionTime":"2026-02-17T16:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.352176 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.352238 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.352264 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.352293 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.352314 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:18Z","lastTransitionTime":"2026-02-17T16:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.455098 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.455140 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.455149 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.455169 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.455179 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:18Z","lastTransitionTime":"2026-02-17T16:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.556953 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.556988 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.556998 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.557011 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.557020 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:18Z","lastTransitionTime":"2026-02-17T16:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.659463 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.659516 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.659524 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.659539 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.659547 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:18Z","lastTransitionTime":"2026-02-17T16:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.763009 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.763053 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.763105 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.763122 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.763133 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:18Z","lastTransitionTime":"2026-02-17T16:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.865450 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.865489 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.865497 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.865512 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.865520 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:18Z","lastTransitionTime":"2026-02-17T16:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.967817 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.967862 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.967873 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.967889 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:18 crc kubenswrapper[4874]: I0217 16:04:18.967900 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:18Z","lastTransitionTime":"2026-02-17T16:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.005896 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 02:13:31.78209372 +0000 UTC Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.026958 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2vkxj_8aedd049-0029-44f7-869f-4a3ccdce8413/kube-multus/0.log" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.027019 4874 generic.go:334] "Generic (PLEG): container finished" podID="8aedd049-0029-44f7-869f-4a3ccdce8413" containerID="0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24" exitCode=1 Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.027056 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2vkxj" event={"ID":"8aedd049-0029-44f7-869f-4a3ccdce8413","Type":"ContainerDied","Data":"0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24"} Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.027441 4874 scope.go:117] "RemoveContainer" containerID="0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.053310 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a4777a-2390-401b-86b0-87d298e9f883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ebf1c54fc6c7de8c5d16a7c068d4f2ab0bde28ae20bef38f727972e140c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31ebf1c54fc6c7de8c5d16a7c068d4f2ab0bde28ae20bef38f727972e140c05\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:04Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200766 6567 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200845 6567 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 16:04:04.200872 6567 
handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 16:04:04.200887 6567 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200914 6567 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 16:04:04.200931 6567 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 16:04:04.201178 6567 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200787 6567 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200898 6567 factory.go:656] Stopping watch factory\\\\nI0217 16:04:04.201400 6567 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.201753 6567 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:04:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-65qcw_openshift-ovn-kubernetes(10a4777a-2390-401b-86b0-87d298e9f883)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657
906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-65qcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:19Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.069416 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dr22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30e2d430-8c4b-4246-971e-6ba0ed8a0de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2f66644c63a0fa78e3c25736c83b6acb0d652342dc68bd9045ccc8f0b102e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kzt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a0a815a73e3091b5d95ae0c335f655f1bc9
2c59537ea72d499f28c84f175c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kzt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dr22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:19Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.070516 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.070545 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.070553 4874 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.070566 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.070575 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:19Z","lastTransitionTime":"2026-02-17T16:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.087377 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:19Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.100828 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-j77hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17e6a08f-68c0-4b0a-a396-9dddcc726d37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ecf223725ef2b8a8bcb1d02065b33fbdbcd1bb76b10476594aac228e32321c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbrrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-j77hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:19Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.113038 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2vkxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aedd049-0029-44f7-869f-4a3ccdce8413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:18Z\\\",\\\"message\\\":\\\"2026-02-17T16:03:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8b66eb0c-fd4b-4053-aa41-3d6f6214c2ae\\\\n2026-02-17T16:03:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8b66eb0c-fd4b-4053-aa41-3d6f6214c2ae to /host/opt/cni/bin/\\\\n2026-02-17T16:03:33Z [verbose] multus-daemon started\\\\n2026-02-17T16:03:33Z [verbose] Readiness Indicator file check\\\\n2026-02-17T16:04:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2vkxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:19Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.125838 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd07de3-ac86-4e94-81f9-983586d43e3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6205655ec01ba9fb036b13027c9e66ae06974a7e66e08d81e67e68fefed03782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03cdd03d2129e6d94c8faed3b95a896c4dfe25c6b6f7e61d1027ae973c341f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb9f42ee9f217e826bc9595c94485be1259956fab34c45407b8e977d0e516eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\
"containerID\\\":\\\"cri-o://f7033e40c0b3cd86bccbe24550c8aae3cd925ab3a7577be6d71921494dbb7093\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:19Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.142389 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c66ff6e-e110-4154-8fdd-075e5c8c56a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2422e5203204daa5fcfbaa85ff563cacb69c9dd0a0dae355132cbe0fefe12a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e042242c2aeb7ad75c644052a8fd541837d1238dbd8919ffd9c9176d4d9deb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8c1ee2182853f3ba1a4fe6c34a8e6c301a004f857b7f95e0882679502f8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef2698fe37dccdc4449581e1b96c78b39e9d839d84cc7df3def8c0595ace1e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ef2698fe37dccdc4449581e1b96c78b39e9d839d84cc7df3def8c0595ace1e6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:19Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.157360 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56fa93-1e5d-4786-a935-dd3c1c945e91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:24Z\\\"
,\\\"message\\\":\\\"W0217 16:03:13.672722 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 16:03:13.673109 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771344193 cert, and key in /tmp/serving-cert-3777882028/serving-signer.crt, /tmp/serving-cert-3777882028/serving-signer.key\\\\nI0217 16:03:13.974653 1 observer_polling.go:159] Starting file observer\\\\nW0217 16:03:13.977062 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 16:03:13.977325 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:13.978495 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3777882028/tls.crt::/tmp/serving-cert-3777882028/tls.key\\\\\\\"\\\\nF0217 16:03:24.664050 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:19Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.170021 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1663fc7ac1e70a1b17c76b2a9f613520696b593126b2bb9cfdde5d68e431f511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:19Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.174110 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.174170 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.174181 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.174198 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.174212 4874 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:19Z","lastTransitionTime":"2026-02-17T16:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.188229 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7649be38-9f50-4cce-9d16-e7100627eca5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ff3c28ab17a456eb6c403402a76fcc427fe799f05577f1c66a286392e67763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53112886a189833decee24627e6fd183022c19f454a873f86a38fecdd4505f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a37784fd80aacfdc6328736f02030af697430a34939d05550652921ce4dbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://154bdfc094853c9edfdd65f09ef6767ce1dd6441ee7322b3dba35d115460373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://738bfe704918e39da4af889b57236556ee1f301bfa8327664b5d25601641d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:19Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.198585 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:19Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.211205 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9eb703281270deafd626e5313a045812c23d81fc8adcab91226e656611483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e922b26792a90616c81196984c75e5245d1458b951ad5b4cd10d2db99b526bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:19Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.222448 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d87243-c32f-4eb1-9049-24409fc6ea39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8c5dd3b54804a06909a56d1b152a702e8adbc38f14f0042b488c9529fc4eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56172a01d8a118c7b8ed0bfcef586738d47583
c8b6127b67e6f4aaedeba141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cccdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:19Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.235071 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:19Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.251557 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382ecb629c2e95661e2315080b5313ce9572468316149e358ac1410f73774047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:04:19Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.267546 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcec56b-03b2-401b-8a73-6d62f42ba22c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba16a1032820642fb3fc8c0a00654d5736d760d87ab6c9015d5649da3302cdca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hswwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:19Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.277225 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.277307 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.277322 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.277340 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.277352 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:19Z","lastTransitionTime":"2026-02-17T16:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.279501 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7xphw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54846556-797a-4e8d-ab51-aef5343b1fc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebb52d07a411f6900f9fef16186758fa5f9b44ed2c54944bf66220dd839b774d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7xphw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:19Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.292142 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pm48m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672da34f-1e37-4e2c-b467-b5ee40c4a31b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktn2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktn2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pm48m\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:19Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.379938 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.379980 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.379991 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.380008 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.380018 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:19Z","lastTransitionTime":"2026-02-17T16:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.456501 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:19 crc kubenswrapper[4874]: E0217 16:04:19.456627 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.456694 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:19 crc kubenswrapper[4874]: E0217 16:04:19.456736 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.456779 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:04:19 crc kubenswrapper[4874]: E0217 16:04:19.456846 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.456902 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:19 crc kubenswrapper[4874]: E0217 16:04:19.456956 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.482391 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.482428 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.482440 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.482456 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.482467 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:19Z","lastTransitionTime":"2026-02-17T16:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.585908 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.586009 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.586127 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.586174 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.586254 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:19Z","lastTransitionTime":"2026-02-17T16:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.689222 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.689328 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.689352 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.689407 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.689427 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:19Z","lastTransitionTime":"2026-02-17T16:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.791758 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.791803 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.791816 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.791881 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.791899 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:19Z","lastTransitionTime":"2026-02-17T16:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.893991 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.894018 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.894026 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.894039 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.894049 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:19Z","lastTransitionTime":"2026-02-17T16:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.996219 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.996243 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.996253 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.996264 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:19 crc kubenswrapper[4874]: I0217 16:04:19.996272 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:19Z","lastTransitionTime":"2026-02-17T16:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.006669 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 22:27:06.728243852 +0000 UTC Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.032824 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2vkxj_8aedd049-0029-44f7-869f-4a3ccdce8413/kube-multus/0.log" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.032930 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2vkxj" event={"ID":"8aedd049-0029-44f7-869f-4a3ccdce8413","Type":"ContainerStarted","Data":"00c27238b34dfe8b2f47d4716f4fe8df93d63cc4f915794920d20d1d3cd2c245"} Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.053682 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7649be38-9f50-4cce-9d16-e7100627eca5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ff3c28ab17a456eb6c403402a76fcc427fe799f05577f1c66a286392e67763\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53112886a189833decee24627e6fd183022c19f454a873f86a38fecdd4505f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a37784fd80aacfdc6328736f02030af697430a34939d05550652921ce4dbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://154bdfc094853c9edfdd65f09ef6767ce1dd6441ee7322b3dba35d115460373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://738bfe704918e39da4af889b57236556ee1f301bfa8327664b5d25601641d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:20Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.068553 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:20Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.082726 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9eb703281270deafd626e5313a045812c23d81fc8adcab91226e656611483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e922b26792a90616c81196984c75e5245d1458b951ad5b4cd10d2db99b526bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:20Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.094200 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d87243-c32f-4eb1-9049-24409fc6ea39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8c5dd3b54804a06909a56d1b152a702e8adbc38f14f0042b488c9529fc4eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56172a01d8a118c7b8ed0bfcef586738d47583
c8b6127b67e6f4aaedeba141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cccdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:20Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.099062 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.099132 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.099147 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:20 crc 
kubenswrapper[4874]: I0217 16:04:20.099171 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.099184 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:20Z","lastTransitionTime":"2026-02-17T16:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.106949 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:20Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.120154 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382ecb629c2e95661e2315080b5313ce9572468316149e358ac1410f73774047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:04:20Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.135196 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcec56b-03b2-401b-8a73-6d62f42ba22c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba16a1032820642fb3fc8c0a00654d5736d760d87ab6c9015d5649da3302cdca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hswwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:20Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.146295 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7xphw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54846556-797a-4e8d-ab51-aef5343b1fc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebb52d07a411f6900f9fef16186758fa5f9b44ed2c54944bf66220dd839b774d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7xphw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:20Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.156264 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pm48m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"672da34f-1e37-4e2c-b467-b5ee40c4a31b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktn2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktn2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pm48m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:20Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:20 crc 
kubenswrapper[4874]: I0217 16:04:20.187250 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a4777a-2390-401b-86b0-87d298e9f883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ebf1c54fc6c7de8c5d16a7c068d4f2ab0bde28ae20bef38f727972e140c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31ebf1c54fc6c7de8c5d16a7c068d4f2ab0bde28ae20bef38f727972e140c05\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:04Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200766 6567 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200845 6567 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 16:04:04.200872 6567 
handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 16:04:04.200887 6567 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200914 6567 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 16:04:04.200931 6567 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 16:04:04.201178 6567 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200787 6567 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200898 6567 factory.go:656] Stopping watch factory\\\\nI0217 16:04:04.201400 6567 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.201753 6567 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:04:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-65qcw_openshift-ovn-kubernetes(10a4777a-2390-401b-86b0-87d298e9f883)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657
906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-65qcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:20Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.200742 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dr22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30e2d430-8c4b-4246-971e-6ba0ed8a0de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2f66644c63a0fa78e3c25736c83b6acb0d652342dc68bd9045ccc8f0b102e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kzt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a0a815a73e3091b5d95ae0c335f655f1bc9
2c59537ea72d499f28c84f175c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kzt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dr22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:20Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.201872 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.201894 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.201902 4874 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.201916 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.201925 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:20Z","lastTransitionTime":"2026-02-17T16:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.213504 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:20Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.225331 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-j77hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17e6a08f-68c0-4b0a-a396-9dddcc726d37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ecf223725ef2b8a8bcb1d02065b33fbdbcd1bb76b10476594aac228e32321c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbrrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-j77hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:20Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.239691 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2vkxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aedd049-0029-44f7-869f-4a3ccdce8413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00c27238b34dfe8b2f47d4716f4fe8df93d63cc4f915794920d20d1d3cd2c245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:18Z\\\",\\\"message\\\":\\\"2026-02-17T16:03:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8b66eb0c-fd4b-4053-aa41-3d6f6214c2ae\\\\n2026-02-17T16:03:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8b66eb0c-fd4b-4053-aa41-3d6f6214c2ae to /host/opt/cni/bin/\\\\n2026-02-17T16:03:33Z [verbose] multus-daemon started\\\\n2026-02-17T16:03:33Z [verbose] Readiness Indicator file check\\\\n2026-02-17T16:04:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2vkxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:20Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.252338 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd07de3-ac86-4e94-81f9-983586d43e3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6205655ec01ba9fb036b13027c9e66ae06974a7e66e08d81e67e68fefed03782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03cdd03d2129e6d94c8faed3b95a896c4dfe25c6b6f7e61d1027ae973c341f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb9f42ee9f217e826bc9595c94485be1259956fab34c45407b8e977d0e516eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7033e40c0b3cd86bccbe24550c8aae3cd925ab3a7577be6d71921494dbb7093\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:20Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.263537 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c66ff6e-e110-4154-8fdd-075e5c8c56a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2422e5203204daa5fcfbaa85ff563cacb69c9dd0a0dae355132cbe0fefe12a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e042242c2aeb7ad75c644052a8fd541837d1238dbd8919ffd9c9176d4d9deb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8c1ee2182853f3ba1a4fe6c34a8e6c301a004f857b7f95e0882679502f8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef2698fe37dccdc4449581e1b96c78b39e9d839d84cc7df3def8c0595ace1e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ef2698fe37dccdc4449581e1b96c78b39e9d839d84cc7df3def8c0595ace1e6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:20Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.276320 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56fa93-1e5d-4786-a935-dd3c1c945e91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:24Z\\\"
,\\\"message\\\":\\\"W0217 16:03:13.672722 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 16:03:13.673109 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771344193 cert, and key in /tmp/serving-cert-3777882028/serving-signer.crt, /tmp/serving-cert-3777882028/serving-signer.key\\\\nI0217 16:03:13.974653 1 observer_polling.go:159] Starting file observer\\\\nW0217 16:03:13.977062 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 16:03:13.977325 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:13.978495 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3777882028/tls.crt::/tmp/serving-cert-3777882028/tls.key\\\\\\\"\\\\nF0217 16:03:24.664050 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:20Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.289596 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1663fc7ac1e70a1b17c76b2a9f613520696b593126b2bb9cfdde5d68e431f511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:20Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.304296 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.304314 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.304326 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.304339 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.304348 4874 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:20Z","lastTransitionTime":"2026-02-17T16:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.407189 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.407233 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.407246 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.407262 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.407275 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:20Z","lastTransitionTime":"2026-02-17T16:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.473540 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c66ff6e-e110-4154-8fdd-075e5c8c56a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2422e5203204daa5fcfbaa85ff563cacb69c9dd0a0dae355132cbe0fefe12a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e042242c2aeb7ad75c644052a8fd5
41837d1238dbd8919ffd9c9176d4d9deb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8c1ee2182853f3ba1a4fe6c34a8e6c301a004f857b7f95e0882679502f8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef2698fe37dccdc4449581e1b96c78b39e9d839d84cc7df3def8c0595ace1e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef2698fe37dccdc4449581e1b96c78b39e9d839d84cc7df3def8c0595ace1e6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:20Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.487843 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56fa93-1e5d-4786-a935-dd3c1c945e91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:24Z\\\"
,\\\"message\\\":\\\"W0217 16:03:13.672722 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 16:03:13.673109 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771344193 cert, and key in /tmp/serving-cert-3777882028/serving-signer.crt, /tmp/serving-cert-3777882028/serving-signer.key\\\\nI0217 16:03:13.974653 1 observer_polling.go:159] Starting file observer\\\\nW0217 16:03:13.977062 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 16:03:13.977325 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:13.978495 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3777882028/tls.crt::/tmp/serving-cert-3777882028/tls.key\\\\\\\"\\\\nF0217 16:03:24.664050 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:20Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.501059 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1663fc7ac1e70a1b17c76b2a9f613520696b593126b2bb9cfdde5d68e431f511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:20Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.509424 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.509473 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.509489 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.509511 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.509527 4874 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:20Z","lastTransitionTime":"2026-02-17T16:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.513431 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd07de3-ac86-4e94-81f9-983586d43e3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6205655ec01ba9fb036b13027c9e66ae06974a7e66e08d81e67e68fefed03782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03cdd03d2129e6d94c8faed3b95a896c4dfe25c6b6f7e61d1027ae973c341f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb9f42ee9f217e826bc9595c94485be1259956fab34c45407b8e977d0e516eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://f7033e40c0b3cd86bccbe24550c8aae3cd925ab3a7577be6d71921494dbb7093\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:20Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.523353 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:20Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.532428 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9eb703281270deafd626e5313a045812c23d81fc8adcab91226e656611483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e922b26792a90616c81196984c75e5245d1458b951ad5b4cd10d2db99b526bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:20Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.540527 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d87243-c32f-4eb1-9049-24409fc6ea39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8c5dd3b54804a06909a56d1b152a702e8adbc38f14f0042b488c9529fc4eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56172a01d8a118c7b8ed0bfcef586738d47583
c8b6127b67e6f4aaedeba141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cccdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:20Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.549571 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:20Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.558049 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382ecb629c2e95661e2315080b5313ce9572468316149e358ac1410f73774047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:04:20Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.569822 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcec56b-03b2-401b-8a73-6d62f42ba22c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba16a1032820642fb3fc8c0a00654d5736d760d87ab6c9015d5649da3302cdca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hswwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:20Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.579407 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7xphw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54846556-797a-4e8d-ab51-aef5343b1fc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebb52d07a411f6900f9fef16186758fa5f9b44ed2c54944bf66220dd839b774d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7xphw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:20Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.595705 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7649be38-9f50-4cce-9d16-e7100627eca5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ff3c28ab17a456eb6c403402a76fcc427fe799f05577f1c66a286392e67763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53112886a189833decee24627e6fd183022c19f454a873f86a38fecdd4505f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a37784fd80aacfdc6328736f02030af697430a34939d05550652921ce4dbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://154bdfc094853c9edfdd65f09ef6767ce1dd6441ee7322b3dba35d115460373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://738bfe704918e39da4af889b57236556ee1f301bfa8327664b5d25601641d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:20Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.604567 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pm48m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672da34f-1e37-4e2c-b467-b5ee40c4a31b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktn2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktn2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pm48m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:20Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:20 crc 
kubenswrapper[4874]: I0217 16:04:20.613690 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.613745 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.613759 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.613785 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.613797 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:20Z","lastTransitionTime":"2026-02-17T16:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.619849 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a4777a-2390-401b-86b0-87d298e9f883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ebf1c54fc6c7de8c5d16a7c068d4f2ab0bde28ae20bef38f727972e140c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31ebf1c54fc6c7de8c5d16a7c068d4f2ab0bde28ae20bef38f727972e140c05\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:04Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200766 6567 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200845 6567 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 16:04:04.200872 6567 
handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 16:04:04.200887 6567 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200914 6567 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 16:04:04.200931 6567 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 16:04:04.201178 6567 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200787 6567 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200898 6567 factory.go:656] Stopping watch factory\\\\nI0217 16:04:04.201400 6567 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.201753 6567 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:04:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-65qcw_openshift-ovn-kubernetes(10a4777a-2390-401b-86b0-87d298e9f883)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657
906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-65qcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:20Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.628572 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dr22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30e2d430-8c4b-4246-971e-6ba0ed8a0de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2f66644c63a0fa78e3c25736c83b6acb0d652342dc68bd9045ccc8f0b102e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kzt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a0a815a73e3091b5d95ae0c335f655f1bc9
2c59537ea72d499f28c84f175c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kzt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dr22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:20Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.636320 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-j77hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17e6a08f-68c0-4b0a-a396-9dddcc726d37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ecf223725ef2b8a8bcb1d02065b33fbdbcd1bb76b10476594aac228e32321c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbrrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-j77hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:20Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.646010 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2vkxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aedd049-0029-44f7-869f-4a3ccdce8413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00c27238b34dfe8b2f47d4716f4fe8df93d63cc4f915794920d20d1d3cd2c245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:18Z\\\",\\\"message\\\":\\\"2026-02-17T16:03:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8b66eb0c-fd4b-4053-aa41-3d6f6214c2ae\\\\n2026-02-17T16:03:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8b66eb0c-fd4b-4053-aa41-3d6f6214c2ae to /host/opt/cni/bin/\\\\n2026-02-17T16:03:33Z [verbose] multus-daemon started\\\\n2026-02-17T16:03:33Z [verbose] Readiness Indicator file check\\\\n2026-02-17T16:04:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2vkxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:20Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.655642 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:20Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.717139 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.717228 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.717254 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.717290 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.717314 4874 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:20Z","lastTransitionTime":"2026-02-17T16:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.820269 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.820510 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.820602 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.820683 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.820775 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:20Z","lastTransitionTime":"2026-02-17T16:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.923266 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.923330 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.923341 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.923363 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:20 crc kubenswrapper[4874]: I0217 16:04:20.923376 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:20Z","lastTransitionTime":"2026-02-17T16:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.007453 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 18:08:16.798671338 +0000 UTC Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.025678 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.025706 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.025717 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.025733 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.025744 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:21Z","lastTransitionTime":"2026-02-17T16:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.127650 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.127684 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.127695 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.127711 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.127723 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:21Z","lastTransitionTime":"2026-02-17T16:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.230522 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.230576 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.230587 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.230604 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.230617 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:21Z","lastTransitionTime":"2026-02-17T16:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.333523 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.333567 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.333576 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.333590 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.333599 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:21Z","lastTransitionTime":"2026-02-17T16:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.435574 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.435615 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.435623 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.435635 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.435644 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:21Z","lastTransitionTime":"2026-02-17T16:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.456865 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.456897 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.456940 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.456954 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:04:21 crc kubenswrapper[4874]: E0217 16:04:21.457043 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:21 crc kubenswrapper[4874]: E0217 16:04:21.457215 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:21 crc kubenswrapper[4874]: E0217 16:04:21.457312 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:04:21 crc kubenswrapper[4874]: E0217 16:04:21.457411 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.537636 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.537694 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.537711 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.537731 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.537749 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:21Z","lastTransitionTime":"2026-02-17T16:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.639938 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.639987 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.639999 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.640016 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.640028 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:21Z","lastTransitionTime":"2026-02-17T16:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.742726 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.742947 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.742958 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.742973 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.742982 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:21Z","lastTransitionTime":"2026-02-17T16:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.844960 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.845023 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.845040 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.845064 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.845103 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:21Z","lastTransitionTime":"2026-02-17T16:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.947775 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.947818 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.947829 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.947845 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:21 crc kubenswrapper[4874]: I0217 16:04:21.947858 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:21Z","lastTransitionTime":"2026-02-17T16:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.008537 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 05:27:08.200440768 +0000 UTC Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.051048 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.051140 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.051159 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.051183 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.051201 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:22Z","lastTransitionTime":"2026-02-17T16:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.154341 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.154390 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.154406 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.154426 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.154440 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:22Z","lastTransitionTime":"2026-02-17T16:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.256781 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.256836 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.256852 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.256873 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.256889 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:22Z","lastTransitionTime":"2026-02-17T16:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.359821 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.359879 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.359896 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.359921 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.359941 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:22Z","lastTransitionTime":"2026-02-17T16:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.462881 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.462940 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.462965 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.462993 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.463017 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:22Z","lastTransitionTime":"2026-02-17T16:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.565999 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.566155 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.566174 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.566204 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.566221 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:22Z","lastTransitionTime":"2026-02-17T16:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.668805 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.668837 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.668845 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.668857 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.668866 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:22Z","lastTransitionTime":"2026-02-17T16:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.770596 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.770621 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.770629 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.770640 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.770649 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:22Z","lastTransitionTime":"2026-02-17T16:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.872880 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.872922 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.872934 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.872948 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.872958 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:22Z","lastTransitionTime":"2026-02-17T16:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.975365 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.975401 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.975409 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.975424 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:22 crc kubenswrapper[4874]: I0217 16:04:22.975434 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:22Z","lastTransitionTime":"2026-02-17T16:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.009190 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 06:38:14.436933983 +0000 UTC Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.077600 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.077635 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.077644 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.077657 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.077667 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:23Z","lastTransitionTime":"2026-02-17T16:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.179832 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.179910 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.179931 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.179962 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.179984 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:23Z","lastTransitionTime":"2026-02-17T16:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.282413 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.282457 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.282468 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.282483 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.282493 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:23Z","lastTransitionTime":"2026-02-17T16:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.384785 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.384831 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.384845 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.384861 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.384873 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:23Z","lastTransitionTime":"2026-02-17T16:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.456536 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.456579 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.456566 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.456547 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:04:23 crc kubenswrapper[4874]: E0217 16:04:23.456720 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:23 crc kubenswrapper[4874]: E0217 16:04:23.456819 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:04:23 crc kubenswrapper[4874]: E0217 16:04:23.456898 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:23 crc kubenswrapper[4874]: E0217 16:04:23.456985 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.486783 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.486811 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.486821 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.486837 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.486847 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:23Z","lastTransitionTime":"2026-02-17T16:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.589173 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.589226 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.589236 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.589253 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.589271 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:23Z","lastTransitionTime":"2026-02-17T16:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.691366 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.691413 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.691426 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.691443 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.691455 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:23Z","lastTransitionTime":"2026-02-17T16:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.793248 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.793288 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.793297 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.793313 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.793322 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:23Z","lastTransitionTime":"2026-02-17T16:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.895626 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.895681 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.895698 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.895719 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:23 crc kubenswrapper[4874]: I0217 16:04:23.895734 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:23Z","lastTransitionTime":"2026-02-17T16:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.010334 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 02:07:48.170531182 +0000 UTC Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.024723 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.024759 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.024771 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.024787 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.024797 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:24Z","lastTransitionTime":"2026-02-17T16:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.126333 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.126375 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.126386 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.126403 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.126413 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:24Z","lastTransitionTime":"2026-02-17T16:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.228535 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.228570 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.228578 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.228591 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.228600 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:24Z","lastTransitionTime":"2026-02-17T16:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.331039 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.331132 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.331154 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.331174 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.331192 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:24Z","lastTransitionTime":"2026-02-17T16:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.433458 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.433493 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.433502 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.433537 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.433548 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:24Z","lastTransitionTime":"2026-02-17T16:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.535793 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.535851 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.535868 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.535897 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.535915 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:24Z","lastTransitionTime":"2026-02-17T16:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.638919 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.638969 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.638985 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.639006 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.639022 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:24Z","lastTransitionTime":"2026-02-17T16:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.741068 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.741134 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.741144 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.741160 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.741172 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:24Z","lastTransitionTime":"2026-02-17T16:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.843286 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.843318 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.843328 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.843345 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.843354 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:24Z","lastTransitionTime":"2026-02-17T16:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.945512 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.945585 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.945608 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.945642 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:24 crc kubenswrapper[4874]: I0217 16:04:24.945666 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:24Z","lastTransitionTime":"2026-02-17T16:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.010959 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 16:47:36.982667859 +0000 UTC Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.049400 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.049450 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.049468 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.049492 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.049509 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:25Z","lastTransitionTime":"2026-02-17T16:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.151454 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.151482 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.151491 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.151506 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.151515 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:25Z","lastTransitionTime":"2026-02-17T16:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.253967 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.254002 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.254012 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.254051 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.254069 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:25Z","lastTransitionTime":"2026-02-17T16:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.355640 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.355673 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.355681 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.355696 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.355705 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:25Z","lastTransitionTime":"2026-02-17T16:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.456565 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:04:25 crc kubenswrapper[4874]: E0217 16:04:25.456657 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.456574 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.456564 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:25 crc kubenswrapper[4874]: E0217 16:04:25.456845 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.456805 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:25 crc kubenswrapper[4874]: E0217 16:04:25.456926 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:25 crc kubenswrapper[4874]: E0217 16:04:25.456967 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.458286 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.458351 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.458374 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.458402 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.458424 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:25Z","lastTransitionTime":"2026-02-17T16:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.561060 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.561174 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.561198 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.561227 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.561249 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:25Z","lastTransitionTime":"2026-02-17T16:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.663582 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.663655 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.663679 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.663727 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.663753 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:25Z","lastTransitionTime":"2026-02-17T16:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.766066 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.766136 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.766153 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.766173 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.766189 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:25Z","lastTransitionTime":"2026-02-17T16:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.867833 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.868053 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.868062 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.868097 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.868106 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:25Z","lastTransitionTime":"2026-02-17T16:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.969840 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.969870 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.969878 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.969889 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:25 crc kubenswrapper[4874]: I0217 16:04:25.969898 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:25Z","lastTransitionTime":"2026-02-17T16:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.011220 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 02:57:21.459336368 +0000 UTC Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.072257 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.072286 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.072295 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.072308 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.072317 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:26Z","lastTransitionTime":"2026-02-17T16:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.174682 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.174719 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.174727 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.174739 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.174748 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:26Z","lastTransitionTime":"2026-02-17T16:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.277282 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.277324 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.277332 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.277345 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.277355 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:26Z","lastTransitionTime":"2026-02-17T16:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.379697 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.379774 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.379788 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.379805 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.379817 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:26Z","lastTransitionTime":"2026-02-17T16:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.473510 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.483280 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.483338 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.483356 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.483379 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.483396 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:26Z","lastTransitionTime":"2026-02-17T16:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.585459 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.585523 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.585540 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.585563 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.585581 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:26Z","lastTransitionTime":"2026-02-17T16:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.688287 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.688349 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.688367 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.688385 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.688397 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:26Z","lastTransitionTime":"2026-02-17T16:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.791515 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.791582 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.791599 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.791622 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.791641 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:26Z","lastTransitionTime":"2026-02-17T16:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.894406 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.894448 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.894457 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.894471 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.894481 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:26Z","lastTransitionTime":"2026-02-17T16:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.997689 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.997771 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.997789 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.997813 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:26 crc kubenswrapper[4874]: I0217 16:04:26.997831 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:26Z","lastTransitionTime":"2026-02-17T16:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.011929 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 18:51:59.120901892 +0000 UTC Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.100617 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.100716 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.100727 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.100748 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.100759 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:27Z","lastTransitionTime":"2026-02-17T16:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.204039 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.204116 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.204138 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.204163 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.204182 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:27Z","lastTransitionTime":"2026-02-17T16:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.307529 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.307577 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.307592 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.307624 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.307642 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:27Z","lastTransitionTime":"2026-02-17T16:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.411290 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.411392 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.411414 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.411445 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.411470 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:27Z","lastTransitionTime":"2026-02-17T16:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.456736 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.456836 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.456762 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.456831 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:27 crc kubenswrapper[4874]: E0217 16:04:27.456970 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:04:27 crc kubenswrapper[4874]: E0217 16:04:27.457147 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:27 crc kubenswrapper[4874]: E0217 16:04:27.457301 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:27 crc kubenswrapper[4874]: E0217 16:04:27.457408 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.514920 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.515001 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.515020 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.515043 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.515061 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:27Z","lastTransitionTime":"2026-02-17T16:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.618337 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.618393 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.618409 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.618433 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.618450 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:27Z","lastTransitionTime":"2026-02-17T16:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.721582 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.721626 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.721638 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.721654 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.721678 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:27Z","lastTransitionTime":"2026-02-17T16:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.824326 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.824392 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.824405 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.824429 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.824448 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:27Z","lastTransitionTime":"2026-02-17T16:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.927621 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.927672 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.927687 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.927706 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:27 crc kubenswrapper[4874]: I0217 16:04:27.927718 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:27Z","lastTransitionTime":"2026-02-17T16:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.012609 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 05:19:48.715148349 +0000 UTC Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.031254 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.031333 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.031354 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.031443 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.031470 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:28Z","lastTransitionTime":"2026-02-17T16:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.066404 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.066469 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.066488 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.066514 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.066536 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:28Z","lastTransitionTime":"2026-02-17T16:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:28 crc kubenswrapper[4874]: E0217 16:04:28.086014 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6be8f3a4-e6e3-4cf0-93a0-9444be233e11\\\",\\\"systemUUID\\\":\\\"496eb863-febf-403f-bc40-ce30c0c4d225\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:28Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.091370 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.091430 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.091446 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.091469 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.091486 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:28Z","lastTransitionTime":"2026-02-17T16:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:28 crc kubenswrapper[4874]: E0217 16:04:28.111660 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6be8f3a4-e6e3-4cf0-93a0-9444be233e11\\\",\\\"systemUUID\\\":\\\"496eb863-febf-403f-bc40-ce30c0c4d225\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:28Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.115920 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.115975 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.115991 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.116015 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.116032 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:28Z","lastTransitionTime":"2026-02-17T16:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:28 crc kubenswrapper[4874]: E0217 16:04:28.132555 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6be8f3a4-e6e3-4cf0-93a0-9444be233e11\\\",\\\"systemUUID\\\":\\\"496eb863-febf-403f-bc40-ce30c0c4d225\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:28Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.136811 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.136864 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.136885 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.136908 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.136928 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:28Z","lastTransitionTime":"2026-02-17T16:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.157052 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.157170 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.157188 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.157212 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.157231 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:28Z","lastTransitionTime":"2026-02-17T16:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} 
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6be8f3a4-e6e3-4cf0-93a0-9444be233e11\\\",\\\"systemUUID\\\":\\\"496eb863-febf-403f-bc40-ce30c0c4d225\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:28Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:28 crc kubenswrapper[4874]: E0217 16:04:28.174341 4874 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.176369 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.176424 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.176448 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.176479 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.176501 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:28Z","lastTransitionTime":"2026-02-17T16:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.279877 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.279945 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.279968 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.279996 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.280018 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:28Z","lastTransitionTime":"2026-02-17T16:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.387368 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.387482 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.387504 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.387531 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.387558 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:28Z","lastTransitionTime":"2026-02-17T16:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.457689 4874 scope.go:117] "RemoveContainer" containerID="e31ebf1c54fc6c7de8c5d16a7c068d4f2ab0bde28ae20bef38f727972e140c05" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.498527 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.498578 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.498597 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.498621 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.498638 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:28Z","lastTransitionTime":"2026-02-17T16:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.604001 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.604155 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.604177 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.604204 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.604224 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:28Z","lastTransitionTime":"2026-02-17T16:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.706295 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.706339 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.706350 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.706366 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.706377 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:28Z","lastTransitionTime":"2026-02-17T16:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.810376 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.810425 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.810442 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.810463 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.810480 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:28Z","lastTransitionTime":"2026-02-17T16:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.914101 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.914193 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.914217 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.914639 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:28 crc kubenswrapper[4874]: I0217 16:04:28.914686 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:28Z","lastTransitionTime":"2026-02-17T16:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.013189 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 21:11:21.495550275 +0000 UTC Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.017180 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.017253 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.017270 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.017295 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.017311 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:29Z","lastTransitionTime":"2026-02-17T16:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.065153 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-65qcw_10a4777a-2390-401b-86b0-87d298e9f883/ovnkube-controller/2.log" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.069109 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" event={"ID":"10a4777a-2390-401b-86b0-87d298e9f883","Type":"ContainerStarted","Data":"82e707b09de72a6c64c252760bab0973083f616379f68e889e9c2084a91c83eb"} Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.069614 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.085718 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:29Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.119880 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.119918 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.119932 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 
16:04:29.119948 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.119959 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:29Z","lastTransitionTime":"2026-02-17T16:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.130878 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9eb703281270deafd626e5313a045812c23d81fc8adcab91226e656611483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e922b26792a90616c81196984c75e5245d1458b951ad5b4cd10d2db99b526bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:29Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.150547 4874 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d87243-c32f-4eb1-9049-24409fc6ea39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8c5dd3b54804a06909a56d1b152a702e8adbc38f14f0042b488c9529fc4eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56172a01d8a118c7b8ed0bfcef586738d47583c8b6127b67e6f4aaedeba141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cccdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:29Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.172453 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:29Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.191231 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382ecb629c2e95661e2315080b5313ce9572468316149e358ac1410f73774047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:04:29Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.209208 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcec56b-03b2-401b-8a73-6d62f42ba22c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba16a1032820642fb3fc8c0a00654d5736d760d87ab6c9015d5649da3302cdca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hswwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:29Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.221142 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7xphw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54846556-797a-4e8d-ab51-aef5343b1fc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebb52d07a411f6900f9fef16186758fa5f9b44ed2c54944bf66220dd839b774d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7xphw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:29Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.223473 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.223523 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.223534 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.223554 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" 
Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.223569 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:29Z","lastTransitionTime":"2026-02-17T16:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.241042 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7649be38-9f50-4cce-9d16-e7100627eca5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ff3c28ab17a456eb6c403402a76fcc427fe799f05577f1c66a286392e67763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etc
d\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53112886a189833decee24627e6fd183022c19f454a873f86a38fecdd4505f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a37784fd80aacfdc6328736f02030af697430a34939d05550652921ce4dbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://154bdfc094853c9edfdd65f09ef6767ce1dd6441ee7322b3dba35d115460373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://738bfe704918e39da4af889b57236556ee1f301bfa8327664b5d25601641d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initCont
ainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:29Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.251172 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pm48m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"672da34f-1e37-4e2c-b467-b5ee40c4a31b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktn2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktn2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pm48m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:29Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:29 crc 
kubenswrapper[4874]: I0217 16:04:29.267635 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a4777a-2390-401b-86b0-87d298e9f883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e707b09de72a6c64c252760bab0973083f616379f68e889e9c2084a91c83eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31ebf1c54fc6c7de8c5d16a7c068d4f2ab0bde28ae20bef38f727972e140c05\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:04Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200766 6567 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200845 6567 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 16:04:04.200872 6567 
handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 16:04:04.200887 6567 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200914 6567 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 16:04:04.200931 6567 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 16:04:04.201178 6567 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200787 6567 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200898 6567 factory.go:656] Stopping watch factory\\\\nI0217 16:04:04.201400 6567 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.201753 6567 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:04:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnl
y\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-65qcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:29Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.279582 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dr22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30e2d430-8c4b-4246-971e-6ba0ed8a0de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2f66644c63a0fa78e3c25736c83b6acb0d652342dc68bd9045ccc8f0b102e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kzt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a0a815a73e3091b5d95ae0c335f655f1bc92c59537ea72d499f28c84f175c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kzt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dr22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:29Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:29 crc 
kubenswrapper[4874]: I0217 16:04:29.289522 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe15a814-ad3e-42e0-b991-3f30ed1ef47f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff60094f799686e9f12b1d1205a7ef68133f64cbb1c81000a87bc68ead3ab93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://b87e38c74da2293111d9768a94bd8916df751cf5817d8bc968ee6d50df071711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87e38c74da2293111d9768a94bd8916df751cf5817d8bc968ee6d50df071711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:29Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.300241 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-j77hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17e6a08f-68c0-4b0a-a396-9dddcc726d37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ecf223725ef2b8a8bcb1d02065b33fbdbcd1bb76b10476594aac228e32321c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbrrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-j77hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:29Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.312541 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2vkxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aedd049-0029-44f7-869f-4a3ccdce8413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00c27238b34dfe8b2f47d4716f4fe8df93d63cc4f915794920d20d1d3cd2c245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:18Z\\\",\\\"message\\\":\\\"2026-02-17T16:03:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8b66eb0c-fd4b-4053-aa41-3d6f6214c2ae\\\\n2026-02-17T16:03:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8b66eb0c-fd4b-4053-aa41-3d6f6214c2ae to /host/opt/cni/bin/\\\\n2026-02-17T16:03:33Z [verbose] multus-daemon started\\\\n2026-02-17T16:03:33Z [verbose] Readiness Indicator file check\\\\n2026-02-17T16:04:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2vkxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:29Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.324449 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:29Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.326167 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.326218 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.326230 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.326252 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.326268 4874 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:29Z","lastTransitionTime":"2026-02-17T16:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.336571 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c66ff6e-e110-4154-8fdd-075e5c8c56a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2422e5203204daa5fcfbaa85ff563cacb69c9dd0a0dae355132cbe0fefe12a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e042242c2aeb7ad75c644052a8fd541837d1238dbd8919ffd9c9176d4d9deb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8c1ee2182853f3ba1a4fe6c34a8e6c301a004f857b7f95e0882679502f8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef2698fe37dccdc4449581e1b96c78b39e9d839d84cc7df3def8c0595ace1e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef2698fe37dccdc4449581e1b96c78b39e9d839d84cc7df3def8c0595ace1e6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:29Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.349709 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56fa93-1e5d-4786-a935-dd3c1c945e91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:24Z\\\"
,\\\"message\\\":\\\"W0217 16:03:13.672722 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 16:03:13.673109 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771344193 cert, and key in /tmp/serving-cert-3777882028/serving-signer.crt, /tmp/serving-cert-3777882028/serving-signer.key\\\\nI0217 16:03:13.974653 1 observer_polling.go:159] Starting file observer\\\\nW0217 16:03:13.977062 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 16:03:13.977325 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:13.978495 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3777882028/tls.crt::/tmp/serving-cert-3777882028/tls.key\\\\\\\"\\\\nF0217 16:03:24.664050 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:29Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.367143 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1663fc7ac1e70a1b17c76b2a9f613520696b593126b2bb9cfdde5d68e431f511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:29Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.384987 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd07de3-ac86-4e94-81f9-983586d43e3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6205655ec01ba9fb036b13027c9e66ae06974a7e66e08d81e67e68fefed03782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03cdd03d2129e6d94c8faed3b95a896c4dfe25c6b6f7e61d1027ae973c341f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb9f42ee9f217e826bc9595c94485be1259956fab34c45407b8e977d0e516eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7033e40c0b3cd86bccbe24550c8aae3cd925ab3a7577be6d71921494dbb7093\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:29Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.429859 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.429909 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.429920 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.429941 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.429955 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:29Z","lastTransitionTime":"2026-02-17T16:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.456810 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.456844 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.456850 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.456894 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:29 crc kubenswrapper[4874]: E0217 16:04:29.457016 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:04:29 crc kubenswrapper[4874]: E0217 16:04:29.457140 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:29 crc kubenswrapper[4874]: E0217 16:04:29.457241 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:29 crc kubenswrapper[4874]: E0217 16:04:29.457347 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.532941 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.532974 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.532982 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.532998 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.533008 4874 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:29Z","lastTransitionTime":"2026-02-17T16:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.636306 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.636346 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.636354 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.636370 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.636381 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:29Z","lastTransitionTime":"2026-02-17T16:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.744619 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.744678 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.744689 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.744706 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.744718 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:29Z","lastTransitionTime":"2026-02-17T16:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.847837 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.847898 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.847916 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.847941 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.847958 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:29Z","lastTransitionTime":"2026-02-17T16:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.951765 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.951824 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.951840 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.951863 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:29 crc kubenswrapper[4874]: I0217 16:04:29.951881 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:29Z","lastTransitionTime":"2026-02-17T16:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.014177 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 17:24:47.360112584 +0000 UTC Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.055276 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.055324 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.055341 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.055364 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.055381 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:30Z","lastTransitionTime":"2026-02-17T16:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.087342 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-65qcw_10a4777a-2390-401b-86b0-87d298e9f883/ovnkube-controller/3.log" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.088433 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-65qcw_10a4777a-2390-401b-86b0-87d298e9f883/ovnkube-controller/2.log" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.091967 4874 generic.go:334] "Generic (PLEG): container finished" podID="10a4777a-2390-401b-86b0-87d298e9f883" containerID="82e707b09de72a6c64c252760bab0973083f616379f68e889e9c2084a91c83eb" exitCode=1 Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.092036 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" event={"ID":"10a4777a-2390-401b-86b0-87d298e9f883","Type":"ContainerDied","Data":"82e707b09de72a6c64c252760bab0973083f616379f68e889e9c2084a91c83eb"} Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.092155 4874 scope.go:117] "RemoveContainer" containerID="e31ebf1c54fc6c7de8c5d16a7c068d4f2ab0bde28ae20bef38f727972e140c05" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.093052 4874 scope.go:117] "RemoveContainer" containerID="82e707b09de72a6c64c252760bab0973083f616379f68e889e9c2084a91c83eb" Feb 17 16:04:30 crc kubenswrapper[4874]: E0217 16:04:30.093354 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-65qcw_openshift-ovn-kubernetes(10a4777a-2390-401b-86b0-87d298e9f883)\"" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" podUID="10a4777a-2390-401b-86b0-87d298e9f883" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.115983 4874 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.133046 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-j77hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17e6a08f-68c0-4b0a-a396-9dddcc726d37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ecf223725ef2b8a8bcb1d02065b33fbdbcd1bb76b10476594aac228e32321c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbrrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-j77hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.155918 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2vkxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aedd049-0029-44f7-869f-4a3ccdce8413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00c27238b34dfe8b2f47d4716f4fe8df93d63cc4f915794920d20d1d3cd2c245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:18Z\\\",\\\"message\\\":\\\"2026-02-17T16:03:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8b66eb0c-fd4b-4053-aa41-3d6f6214c2ae\\\\n2026-02-17T16:03:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8b66eb0c-fd4b-4053-aa41-3d6f6214c2ae to /host/opt/cni/bin/\\\\n2026-02-17T16:03:33Z [verbose] multus-daemon started\\\\n2026-02-17T16:03:33Z [verbose] Readiness Indicator file check\\\\n2026-02-17T16:04:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2vkxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.157727 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.157779 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.157797 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 
16:04:30.157819 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.157836 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:30Z","lastTransitionTime":"2026-02-17T16:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.178552 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd07de3-ac86-4e94-81f9-983586d43e3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6205655ec01ba9fb036b13027c9e66ae06974a7e66e08d81e67e68fefed03782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03cdd03d2129e6d94c8faed3b95a896c4dfe25c6b6f7e61d1027ae973c341f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb9f42ee9f217e826bc9595c94485be1259956fab34c45407b8e977d0e516eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"
}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7033e40c0b3cd86bccbe24550c8aae3cd925ab3a7577be6d71921494dbb7093\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.199241 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c66ff6e-e110-4154-8fdd-075e5c8c56a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2422e5203204daa5fcfbaa85ff563cacb69c9dd0a0dae355132cbe0fefe12a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e042242c2aeb7ad75c644052a8fd541837d1238dbd8919ffd9c9176d4d9deb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8c1ee2182853f3ba1a4fe6c34a8e6c301a004f857b7f95e0882679502f8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef2698fe37dccdc4449581e1b96c78b39e9d839d84cc7df3def8c0595ace1e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ef2698fe37dccdc4449581e1b96c78b39e9d839d84cc7df3def8c0595ace1e6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.224369 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56fa93-1e5d-4786-a935-dd3c1c945e91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:24Z\\\"
,\\\"message\\\":\\\"W0217 16:03:13.672722 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 16:03:13.673109 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771344193 cert, and key in /tmp/serving-cert-3777882028/serving-signer.crt, /tmp/serving-cert-3777882028/serving-signer.key\\\\nI0217 16:03:13.974653 1 observer_polling.go:159] Starting file observer\\\\nW0217 16:03:13.977062 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 16:03:13.977325 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:13.978495 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3777882028/tls.crt::/tmp/serving-cert-3777882028/tls.key\\\\\\\"\\\\nF0217 16:03:24.664050 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.246725 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1663fc7ac1e70a1b17c76b2a9f613520696b593126b2bb9cfdde5d68e431f511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.261858 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.261914 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.261934 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.261962 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.261984 4874 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:30Z","lastTransitionTime":"2026-02-17T16:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.280249 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7649be38-9f50-4cce-9d16-e7100627eca5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ff3c28ab17a456eb6c403402a76fcc427fe799f05577f1c66a286392e67763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53112886a189833decee24627e6fd183022c19f454a873f86a38fecdd4505f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a37784fd80aacfdc6328736f02030af697430a34939d05550652921ce4dbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://154bdfc094853c9edfdd65f09ef6767ce1dd6441ee7322b3dba35d115460373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://738bfe704918e39da4af889b57236556ee1f301bfa8327664b5d25601641d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.299986 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.321783 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9eb703281270deafd626e5313a045812c23d81fc8adcab91226e656611483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e922b26792a90616c81196984c75e5245d1458b951ad5b4cd10d2db99b526bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.345891 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d87243-c32f-4eb1-9049-24409fc6ea39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8c5dd3b54804a06909a56d1b152a702e8adbc38f14f0042b488c9529fc4eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56172a01d8a118c7b8ed0bfcef586738d47583
c8b6127b67e6f4aaedeba141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cccdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.365761 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.366304 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.366357 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.366371 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.366391 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.366409 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:30Z","lastTransitionTime":"2026-02-17T16:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.381821 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382ecb629c2e95661e2315080b5313ce9572468316149e358ac1410f73774047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.404641 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcec56b-03b2-401b-8a73-6d62f42ba22c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba16a1032820642fb3fc8c0a00654d5736d760d87ab6c9015d5649da3302cdca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-
17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hswwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.421474 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7xphw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54846556-797a-4e8d-ab51-aef5343b1fc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebb52d07a411f6900f9fef16186758fa5f9b44ed2c54944bf66220dd839b774d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7xphw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.440643 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pm48m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672da34f-1e37-4e2c-b467-b5ee40c4a31b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktn2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktn2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pm48m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc 
kubenswrapper[4874]: I0217 16:04:30.457473 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe15a814-ad3e-42e0-b991-3f30ed1ef47f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff60094f799686e9f12b1d1205a7ef68133f64cbb1c81000a87bc68ead3ab93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://b87e38c74da2293111d9768a94bd8916df751cf5817d8bc968ee6d50df071711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87e38c74da2293111d9768a94bd8916df751cf5817d8bc968ee6d50df071711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.469649 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.469721 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.469738 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 
16:04:30.469760 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.469779 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:30Z","lastTransitionTime":"2026-02-17T16:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.489977 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a4777a-2390-401b-86b0-87d298e9f883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e707b09de72a6c64c252760bab0973083f616379f68e889e9c2084a91c83eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31ebf1c54fc6c7de8c5d16a7c068d4f2ab0bde28ae20bef38f727972e140c05\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:04Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200766 6567 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200845 6567 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 16:04:04.200872 6567 
handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 16:04:04.200887 6567 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200914 6567 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 16:04:04.200931 6567 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 16:04:04.201178 6567 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200787 6567 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200898 6567 factory.go:656] Stopping watch factory\\\\nI0217 16:04:04.201400 6567 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.201753 6567 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:04:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82e707b09de72a6c64c252760bab0973083f616379f68e889e9c2084a91c83eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"message\\\":\\\"version/cluster-version-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.182:9099:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61d39e4d-21a9-4387-9a2b-fa4ad14792e2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 16:04:29.417284 6919 ovn.go:134] Ensuring zone local 
for Pod openshift-multus/multus-2vkxj in node crc\\\\nI0217 16:04:29.417291 6919 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-2vkxj after 0 failed attempt(s)\\\\nI0217 16:04:29.417305 6919 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-2vkxj\\\\nF0217 16:04:29.415954 6919 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host
-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-65qcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.509970 4874 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dr22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30e2d430-8c4b-4246-971e-6ba0ed8a0de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2f66644c63a0fa78e3c25736c83b6acb0d652342dc68bd9045ccc8f0b102e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kzt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a0a815a73e3091b5d95ae0c335f655f1bc92c59537ea72d499f28c84f175c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kzt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dr22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.534408 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382ecb629c2e95661e2315080b5313ce9572468316149e358ac1410f73774047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.557352 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcec56b-03b2-401b-8a73-6d62f42ba22c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba16a1032820642fb3fc8c0a00654d5736d760d87ab6c9015d5649da3302cdca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hswwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.572465 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.572510 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.572526 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.572552 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.572570 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:30Z","lastTransitionTime":"2026-02-17T16:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.576281 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7xphw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54846556-797a-4e8d-ab51-aef5343b1fc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebb52d07a411f6900f9fef16186758fa5f9b44ed2c54944bf66220dd839b774d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7xphw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.611819 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7649be38-9f50-4cce-9d16-e7100627eca5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ff3c28ab17a456eb6c403402a76fcc427fe799f05577f1c66a286392e67763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53112886a189833decee24627e6fd183022c19f454a873f86a38fecdd4505f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a37784fd80aacfdc6328736f02030af697430a34939d05550652921ce4dbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://154bdfc094853c9edfdd65f09ef6767ce1dd6441ee7322b3dba35d115460373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://738bfe704918e39da4af889b57236556ee1f301bfa8327664b5d25601641d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://74985
9c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.629892 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.644574 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9eb703281270deafd626e5313a045812c23d81fc8adcab91226e656611483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e922b26792a90616c81196984c75e5245d1458b951ad5b4cd10d2db99b526bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.662723 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d87243-c32f-4eb1-9049-24409fc6ea39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8c5dd3b54804a06909a56d1b152a702e8adbc38f14f0042b488c9529fc4eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56172a01d8a118c7b8ed0bfcef586738d47583
c8b6127b67e6f4aaedeba141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cccdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.675177 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.675225 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.675241 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:30 crc 
kubenswrapper[4874]: I0217 16:04:30.675264 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.675282 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:30Z","lastTransitionTime":"2026-02-17T16:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.681268 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.698392 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pm48m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"672da34f-1e37-4e2c-b467-b5ee40c4a31b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktn2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktn2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pm48m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc 
kubenswrapper[4874]: I0217 16:04:30.713533 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe15a814-ad3e-42e0-b991-3f30ed1ef47f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff60094f799686e9f12b1d1205a7ef68133f64cbb1c81000a87bc68ead3ab93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://b87e38c74da2293111d9768a94bd8916df751cf5817d8bc968ee6d50df071711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87e38c74da2293111d9768a94bd8916df751cf5817d8bc968ee6d50df071711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.744693 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a4777a-2390-401b-86b0-87d298e9f883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e707b09de72a6c64c252760bab0973083f616379f68e889e9c2084a91c83eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e31ebf1c54fc6c7de8c5d16a7c068d4f2ab0bde28ae20bef38f727972e140c05\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:04Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200766 6567 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200845 6567 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 16:04:04.200872 6567 
handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 16:04:04.200887 6567 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200914 6567 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 16:04:04.200931 6567 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 16:04:04.201178 6567 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200787 6567 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.200898 6567 factory.go:656] Stopping watch factory\\\\nI0217 16:04:04.201400 6567 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:04.201753 6567 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:04:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82e707b09de72a6c64c252760bab0973083f616379f68e889e9c2084a91c83eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"message\\\":\\\"version/cluster-version-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.182:9099:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61d39e4d-21a9-4387-9a2b-fa4ad14792e2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 16:04:29.417284 6919 ovn.go:134] Ensuring zone local 
for Pod openshift-multus/multus-2vkxj in node crc\\\\nI0217 16:04:29.417291 6919 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-2vkxj after 0 failed attempt(s)\\\\nI0217 16:04:29.417305 6919 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-2vkxj\\\\nF0217 16:04:29.415954 6919 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host
-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-65qcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.763707 4874 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dr22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30e2d430-8c4b-4246-971e-6ba0ed8a0de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2f66644c63a0fa78e3c25736c83b6acb0d652342dc68bd9045ccc8f0b102e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kzt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a0a815a73e3091b5d95ae0c335f655f1bc92c59537ea72d499f28c84f175c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kzt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dr22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.777827 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.777887 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 
16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.777907 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.777931 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.777948 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:30Z","lastTransitionTime":"2026-02-17T16:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.781065 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.797516 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-j77hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17e6a08f-68c0-4b0a-a396-9dddcc726d37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ecf223725ef2b8a8bcb1d02065b33fbdbcd1bb76b10476594aac228e32321c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbrrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-j77hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.817208 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2vkxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aedd049-0029-44f7-869f-4a3ccdce8413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00c27238b34dfe8b2f47d4716f4fe8df93d63cc4f915794920d20d1d3cd2c245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:18Z\\\",\\\"message\\\":\\\"2026-02-17T16:03:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8b66eb0c-fd4b-4053-aa41-3d6f6214c2ae\\\\n2026-02-17T16:03:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8b66eb0c-fd4b-4053-aa41-3d6f6214c2ae to /host/opt/cni/bin/\\\\n2026-02-17T16:03:33Z [verbose] multus-daemon started\\\\n2026-02-17T16:03:33Z [verbose] Readiness Indicator file check\\\\n2026-02-17T16:04:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2vkxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.837297 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd07de3-ac86-4e94-81f9-983586d43e3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6205655ec01ba9fb036b13027c9e66ae06974a7e66e08d81e67e68fefed03782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03cdd03d2129e6d94c8faed3b95a896c4dfe25c6b6f7e61d1027ae973c341f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb9f42ee9f217e826bc9595c94485be1259956fab34c45407b8e977d0e516eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7033e40c0b3cd86bccbe24550c8aae3cd925ab3a7577be6d71921494dbb7093\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.854867 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c66ff6e-e110-4154-8fdd-075e5c8c56a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2422e5203204daa5fcfbaa85ff563cacb69c9dd0a0dae355132cbe0fefe12a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e042242c2aeb7ad75c644052a8fd541837d1238dbd8919ffd9c9176d4d9deb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8c1ee2182853f3ba1a4fe6c34a8e6c301a004f857b7f95e0882679502f8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef2698fe37dccdc4449581e1b96c78b39e9d839d84cc7df3def8c0595ace1e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ef2698fe37dccdc4449581e1b96c78b39e9d839d84cc7df3def8c0595ace1e6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.874641 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56fa93-1e5d-4786-a935-dd3c1c945e91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:24Z\\\"
,\\\"message\\\":\\\"W0217 16:03:13.672722 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 16:03:13.673109 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771344193 cert, and key in /tmp/serving-cert-3777882028/serving-signer.crt, /tmp/serving-cert-3777882028/serving-signer.key\\\\nI0217 16:03:13.974653 1 observer_polling.go:159] Starting file observer\\\\nW0217 16:03:13.977062 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 16:03:13.977325 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:13.978495 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3777882028/tls.crt::/tmp/serving-cert-3777882028/tls.key\\\\\\\"\\\\nF0217 16:03:24.664050 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.880273 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.880368 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.880390 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.880416 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.880435 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:30Z","lastTransitionTime":"2026-02-17T16:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.895970 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1663fc7ac1e70a1b17c76b2a9f613520696b593126b2bb9cfdde5d68e431f511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.983802 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.983862 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.983881 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.983905 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:30 crc kubenswrapper[4874]: I0217 16:04:30.983923 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:30Z","lastTransitionTime":"2026-02-17T16:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.014653 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 15:18:33.837399149 +0000 UTC Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.086147 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.086191 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.086208 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.086226 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.086240 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:31Z","lastTransitionTime":"2026-02-17T16:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.096216 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-65qcw_10a4777a-2390-401b-86b0-87d298e9f883/ovnkube-controller/3.log" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.101195 4874 scope.go:117] "RemoveContainer" containerID="82e707b09de72a6c64c252760bab0973083f616379f68e889e9c2084a91c83eb" Feb 17 16:04:31 crc kubenswrapper[4874]: E0217 16:04:31.101468 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-65qcw_openshift-ovn-kubernetes(10a4777a-2390-401b-86b0-87d298e9f883)\"" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" podUID="10a4777a-2390-401b-86b0-87d298e9f883" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.121499 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2vkxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aedd049-0029-44f7-869f-4a3ccdce8413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00c27238b34dfe8b2f47d4716f4fe8df93d63cc4f915794920d20d1d3cd2c245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:18Z\\\",\\\"message\\\":\\\"2026-02-17T16:03:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8b66eb0c-fd4b-4053-aa41-3d6f6214c2ae\\\\n2026-02-17T16:03:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8b66eb0c-fd4b-4053-aa41-3d6f6214c2ae to /host/opt/cni/bin/\\\\n2026-02-17T16:03:33Z [verbose] multus-daemon started\\\\n2026-02-17T16:03:33Z [verbose] 
Readiness Indicator file check\\\\n2026-02-17T16:04:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2vkxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.136359 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.149289 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-j77hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17e6a08f-68c0-4b0a-a396-9dddcc726d37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ecf223725ef2b8a8bcb1d02065b33fbdbcd1bb76b10476594aac228e32321c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbrrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-j77hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.166745 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56fa93-1e5d-4786-a935-dd3c1c945e91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:24Z\\\",\\\"message\\\":\\\"W0217 16:03:13.672722 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 16:03:13.673109 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771344193 cert, and key in /tmp/serving-cert-3777882028/serving-signer.crt, /tmp/serving-cert-3777882028/serving-signer.key\\\\nI0217 16:03:13.974653 1 observer_polling.go:159] Starting file observer\\\\nW0217 16:03:13.977062 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 16:03:13.977325 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:13.978495 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3777882028/tls.crt::/tmp/serving-cert-3777882028/tls.key\\\\\\\"\\\\nF0217 16:03:24.664050 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.183820 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1663fc7ac1e70a1b17c76b2a9f613520696b593126b2bb9cfdde5d68e431f511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.188423 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.188470 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.188485 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.188503 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.188516 4874 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:31Z","lastTransitionTime":"2026-02-17T16:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.201308 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd07de3-ac86-4e94-81f9-983586d43e3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6205655ec01ba9fb036b13027c9e66ae06974a7e66e08d81e67e68fefed03782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03cdd03d2129e6d94c8faed3b95a896c4dfe25c6b6f7e61d1027ae973c341f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb9f42ee9f217e826bc9595c94485be1259956fab34c45407b8e977d0e516eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://f7033e40c0b3cd86bccbe24550c8aae3cd925ab3a7577be6d71921494dbb7093\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.216967 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c66ff6e-e110-4154-8fdd-075e5c8c56a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2422e5203204daa5fcfbaa85ff563cacb69c9dd0a0dae355132cbe0fefe12a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e042242c2aeb7ad75c644052a8fd541837d1238dbd8919ffd9c9176d4d9deb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8c1ee2182853f3ba1a4fe6c34a8e6c301a004f857b7f95e0882679502f8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef2698fe37dccdc4449581e1b96c78b39e9d839d84cc7df3def8c0595ace1e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://ef2698fe37dccdc4449581e1b96c78b39e9d839d84cc7df3def8c0595ace1e6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.237378 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9eb703281270deafd626e5313a045812c23d81fc8adcab91226e656611483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e922b26792a90616c81196984c75e5245d1458b951ad5b4cd10d2db99b526bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.253403 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d87243-c32f-4eb1-9049-24409fc6ea39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8c5dd3b54804a06909a56d1b152a702e8adbc38f14f0042b488c9529fc4eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56172a01d8a118c7b8ed0bfcef586738d47583
c8b6127b67e6f4aaedeba141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cccdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.272397 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.287063 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382ecb629c2e95661e2315080b5313ce9572468316149e358ac1410f73774047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:04:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.291206 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.291249 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.291262 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.291280 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.291294 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:31Z","lastTransitionTime":"2026-02-17T16:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.315132 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcec56b-03b2-401b-8a73-6d62f42ba22c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba16a1032820642fb3fc8c0a00654d5736d760d87ab6c9015d5649da3302cdca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hswwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.332050 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7xphw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54846556-797a-4e8d-ab51-aef5343b1fc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebb52d07a411f6900f9fef16186758fa5f9b44ed2c54944bf66220dd839b774d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7xphw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.359630 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7649be38-9f50-4cce-9d16-e7100627eca5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ff3c28ab17a456eb6c403402a76fcc427fe799f05577f1c66a286392e67763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53112886a189833decee24627e6fd183022c19f454a873f86a38fecdd4505f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a37784fd80aacfdc6328736f02030af697430a34939d05550652921ce4dbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://154bdfc094853c9edfdd65f09ef6767ce1dd6441ee7322b3dba35d115460373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://738bfe704918e39da4af889b57236556ee1f301bfa8327664b5d25601641d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.375290 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.389490 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pm48m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"672da34f-1e37-4e2c-b467-b5ee40c4a31b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktn2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktn2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pm48m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:31 crc 
kubenswrapper[4874]: I0217 16:04:31.393870 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.393911 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.393922 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.393937 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.393948 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:31Z","lastTransitionTime":"2026-02-17T16:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.405116 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dr22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30e2d430-8c4b-4246-971e-6ba0ed8a0de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2f66644c63a0fa78e3c25736c83b6acb0d652342dc68bd9045ccc8f0b102e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kzt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a0a815a73e3091b5d95ae0c335f655f1bc92c59537ea72d499f28c84f175c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kzt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dr22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.421240 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe15a814-ad3e-42e0-b991-3f30ed1ef47f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff60094f799686e9f12b1d1205a7ef68133f64cbb1c81000a87bc68ead3ab93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b87e38c74da2293111d9768a94bd8916df751cf5817d8bc968ee6d50df071711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c427
45f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87e38c74da2293111d9768a94bd8916df751cf5817d8bc968ee6d50df071711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.448196 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a4777a-2390-401b-86b0-87d298e9f883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e707b09de72a6c64c252760bab0973083f616379f68e889e9c2084a91c83eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82e707b09de72a6c64c252760bab0973083f616379f68e889e9c2084a91c83eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"message\\\":\\\"version/cluster-version-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.182:9099:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{61d39e4d-21a9-4387-9a2b-fa4ad14792e2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 16:04:29.417284 6919 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-2vkxj in node crc\\\\nI0217 16:04:29.417291 6919 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-2vkxj after 0 failed attempt(s)\\\\nI0217 16:04:29.417305 6919 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-2vkxj\\\\nF0217 16:04:29.415954 6919 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:04:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-65qcw_openshift-ovn-kubernetes(10a4777a-2390-401b-86b0-87d298e9f883)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657
906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-65qcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.456362 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.456439 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.456440 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:04:31 crc kubenswrapper[4874]: E0217 16:04:31.456530 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.456543 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:31 crc kubenswrapper[4874]: E0217 16:04:31.456662 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:04:31 crc kubenswrapper[4874]: E0217 16:04:31.456752 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:31 crc kubenswrapper[4874]: E0217 16:04:31.456829 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.496841 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.496895 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.496914 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.496938 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.496956 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:31Z","lastTransitionTime":"2026-02-17T16:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.600437 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.600483 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.600498 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.600518 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.600530 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:31Z","lastTransitionTime":"2026-02-17T16:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.703646 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.703712 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.703737 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.703766 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.703791 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:31Z","lastTransitionTime":"2026-02-17T16:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.807503 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.807583 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.807605 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.807628 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.807649 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:31Z","lastTransitionTime":"2026-02-17T16:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.911287 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.911390 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.911415 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.911444 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:31 crc kubenswrapper[4874]: I0217 16:04:31.911468 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:31Z","lastTransitionTime":"2026-02-17T16:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.013571 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.013600 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.013608 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.013620 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.013629 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:32Z","lastTransitionTime":"2026-02-17T16:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.014851 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 20:46:07.721750196 +0000 UTC Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.116516 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.116581 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.116598 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.116624 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.116640 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:32Z","lastTransitionTime":"2026-02-17T16:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.219155 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.219246 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.219267 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.219296 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.219317 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:32Z","lastTransitionTime":"2026-02-17T16:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.322471 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.322528 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.322545 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.322568 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.322585 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:32Z","lastTransitionTime":"2026-02-17T16:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.425178 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.425239 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.425257 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.425279 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.425297 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:32Z","lastTransitionTime":"2026-02-17T16:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.527394 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.527450 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.527461 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.527476 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.527486 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:32Z","lastTransitionTime":"2026-02-17T16:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.629829 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.629876 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.629889 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.629906 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.629919 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:32Z","lastTransitionTime":"2026-02-17T16:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.733099 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.733141 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.733154 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.733172 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.733186 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:32Z","lastTransitionTime":"2026-02-17T16:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.836213 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.836491 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.836502 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.836519 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.836528 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:32Z","lastTransitionTime":"2026-02-17T16:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.939037 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.939143 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.939163 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.939191 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:32 crc kubenswrapper[4874]: I0217 16:04:32.939210 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:32Z","lastTransitionTime":"2026-02-17T16:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.015876 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 23:43:20.589478472 +0000 UTC Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.042160 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.042195 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.042205 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.042219 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.042229 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:33Z","lastTransitionTime":"2026-02-17T16:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.145622 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.145681 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.145698 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.145725 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.145743 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:33Z","lastTransitionTime":"2026-02-17T16:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.248655 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.248719 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.248742 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.248770 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.248792 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:33Z","lastTransitionTime":"2026-02-17T16:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.351620 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.351683 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.351704 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.351732 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.351754 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:33Z","lastTransitionTime":"2026-02-17T16:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.454315 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.454372 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.454390 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.454416 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.454434 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:33Z","lastTransitionTime":"2026-02-17T16:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.456592 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.456679 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.456683 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:04:33 crc kubenswrapper[4874]: E0217 16:04:33.456753 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.456819 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:33 crc kubenswrapper[4874]: E0217 16:04:33.456879 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:04:33 crc kubenswrapper[4874]: E0217 16:04:33.457212 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:33 crc kubenswrapper[4874]: E0217 16:04:33.457284 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.556975 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.557011 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.557020 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.557034 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.557043 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:33Z","lastTransitionTime":"2026-02-17T16:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.660019 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.660135 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.660159 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.660185 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.660212 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:33Z","lastTransitionTime":"2026-02-17T16:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.762549 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.762610 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.762626 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.762651 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.762669 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:33Z","lastTransitionTime":"2026-02-17T16:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.865867 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.865925 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.865942 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.865971 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.865993 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:33Z","lastTransitionTime":"2026-02-17T16:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.969127 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.969231 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.969254 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.969281 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:33 crc kubenswrapper[4874]: I0217 16:04:33.969300 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:33Z","lastTransitionTime":"2026-02-17T16:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.016652 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 11:43:53.72831428 +0000 UTC Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.072688 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.072745 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.072762 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.072788 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.072808 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:34Z","lastTransitionTime":"2026-02-17T16:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.176111 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.176173 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.176192 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.176217 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.176235 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:34Z","lastTransitionTime":"2026-02-17T16:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.278874 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.278934 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.278952 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.278975 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.278995 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:34Z","lastTransitionTime":"2026-02-17T16:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.336604 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:04:34 crc kubenswrapper[4874]: E0217 16:04:34.336844 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 16:05:38.336805862 +0000 UTC m=+148.631194463 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.337407 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:34 crc kubenswrapper[4874]: E0217 16:04:34.337574 4874 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 16:04:34 crc kubenswrapper[4874]: E0217 16:04:34.337657 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 16:05:38.337640292 +0000 UTC m=+148.632028883 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.337885 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.338177 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:34 crc kubenswrapper[4874]: E0217 16:04:34.338042 4874 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 16:04:34 crc kubenswrapper[4874]: E0217 16:04:34.338614 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 16:05:38.338586075 +0000 UTC m=+148.632974666 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 16:04:34 crc kubenswrapper[4874]: E0217 16:04:34.338317 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 16:04:34 crc kubenswrapper[4874]: E0217 16:04:34.338958 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 16:04:34 crc kubenswrapper[4874]: E0217 16:04:34.339181 4874 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:04:34 crc kubenswrapper[4874]: E0217 16:04:34.339421 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 16:05:38.339400275 +0000 UTC m=+148.633788866 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.382427 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.382485 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.382500 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.382520 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.382533 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:34Z","lastTransitionTime":"2026-02-17T16:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.439385 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:34 crc kubenswrapper[4874]: E0217 16:04:34.440371 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 16:04:34 crc kubenswrapper[4874]: E0217 16:04:34.440684 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 16:04:34 crc kubenswrapper[4874]: E0217 16:04:34.441004 4874 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:04:34 crc kubenswrapper[4874]: E0217 16:04:34.441263 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 16:05:38.441230519 +0000 UTC m=+148.735619110 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.485936 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.486363 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.486511 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.486677 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.487125 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:34Z","lastTransitionTime":"2026-02-17T16:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.589542 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.589931 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.590151 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.590308 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.590444 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:34Z","lastTransitionTime":"2026-02-17T16:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.693383 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.693783 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.693934 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.694136 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.694309 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:34Z","lastTransitionTime":"2026-02-17T16:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.797220 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.797266 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.797282 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.797304 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.797322 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:34Z","lastTransitionTime":"2026-02-17T16:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.899888 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.899961 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.899985 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.900013 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:34 crc kubenswrapper[4874]: I0217 16:04:34.900033 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:34Z","lastTransitionTime":"2026-02-17T16:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.003051 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.003300 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.003321 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.003344 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.003361 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:35Z","lastTransitionTime":"2026-02-17T16:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.017582 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 00:40:52.860831096 +0000 UTC Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.106502 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.106560 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.106573 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.106591 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.106607 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:35Z","lastTransitionTime":"2026-02-17T16:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.209064 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.209136 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.209153 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.209175 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.209192 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:35Z","lastTransitionTime":"2026-02-17T16:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.311599 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.311672 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.311695 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.311719 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.311738 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:35Z","lastTransitionTime":"2026-02-17T16:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.415039 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.415133 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.415158 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.415182 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.415199 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:35Z","lastTransitionTime":"2026-02-17T16:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.456351 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.456435 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.456470 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.456550 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:35 crc kubenswrapper[4874]: E0217 16:04:35.456547 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:35 crc kubenswrapper[4874]: E0217 16:04:35.456667 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:04:35 crc kubenswrapper[4874]: E0217 16:04:35.456718 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:35 crc kubenswrapper[4874]: E0217 16:04:35.456778 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.517538 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.517610 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.517623 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.517641 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.517655 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:35Z","lastTransitionTime":"2026-02-17T16:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.620332 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.620395 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.620415 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.620440 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.620457 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:35Z","lastTransitionTime":"2026-02-17T16:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.723600 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.723660 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.723677 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.723702 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.723722 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:35Z","lastTransitionTime":"2026-02-17T16:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.826258 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.826317 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.826333 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.826357 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.826374 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:35Z","lastTransitionTime":"2026-02-17T16:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.928617 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.928680 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.928703 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.928732 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:35 crc kubenswrapper[4874]: I0217 16:04:35.928755 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:35Z","lastTransitionTime":"2026-02-17T16:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.017856 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 06:34:47.880738958 +0000 UTC Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.030739 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.030806 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.030827 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.030856 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.030876 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:36Z","lastTransitionTime":"2026-02-17T16:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.133800 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.133851 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.133864 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.133883 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.133895 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:36Z","lastTransitionTime":"2026-02-17T16:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.237185 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.237252 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.237263 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.237280 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.237291 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:36Z","lastTransitionTime":"2026-02-17T16:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.340315 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.340377 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.340395 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.340418 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.340434 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:36Z","lastTransitionTime":"2026-02-17T16:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.443545 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.443618 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.443632 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.443661 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.443677 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:36Z","lastTransitionTime":"2026-02-17T16:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.546868 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.546947 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.546968 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.547002 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.547026 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:36Z","lastTransitionTime":"2026-02-17T16:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.650191 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.650243 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.650260 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.650308 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.650328 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:36Z","lastTransitionTime":"2026-02-17T16:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.753274 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.753342 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.753361 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.753394 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.753432 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:36Z","lastTransitionTime":"2026-02-17T16:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.857013 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.857115 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.857135 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.857163 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.857183 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:36Z","lastTransitionTime":"2026-02-17T16:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.960574 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.960666 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.960687 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.960714 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:36 crc kubenswrapper[4874]: I0217 16:04:36.960731 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:36Z","lastTransitionTime":"2026-02-17T16:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.018419 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 12:32:14.390494929 +0000 UTC Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.063609 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.063707 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.063730 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.063793 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.063814 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:37Z","lastTransitionTime":"2026-02-17T16:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.167174 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.167235 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.167287 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.167349 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.167369 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:37Z","lastTransitionTime":"2026-02-17T16:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.270832 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.270892 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.270910 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.270935 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.270952 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:37Z","lastTransitionTime":"2026-02-17T16:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.373165 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.373221 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.373240 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.373264 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.373281 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:37Z","lastTransitionTime":"2026-02-17T16:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.456951 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.457054 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.457181 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.456985 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:37 crc kubenswrapper[4874]: E0217 16:04:37.457256 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:37 crc kubenswrapper[4874]: E0217 16:04:37.457480 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:37 crc kubenswrapper[4874]: E0217 16:04:37.457611 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:04:37 crc kubenswrapper[4874]: E0217 16:04:37.457699 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.476291 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.476368 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.476392 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.476424 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.476448 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:37Z","lastTransitionTime":"2026-02-17T16:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.579208 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.579267 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.579284 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.579308 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.579325 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:37Z","lastTransitionTime":"2026-02-17T16:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.682796 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.682864 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.682888 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.682923 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.682945 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:37Z","lastTransitionTime":"2026-02-17T16:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.785923 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.785979 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.785995 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.786020 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.786038 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:37Z","lastTransitionTime":"2026-02-17T16:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.889177 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.889243 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.889260 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.889285 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.889305 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:37Z","lastTransitionTime":"2026-02-17T16:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.993280 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.993353 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.993373 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.993397 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:37 crc kubenswrapper[4874]: I0217 16:04:37.993415 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:37Z","lastTransitionTime":"2026-02-17T16:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.018776 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 13:38:44.438225344 +0000 UTC Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.096804 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.096868 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.096891 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.096919 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.096938 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:38Z","lastTransitionTime":"2026-02-17T16:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.200259 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.200349 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.200372 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.200400 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.200422 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:38Z","lastTransitionTime":"2026-02-17T16:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.303120 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.303185 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.303204 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.303231 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.303250 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:38Z","lastTransitionTime":"2026-02-17T16:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.401063 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.401143 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.401161 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.401188 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.401215 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:38Z","lastTransitionTime":"2026-02-17T16:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:38 crc kubenswrapper[4874]: E0217 16:04:38.422109 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6be8f3a4-e6e3-4cf0-93a0-9444be233e11\\\",\\\"systemUUID\\\":\\\"496eb863-febf-403f-bc40-ce30c0c4d225\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.427098 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.427157 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.427171 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.427189 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.427202 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:38Z","lastTransitionTime":"2026-02-17T16:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.449344 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.449410 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.449432 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.449460 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.449484 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:38Z","lastTransitionTime":"2026-02-17T16:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:38 crc kubenswrapper[4874]: E0217 16:04:38.471865 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6be8f3a4-e6e3-4cf0-93a0-9444be233e11\\\",\\\"systemUUID\\\":\\\"496eb863-febf-403f-bc40-ce30c0c4d225\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.477913 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.477974 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.477999 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.478027 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.478049 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:38Z","lastTransitionTime":"2026-02-17T16:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:38 crc kubenswrapper[4874]: E0217 16:04:38.497856 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6be8f3a4-e6e3-4cf0-93a0-9444be233e11\\\",\\\"systemUUID\\\":\\\"496eb863-febf-403f-bc40-ce30c0c4d225\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.503512 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.503570 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.503587 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.503612 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.503635 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:38Z","lastTransitionTime":"2026-02-17T16:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:38 crc kubenswrapper[4874]: E0217 16:04:38.525703 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6be8f3a4-e6e3-4cf0-93a0-9444be233e11\\\",\\\"systemUUID\\\":\\\"496eb863-febf-403f-bc40-ce30c0c4d225\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:38 crc kubenswrapper[4874]: E0217 16:04:38.525944 4874 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.527992 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.528103 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.528158 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.528189 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.528213 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:38Z","lastTransitionTime":"2026-02-17T16:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.631299 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.631377 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.631400 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.631430 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.631453 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:38Z","lastTransitionTime":"2026-02-17T16:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.734620 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.734669 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.734685 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.734707 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.734727 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:38Z","lastTransitionTime":"2026-02-17T16:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.837703 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.837763 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.837780 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.837805 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.837821 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:38Z","lastTransitionTime":"2026-02-17T16:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.940584 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.940656 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.940673 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.940698 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:38 crc kubenswrapper[4874]: I0217 16:04:38.940717 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:38Z","lastTransitionTime":"2026-02-17T16:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.018984 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 23:08:18.243289053 +0000 UTC Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.043199 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.043267 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.043292 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.043320 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.043343 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:39Z","lastTransitionTime":"2026-02-17T16:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.145631 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.145704 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.145725 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.145754 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.145773 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:39Z","lastTransitionTime":"2026-02-17T16:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.248355 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.248457 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.248479 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.248509 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.248579 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:39Z","lastTransitionTime":"2026-02-17T16:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.351462 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.351530 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.351548 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.351570 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.351588 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:39Z","lastTransitionTime":"2026-02-17T16:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.454266 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.454330 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.454348 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.454372 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.454389 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:39Z","lastTransitionTime":"2026-02-17T16:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.456793 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.456856 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.456983 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.456996 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:39 crc kubenswrapper[4874]: E0217 16:04:39.457132 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:39 crc kubenswrapper[4874]: E0217 16:04:39.457335 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:04:39 crc kubenswrapper[4874]: E0217 16:04:39.457724 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:39 crc kubenswrapper[4874]: E0217 16:04:39.457877 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.557008 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.557064 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.557080 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.557137 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.557155 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:39Z","lastTransitionTime":"2026-02-17T16:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.659960 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.660024 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.660040 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.660063 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.660086 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:39Z","lastTransitionTime":"2026-02-17T16:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.763656 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.763750 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.763770 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.763800 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.763819 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:39Z","lastTransitionTime":"2026-02-17T16:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.866644 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.866706 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.866724 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.866749 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.866767 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:39Z","lastTransitionTime":"2026-02-17T16:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.970323 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.970395 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.970412 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.970438 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:39 crc kubenswrapper[4874]: I0217 16:04:39.970456 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:39Z","lastTransitionTime":"2026-02-17T16:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.019281 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 05:23:28.628952934 +0000 UTC Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.073276 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.073336 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.073354 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.073377 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.073393 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:40Z","lastTransitionTime":"2026-02-17T16:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.176330 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.176387 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.176404 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.176429 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.176446 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:40Z","lastTransitionTime":"2026-02-17T16:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.279051 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.279146 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.279175 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.279199 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.279216 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:40Z","lastTransitionTime":"2026-02-17T16:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.382179 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.382246 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.382264 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.382294 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.382314 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:40Z","lastTransitionTime":"2026-02-17T16:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.477793 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd07de3-ac86-4e94-81f9-983586d43e3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6205655ec01ba9fb036b13027c9e66ae06974a7e66e08d81e67e68fefed03782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03cdd03d212
9e6d94c8faed3b95a896c4dfe25c6b6f7e61d1027ae973c341f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afb9f42ee9f217e826bc9595c94485be1259956fab34c45407b8e977d0e516eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7033e40c0b3cd86bccbe24550c8aae3cd925ab3a7577be6d71921494dbb7093\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.485598 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.485653 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.485675 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.485764 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.485789 4874 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:40Z","lastTransitionTime":"2026-02-17T16:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.497760 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c66ff6e-e110-4154-8fdd-075e5c8c56a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2422e5203204daa5fcfbaa85ff563cacb69c9dd0a0dae355132cbe0fefe12a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e042242c2aeb7ad75c644052a8fd541837d1238dbd8919ffd9c9176d4d9deb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8c1ee2182853f3ba1a4fe6c34a8e6c301a004f857b7f95e0882679502f8b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef2698fe37dccdc4449581e1b96c78b39e9d839d84cc7df3def8c0595ace1e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef2698fe37dccdc4449581e1b96c78b39e9d839d84cc7df3def8c0595ace1e6c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.522256 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b56fa93-1e5d-4786-a935-dd3c1c945e91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:24Z\\\"
,\\\"message\\\":\\\"W0217 16:03:13.672722 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 16:03:13.673109 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771344193 cert, and key in /tmp/serving-cert-3777882028/serving-signer.crt, /tmp/serving-cert-3777882028/serving-signer.key\\\\nI0217 16:03:13.974653 1 observer_polling.go:159] Starting file observer\\\\nW0217 16:03:13.977062 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 16:03:13.977325 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:13.978495 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3777882028/tls.crt::/tmp/serving-cert-3777882028/tls.key\\\\\\\"\\\\nF0217 16:03:24.664050 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.545167 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1663fc7ac1e70a1b17c76b2a9f613520696b593126b2bb9cfdde5d68e431f511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.560778 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7xphw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54846556-797a-4e8d-ab51-aef5343b1fc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebb52d07a411f6900f9fef16186758fa5f9b44ed2c54944bf66220dd839b774d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtfv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7xphw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.588009 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.588061 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.588082 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.588129 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.588145 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:40Z","lastTransitionTime":"2026-02-17T16:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.598041 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7649be38-9f50-4cce-9d16-e7100627eca5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ff3c28ab17a456eb6c403402a76fcc427fe799f05577f1c66a286392e67763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53112886a189833decee24627e6fd183022c19f454a873f86a38fecdd4505f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a37784fd80aacfdc6328736f02030af697430a34939d05550652921ce4dbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://154bdfc094853c9edfdd65f09ef6767ce1dd6441ee7322b3dba35d115460373b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://738bfe704918e39da4af889b57236556ee1f301bfa8327664b5d25601641d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://475a4e5dbce0485af01438974cae4db985e1b8a4ff4fa4f40ca98ba194fe36b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d2b0db5dafcf0400262c88857b98fbeb5f045c8806f5fadb21be7e4e47c21f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://749859c801a8c55616bd0f2a9adabddf47d82829bac79f98218c6cefbac59c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.622473 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.642855 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9eb703281270deafd626e5313a045812c23d81fc8adcab91226e656611483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e922b26792a90616c81196984c75e5245d1458b951ad5b4cd10d2db99b526bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.660555 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d87243-c32f-4eb1-9049-24409fc6ea39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c8c5dd3b54804a06909a56d1b152a702e8adbc38f14f0042b488c9529fc4eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a56172a01d8a118c7b8ed0bfcef586738d47583
c8b6127b67e6f4aaedeba141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bclhx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cccdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.680644 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.691448 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.691505 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.691526 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.691551 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.691584 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:40Z","lastTransitionTime":"2026-02-17T16:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.698397 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382ecb629c2e95661e2315080b5313ce9572468316149e358ac1410f73774047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.722083 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hswwv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bcec56b-03b2-401b-8a73-6d62f42ba22c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba16a1032820642fb3fc8c0a00654d5736d760d87ab6c9015d5649da3302cdca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3fd14ee04889f7c566d0be3ce6db2a52135af6967129e871beae3b0364dd63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac03ef0f88ddb006c984c3ddfeb346599ca5a9b60976e5b493d1b7f59d5055a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5346dedb6c4bf48a07a5032304c6e2f40978b277d97bef79476c450a10a14d61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-
17T16:03:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2495891b2d6a88e01c7535c42df409f45c708d1a54f2ece2cb4e38cd660ce090\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://442fe26387ca88d52113188502f4837c5627e301b1cf984ad7b81bbc52f101b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc850543cd36db92b1c04e150cd468911753f4bdecb5ac06581d5d7fc205717b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnxcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hswwv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.738383 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pm48m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672da34f-1e37-4e2c-b467-b5ee40c4a31b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktn2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktn2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pm48m\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.754919 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe15a814-ad3e-42e0-b991-3f30ed1ef47f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff60094f799686e9f12b1d1205a7ef68133f64cbb1c81000a87bc68ead3ab93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b87e38c74da2293111d9768a94bd8916df751cf5817d8bc968ee6d50df071711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b87e38c74da2293111d9768a94bd8916df751cf5817d8bc968ee6d50df071711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.783476 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a4777a-2390-401b-86b0-87d298e9f883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e707b09de72a6c64c252760bab0973083f616379f68e889e9c2084a91c83eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82e707b09de72a6c64c252760bab0973083f616379f68e889e9c2084a91c83eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"message\\\":\\\"version/cluster-version-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.182:9099:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{61d39e4d-21a9-4387-9a2b-fa4ad14792e2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 16:04:29.417284 6919 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-2vkxj in node crc\\\\nI0217 16:04:29.417291 6919 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-2vkxj after 0 failed attempt(s)\\\\nI0217 16:04:29.417305 6919 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-2vkxj\\\\nF0217 16:04:29.415954 6919 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:04:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-65qcw_openshift-ovn-kubernetes(10a4777a-2390-401b-86b0-87d298e9f883)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ffcf98e7fd9311657
906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xrf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-65qcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.795432 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.795479 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.795496 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.795519 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.796992 4874 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:40Z","lastTransitionTime":"2026-02-17T16:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.798777 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dr22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30e2d430-8c4b-4246-971e-6ba0ed8a0de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b2f66644c63a0fa78e3c25736c83b6acb0d652342dc68bd9045ccc8f0b102e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kzt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a0a815a73e3091b5d95ae0c335f655f1bc92c59537ea72d499f28c84f175c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kzt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dr22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.816896 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.834841 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-j77hc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17e6a08f-68c0-4b0a-a396-9dddcc726d37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ecf223725ef2b8a8bcb1d02065b33fbdbcd1bb76b10476594aac228e32321c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lbrrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-j77hc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.854130 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2vkxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aedd049-0029-44f7-869f-4a3ccdce8413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00c27238b34dfe8b2f47d4716f4fe8df93d63cc4f915794920d20d1d3cd2c245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:18Z\\\",\\\"message\\\":\\\"2026-02-17T16:03:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8b66eb0c-fd4b-4053-aa41-3d6f6214c2ae\\\\n2026-02-17T16:03:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8b66eb0c-fd4b-4053-aa41-3d6f6214c2ae to /host/opt/cni/bin/\\\\n2026-02-17T16:03:33Z [verbose] multus-daemon started\\\\n2026-02-17T16:03:33Z [verbose] Readiness Indicator file check\\\\n2026-02-17T16:04:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2vkxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.900430 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.900462 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.900471 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 
16:04:40.900484 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:40 crc kubenswrapper[4874]: I0217 16:04:40.900494 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:40Z","lastTransitionTime":"2026-02-17T16:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.002601 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.002655 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.002676 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.002700 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.002717 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:41Z","lastTransitionTime":"2026-02-17T16:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.019661 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 12:43:37.514740082 +0000 UTC Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.112027 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.112119 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.112145 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.112173 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.112192 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:41Z","lastTransitionTime":"2026-02-17T16:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.214666 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.214716 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.214728 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.214745 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.214756 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:41Z","lastTransitionTime":"2026-02-17T16:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.317180 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.317240 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.317259 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.317283 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.317301 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:41Z","lastTransitionTime":"2026-02-17T16:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.420131 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.420205 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.420222 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.420246 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.420265 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:41Z","lastTransitionTime":"2026-02-17T16:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.456581 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.456683 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:04:41 crc kubenswrapper[4874]: E0217 16:04:41.456789 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:41 crc kubenswrapper[4874]: E0217 16:04:41.456901 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.457004 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:41 crc kubenswrapper[4874]: E0217 16:04:41.457134 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.457205 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:41 crc kubenswrapper[4874]: E0217 16:04:41.457309 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.523775 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.523863 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.523883 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.523913 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.523933 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:41Z","lastTransitionTime":"2026-02-17T16:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.627151 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.627219 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.627237 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.627265 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.627286 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:41Z","lastTransitionTime":"2026-02-17T16:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.730742 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.730793 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.730811 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.730834 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.730854 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:41Z","lastTransitionTime":"2026-02-17T16:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.833736 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.834138 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.834401 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.834626 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.834826 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:41Z","lastTransitionTime":"2026-02-17T16:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.937907 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.937968 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.937985 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.938009 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:41 crc kubenswrapper[4874]: I0217 16:04:41.938027 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:41Z","lastTransitionTime":"2026-02-17T16:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.019901 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 09:56:58.988836586 +0000 UTC Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.040894 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.040943 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.040957 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.040975 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.040988 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:42Z","lastTransitionTime":"2026-02-17T16:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.142908 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.142979 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.142996 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.143019 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.143037 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:42Z","lastTransitionTime":"2026-02-17T16:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.246165 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.246246 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.246264 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.246291 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.246309 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:42Z","lastTransitionTime":"2026-02-17T16:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.349674 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.349750 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.349773 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.349803 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.349827 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:42Z","lastTransitionTime":"2026-02-17T16:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.453744 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.453801 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.453815 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.453836 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.453850 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:42Z","lastTransitionTime":"2026-02-17T16:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.462181 4874 scope.go:117] "RemoveContainer" containerID="82e707b09de72a6c64c252760bab0973083f616379f68e889e9c2084a91c83eb" Feb 17 16:04:42 crc kubenswrapper[4874]: E0217 16:04:42.462431 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-65qcw_openshift-ovn-kubernetes(10a4777a-2390-401b-86b0-87d298e9f883)\"" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" podUID="10a4777a-2390-401b-86b0-87d298e9f883" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.556892 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.556937 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.556953 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.556975 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.556992 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:42Z","lastTransitionTime":"2026-02-17T16:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.659862 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.659903 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.659914 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.659931 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.659945 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:42Z","lastTransitionTime":"2026-02-17T16:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.762695 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.762756 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.762774 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.762797 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.762814 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:42Z","lastTransitionTime":"2026-02-17T16:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.865049 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.865108 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.865119 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.865137 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.865149 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:42Z","lastTransitionTime":"2026-02-17T16:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.969819 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.969886 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.969909 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.969939 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:42 crc kubenswrapper[4874]: I0217 16:04:42.969960 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:42Z","lastTransitionTime":"2026-02-17T16:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.020934 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 04:57:38.657942181 +0000 UTC Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.072719 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.072785 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.072807 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.072838 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.072860 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:43Z","lastTransitionTime":"2026-02-17T16:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.175602 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.175673 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.175696 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.175727 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.175749 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:43Z","lastTransitionTime":"2026-02-17T16:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.277990 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.278033 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.278046 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.278060 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.278071 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:43Z","lastTransitionTime":"2026-02-17T16:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.380643 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.380704 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.380721 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.380745 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.380762 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:43Z","lastTransitionTime":"2026-02-17T16:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.456786 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.456924 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:04:43 crc kubenswrapper[4874]: E0217 16:04:43.456992 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.457020 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.457065 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:43 crc kubenswrapper[4874]: E0217 16:04:43.457198 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:04:43 crc kubenswrapper[4874]: E0217 16:04:43.457284 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:43 crc kubenswrapper[4874]: E0217 16:04:43.457388 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.483549 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.483987 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.484010 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.484039 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.484058 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:43Z","lastTransitionTime":"2026-02-17T16:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.587539 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.587589 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.587600 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.587620 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.587631 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:43Z","lastTransitionTime":"2026-02-17T16:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.690841 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.690913 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.690930 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.690956 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.690974 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:43Z","lastTransitionTime":"2026-02-17T16:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.794274 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.794340 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.794362 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.794387 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.794405 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:43Z","lastTransitionTime":"2026-02-17T16:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.902774 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.902842 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.902858 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.902881 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:43 crc kubenswrapper[4874]: I0217 16:04:43.902898 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:43Z","lastTransitionTime":"2026-02-17T16:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.006633 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.006698 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.006715 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.007139 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.007194 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:44Z","lastTransitionTime":"2026-02-17T16:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.021137 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 06:46:04.179589636 +0000 UTC Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.110946 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.111040 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.111062 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.111132 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.111187 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:44Z","lastTransitionTime":"2026-02-17T16:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.213859 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.213917 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.213934 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.213959 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.213978 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:44Z","lastTransitionTime":"2026-02-17T16:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.317643 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.317691 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.317704 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.317726 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.317739 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:44Z","lastTransitionTime":"2026-02-17T16:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.420612 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.420692 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.420706 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.420728 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.420741 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:44Z","lastTransitionTime":"2026-02-17T16:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.524282 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.524354 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.524377 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.524403 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.524421 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:44Z","lastTransitionTime":"2026-02-17T16:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.627403 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.627472 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.627495 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.627523 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.627546 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:44Z","lastTransitionTime":"2026-02-17T16:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.730664 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.730733 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.730757 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.730777 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.730789 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:44Z","lastTransitionTime":"2026-02-17T16:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.834137 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.834202 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.834219 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.834243 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.834260 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:44Z","lastTransitionTime":"2026-02-17T16:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.937535 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.937601 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.937624 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.937649 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:44 crc kubenswrapper[4874]: I0217 16:04:44.937665 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:44Z","lastTransitionTime":"2026-02-17T16:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.021548 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 00:43:35.034404257 +0000 UTC Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.040167 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.040223 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.040240 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.040266 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.040284 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:45Z","lastTransitionTime":"2026-02-17T16:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.143870 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.143943 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.143969 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.143998 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.144018 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:45Z","lastTransitionTime":"2026-02-17T16:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.247578 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.247656 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.247677 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.247706 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.247727 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:45Z","lastTransitionTime":"2026-02-17T16:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.351266 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.351327 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.351347 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.351373 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.351392 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:45Z","lastTransitionTime":"2026-02-17T16:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.454599 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.454668 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.454685 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.454711 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.454728 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:45Z","lastTransitionTime":"2026-02-17T16:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.456892 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.456983 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.457010 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.456921 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:45 crc kubenswrapper[4874]: E0217 16:04:45.457118 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:45 crc kubenswrapper[4874]: E0217 16:04:45.457305 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:45 crc kubenswrapper[4874]: E0217 16:04:45.457500 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:04:45 crc kubenswrapper[4874]: E0217 16:04:45.457607 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.558264 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.558340 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.558363 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.558397 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.558420 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:45Z","lastTransitionTime":"2026-02-17T16:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.661581 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.661650 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.661667 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.661691 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.661712 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:45Z","lastTransitionTime":"2026-02-17T16:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.764945 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.765203 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.765273 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.765300 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.765321 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:45Z","lastTransitionTime":"2026-02-17T16:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.867607 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.867670 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.867688 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.867717 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.867736 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:45Z","lastTransitionTime":"2026-02-17T16:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.970133 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.970196 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.970218 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.970246 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:45 crc kubenswrapper[4874]: I0217 16:04:45.970267 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:45Z","lastTransitionTime":"2026-02-17T16:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.021780 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 02:10:27.784119606 +0000 UTC Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.072424 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.072500 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.072522 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.072548 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.072565 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:46Z","lastTransitionTime":"2026-02-17T16:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.175433 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.175498 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.175522 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.175554 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.175579 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:46Z","lastTransitionTime":"2026-02-17T16:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.278395 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.278471 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.278489 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.278514 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.278532 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:46Z","lastTransitionTime":"2026-02-17T16:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.380825 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.380906 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.380924 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.380947 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.380968 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:46Z","lastTransitionTime":"2026-02-17T16:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.489400 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.489474 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.489494 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.489523 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.489545 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:46Z","lastTransitionTime":"2026-02-17T16:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.591847 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.591912 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.591929 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.591952 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.591971 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:46Z","lastTransitionTime":"2026-02-17T16:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.694791 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.694866 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.694879 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.695188 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.695225 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:46Z","lastTransitionTime":"2026-02-17T16:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.797855 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.797925 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.797935 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.797947 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.797956 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:46Z","lastTransitionTime":"2026-02-17T16:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.900413 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.900482 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.900492 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.900528 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:46 crc kubenswrapper[4874]: I0217 16:04:46.900540 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:46Z","lastTransitionTime":"2026-02-17T16:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.002893 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.002954 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.002972 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.002997 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.003021 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:47Z","lastTransitionTime":"2026-02-17T16:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.022450 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 15:34:56.933176411 +0000 UTC Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.105675 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.105729 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.105746 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.105773 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.105796 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:47Z","lastTransitionTime":"2026-02-17T16:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.209032 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.209124 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.209143 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.209170 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.209245 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:47Z","lastTransitionTime":"2026-02-17T16:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.312387 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.312452 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.312474 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.312502 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.312523 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:47Z","lastTransitionTime":"2026-02-17T16:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.415346 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.415484 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.415498 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.415516 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.415527 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:47Z","lastTransitionTime":"2026-02-17T16:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.456694 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.456763 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.456800 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.456694 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:47 crc kubenswrapper[4874]: E0217 16:04:47.456908 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:04:47 crc kubenswrapper[4874]: E0217 16:04:47.457007 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:47 crc kubenswrapper[4874]: E0217 16:04:47.457176 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:47 crc kubenswrapper[4874]: E0217 16:04:47.457271 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.519006 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.519064 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.519108 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.519132 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.519150 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:47Z","lastTransitionTime":"2026-02-17T16:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.622126 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.622192 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.622212 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.622239 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.622259 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:47Z","lastTransitionTime":"2026-02-17T16:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.725853 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.725926 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.725942 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.725967 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.725987 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:47Z","lastTransitionTime":"2026-02-17T16:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.828456 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.828511 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.828529 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.828551 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.828567 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:47Z","lastTransitionTime":"2026-02-17T16:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.931133 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.931169 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.931177 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.931190 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:47 crc kubenswrapper[4874]: I0217 16:04:47.931201 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:47Z","lastTransitionTime":"2026-02-17T16:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.022830 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 15:38:36.238375588 +0000 UTC Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.034257 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.034312 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.034331 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.034354 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.034370 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:48Z","lastTransitionTime":"2026-02-17T16:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.137955 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.138013 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.138030 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.138053 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.138073 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:48Z","lastTransitionTime":"2026-02-17T16:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.240583 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.241203 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.241343 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.241490 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.241623 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:48Z","lastTransitionTime":"2026-02-17T16:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.344984 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.345052 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.345113 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.345144 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.345166 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:48Z","lastTransitionTime":"2026-02-17T16:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.448701 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.448761 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.448777 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.448802 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.448819 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:48Z","lastTransitionTime":"2026-02-17T16:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.551765 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.551810 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.551823 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.551840 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.551856 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:48Z","lastTransitionTime":"2026-02-17T16:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.654953 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.655316 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.655522 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.655691 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.655809 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:48Z","lastTransitionTime":"2026-02-17T16:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.693402 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.693494 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.693519 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.693553 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.693584 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:48Z","lastTransitionTime":"2026-02-17T16:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.766912 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-47c5s"] Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.767811 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-47c5s" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.769582 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.770931 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.773226 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.773410 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.794285 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=75.794248513 podStartE2EDuration="1m15.794248513s" podCreationTimestamp="2026-02-17 16:03:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:04:48.793473824 +0000 UTC m=+99.087862415" watchObservedRunningTime="2026-02-17 16:04:48.794248513 +0000 UTC m=+99.088637114" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.818750 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=48.818716867 podStartE2EDuration="48.818716867s" podCreationTimestamp="2026-02-17 16:04:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:04:48.813289056 +0000 UTC m=+99.107677627" watchObservedRunningTime="2026-02-17 16:04:48.818716867 
+0000 UTC m=+99.113105468" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.837891 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/672da34f-1e37-4e2c-b467-b5ee40c4a31b-metrics-certs\") pod \"network-metrics-daemon-pm48m\" (UID: \"672da34f-1e37-4e2c-b467-b5ee40c4a31b\") " pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:04:48 crc kubenswrapper[4874]: E0217 16:04:48.838142 4874 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 16:04:48 crc kubenswrapper[4874]: E0217 16:04:48.838273 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/672da34f-1e37-4e2c-b467-b5ee40c4a31b-metrics-certs podName:672da34f-1e37-4e2c-b467-b5ee40c4a31b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:52.838211441 +0000 UTC m=+163.132600002 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/672da34f-1e37-4e2c-b467-b5ee40c4a31b-metrics-certs") pod "network-metrics-daemon-pm48m" (UID: "672da34f-1e37-4e2c-b467-b5ee40c4a31b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.840892 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=78.840854025 podStartE2EDuration="1m18.840854025s" podCreationTimestamp="2026-02-17 16:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:04:48.837899964 +0000 UTC m=+99.132288565" watchObservedRunningTime="2026-02-17 16:04:48.840854025 +0000 UTC m=+99.135242586" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.936956 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-hswwv" podStartSLOduration=78.93693307 podStartE2EDuration="1m18.93693307s" podCreationTimestamp="2026-02-17 16:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:04:48.921265969 +0000 UTC m=+99.215654590" watchObservedRunningTime="2026-02-17 16:04:48.93693307 +0000 UTC m=+99.231321651" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.937237 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-7xphw" podStartSLOduration=78.937232097 podStartE2EDuration="1m18.937232097s" podCreationTimestamp="2026-02-17 16:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:04:48.936384257 +0000 UTC m=+99.230772818" watchObservedRunningTime="2026-02-17 16:04:48.937232097 +0000 
UTC m=+99.231620658" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.938911 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/34e8116b-bd01-4eb4-acb4-ab3aa22d57d7-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-47c5s\" (UID: \"34e8116b-bd01-4eb4-acb4-ab3aa22d57d7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-47c5s" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.938951 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/34e8116b-bd01-4eb4-acb4-ab3aa22d57d7-service-ca\") pod \"cluster-version-operator-5c965bbfc6-47c5s\" (UID: \"34e8116b-bd01-4eb4-acb4-ab3aa22d57d7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-47c5s" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.938988 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34e8116b-bd01-4eb4-acb4-ab3aa22d57d7-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-47c5s\" (UID: \"34e8116b-bd01-4eb4-acb4-ab3aa22d57d7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-47c5s" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.939037 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/34e8116b-bd01-4eb4-acb4-ab3aa22d57d7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-47c5s\" (UID: \"34e8116b-bd01-4eb4-acb4-ab3aa22d57d7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-47c5s" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.939103 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/34e8116b-bd01-4eb4-acb4-ab3aa22d57d7-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-47c5s\" (UID: \"34e8116b-bd01-4eb4-acb4-ab3aa22d57d7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-47c5s" Feb 17 16:04:48 crc kubenswrapper[4874]: I0217 16:04:48.968665 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=78.96863166 podStartE2EDuration="1m18.96863166s" podCreationTimestamp="2026-02-17 16:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:04:48.965573386 +0000 UTC m=+99.259961967" watchObservedRunningTime="2026-02-17 16:04:48.96863166 +0000 UTC m=+99.263020231" Feb 17 16:04:49 crc kubenswrapper[4874]: I0217 16:04:49.023807 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 03:48:42.461236612 +0000 UTC Feb 17 16:04:49 crc kubenswrapper[4874]: I0217 16:04:49.023916 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 17 16:04:49 crc kubenswrapper[4874]: I0217 16:04:49.034622 4874 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 17 16:04:49 crc kubenswrapper[4874]: I0217 16:04:49.035804 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podStartSLOduration=79.035774132 podStartE2EDuration="1m19.035774132s" podCreationTimestamp="2026-02-17 16:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:04:49.019344393 +0000 UTC m=+99.313732964" watchObservedRunningTime="2026-02-17 16:04:49.035774132 
+0000 UTC m=+99.330162733" Feb 17 16:04:49 crc kubenswrapper[4874]: I0217 16:04:49.040350 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/34e8116b-bd01-4eb4-acb4-ab3aa22d57d7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-47c5s\" (UID: \"34e8116b-bd01-4eb4-acb4-ab3aa22d57d7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-47c5s" Feb 17 16:04:49 crc kubenswrapper[4874]: I0217 16:04:49.040429 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/34e8116b-bd01-4eb4-acb4-ab3aa22d57d7-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-47c5s\" (UID: \"34e8116b-bd01-4eb4-acb4-ab3aa22d57d7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-47c5s" Feb 17 16:04:49 crc kubenswrapper[4874]: I0217 16:04:49.040491 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/34e8116b-bd01-4eb4-acb4-ab3aa22d57d7-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-47c5s\" (UID: \"34e8116b-bd01-4eb4-acb4-ab3aa22d57d7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-47c5s" Feb 17 16:04:49 crc kubenswrapper[4874]: I0217 16:04:49.040517 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/34e8116b-bd01-4eb4-acb4-ab3aa22d57d7-service-ca\") pod \"cluster-version-operator-5c965bbfc6-47c5s\" (UID: \"34e8116b-bd01-4eb4-acb4-ab3aa22d57d7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-47c5s" Feb 17 16:04:49 crc kubenswrapper[4874]: I0217 16:04:49.040540 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/34e8116b-bd01-4eb4-acb4-ab3aa22d57d7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-47c5s\" (UID: \"34e8116b-bd01-4eb4-acb4-ab3aa22d57d7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-47c5s" Feb 17 16:04:49 crc kubenswrapper[4874]: I0217 16:04:49.040557 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34e8116b-bd01-4eb4-acb4-ab3aa22d57d7-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-47c5s\" (UID: \"34e8116b-bd01-4eb4-acb4-ab3aa22d57d7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-47c5s" Feb 17 16:04:49 crc kubenswrapper[4874]: I0217 16:04:49.041067 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/34e8116b-bd01-4eb4-acb4-ab3aa22d57d7-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-47c5s\" (UID: \"34e8116b-bd01-4eb4-acb4-ab3aa22d57d7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-47c5s" Feb 17 16:04:49 crc kubenswrapper[4874]: I0217 16:04:49.042833 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/34e8116b-bd01-4eb4-acb4-ab3aa22d57d7-service-ca\") pod \"cluster-version-operator-5c965bbfc6-47c5s\" (UID: \"34e8116b-bd01-4eb4-acb4-ab3aa22d57d7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-47c5s" Feb 17 16:04:49 crc kubenswrapper[4874]: I0217 16:04:49.054728 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34e8116b-bd01-4eb4-acb4-ab3aa22d57d7-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-47c5s\" (UID: \"34e8116b-bd01-4eb4-acb4-ab3aa22d57d7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-47c5s" Feb 17 16:04:49 crc kubenswrapper[4874]: I0217 
16:04:49.064842 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/34e8116b-bd01-4eb4-acb4-ab3aa22d57d7-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-47c5s\" (UID: \"34e8116b-bd01-4eb4-acb4-ab3aa22d57d7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-47c5s" Feb 17 16:04:49 crc kubenswrapper[4874]: I0217 16:04:49.088871 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-47c5s" Feb 17 16:04:49 crc kubenswrapper[4874]: I0217 16:04:49.103432 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=23.103406965 podStartE2EDuration="23.103406965s" podCreationTimestamp="2026-02-17 16:04:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:04:49.069963763 +0000 UTC m=+99.364352324" watchObservedRunningTime="2026-02-17 16:04:49.103406965 +0000 UTC m=+99.397795546" Feb 17 16:04:49 crc kubenswrapper[4874]: I0217 16:04:49.144182 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dr22" podStartSLOduration=78.144155485 podStartE2EDuration="1m18.144155485s" podCreationTimestamp="2026-02-17 16:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:04:49.125499752 +0000 UTC m=+99.419888353" watchObservedRunningTime="2026-02-17 16:04:49.144155485 +0000 UTC m=+99.438544086" Feb 17 16:04:49 crc kubenswrapper[4874]: I0217 16:04:49.161964 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-47c5s" 
event={"ID":"34e8116b-bd01-4eb4-acb4-ab3aa22d57d7","Type":"ContainerStarted","Data":"c41cdd3a5783ff3b6047d9d47adece964efd48fa4c6ec9b57607752030f7755d"} Feb 17 16:04:49 crc kubenswrapper[4874]: I0217 16:04:49.174933 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-j77hc" podStartSLOduration=79.174892212 podStartE2EDuration="1m19.174892212s" podCreationTimestamp="2026-02-17 16:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:04:49.156928056 +0000 UTC m=+99.451316657" watchObservedRunningTime="2026-02-17 16:04:49.174892212 +0000 UTC m=+99.469280783" Feb 17 16:04:49 crc kubenswrapper[4874]: I0217 16:04:49.175754 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-2vkxj" podStartSLOduration=79.175745503 podStartE2EDuration="1m19.175745503s" podCreationTimestamp="2026-02-17 16:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:04:49.17481379 +0000 UTC m=+99.469202381" watchObservedRunningTime="2026-02-17 16:04:49.175745503 +0000 UTC m=+99.470134104" Feb 17 16:04:49 crc kubenswrapper[4874]: I0217 16:04:49.457139 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:49 crc kubenswrapper[4874]: I0217 16:04:49.457750 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:04:49 crc kubenswrapper[4874]: E0217 16:04:49.457854 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:49 crc kubenswrapper[4874]: I0217 16:04:49.457952 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:49 crc kubenswrapper[4874]: E0217 16:04:49.458222 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:04:49 crc kubenswrapper[4874]: I0217 16:04:49.458178 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:49 crc kubenswrapper[4874]: E0217 16:04:49.458606 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:49 crc kubenswrapper[4874]: E0217 16:04:49.458760 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:50 crc kubenswrapper[4874]: I0217 16:04:50.167430 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-47c5s" event={"ID":"34e8116b-bd01-4eb4-acb4-ab3aa22d57d7","Type":"ContainerStarted","Data":"c3bf5b36efa9bbd96ed9f4cd0d1e36b9b0e01b3a7475a6317e8a87dc9b78848f"} Feb 17 16:04:50 crc kubenswrapper[4874]: I0217 16:04:50.189107 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-47c5s" podStartSLOduration=80.189060056 podStartE2EDuration="1m20.189060056s" podCreationTimestamp="2026-02-17 16:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:04:50.18758063 +0000 UTC m=+100.481969261" watchObservedRunningTime="2026-02-17 16:04:50.189060056 +0000 UTC m=+100.483448657" Feb 17 16:04:51 crc kubenswrapper[4874]: I0217 16:04:51.456800 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:51 crc kubenswrapper[4874]: I0217 16:04:51.456878 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:51 crc kubenswrapper[4874]: I0217 16:04:51.456895 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:04:51 crc kubenswrapper[4874]: E0217 16:04:51.457002 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:51 crc kubenswrapper[4874]: I0217 16:04:51.457055 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:51 crc kubenswrapper[4874]: E0217 16:04:51.457166 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:51 crc kubenswrapper[4874]: E0217 16:04:51.457282 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:51 crc kubenswrapper[4874]: E0217 16:04:51.457418 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:04:53 crc kubenswrapper[4874]: I0217 16:04:53.456469 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:53 crc kubenswrapper[4874]: I0217 16:04:53.456495 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:04:53 crc kubenswrapper[4874]: I0217 16:04:53.456490 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:53 crc kubenswrapper[4874]: I0217 16:04:53.456661 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:53 crc kubenswrapper[4874]: E0217 16:04:53.456870 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:53 crc kubenswrapper[4874]: E0217 16:04:53.457015 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:53 crc kubenswrapper[4874]: E0217 16:04:53.457183 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:04:53 crc kubenswrapper[4874]: E0217 16:04:53.457295 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:55 crc kubenswrapper[4874]: I0217 16:04:55.457292 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:55 crc kubenswrapper[4874]: I0217 16:04:55.457355 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:55 crc kubenswrapper[4874]: E0217 16:04:55.457462 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:55 crc kubenswrapper[4874]: E0217 16:04:55.457619 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:55 crc kubenswrapper[4874]: I0217 16:04:55.457761 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:04:55 crc kubenswrapper[4874]: E0217 16:04:55.457869 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:04:55 crc kubenswrapper[4874]: I0217 16:04:55.457942 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:55 crc kubenswrapper[4874]: E0217 16:04:55.458032 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:57 crc kubenswrapper[4874]: I0217 16:04:57.456226 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:57 crc kubenswrapper[4874]: I0217 16:04:57.456442 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:04:57 crc kubenswrapper[4874]: I0217 16:04:57.456497 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:57 crc kubenswrapper[4874]: I0217 16:04:57.456488 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:57 crc kubenswrapper[4874]: E0217 16:04:57.456647 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:04:57 crc kubenswrapper[4874]: E0217 16:04:57.456763 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:57 crc kubenswrapper[4874]: E0217 16:04:57.456847 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:57 crc kubenswrapper[4874]: E0217 16:04:57.456929 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:57 crc kubenswrapper[4874]: I0217 16:04:57.458029 4874 scope.go:117] "RemoveContainer" containerID="82e707b09de72a6c64c252760bab0973083f616379f68e889e9c2084a91c83eb" Feb 17 16:04:57 crc kubenswrapper[4874]: E0217 16:04:57.458366 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-65qcw_openshift-ovn-kubernetes(10a4777a-2390-401b-86b0-87d298e9f883)\"" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" podUID="10a4777a-2390-401b-86b0-87d298e9f883" Feb 17 16:04:59 crc kubenswrapper[4874]: I0217 16:04:59.456793 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:59 crc kubenswrapper[4874]: I0217 16:04:59.456821 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:59 crc kubenswrapper[4874]: E0217 16:04:59.456969 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:59 crc kubenswrapper[4874]: I0217 16:04:59.457004 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:59 crc kubenswrapper[4874]: E0217 16:04:59.457152 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:59 crc kubenswrapper[4874]: E0217 16:04:59.457279 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:59 crc kubenswrapper[4874]: I0217 16:04:59.458233 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:04:59 crc kubenswrapper[4874]: E0217 16:04:59.458419 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:05:01 crc kubenswrapper[4874]: I0217 16:05:01.456967 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:05:01 crc kubenswrapper[4874]: I0217 16:05:01.457010 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:05:01 crc kubenswrapper[4874]: I0217 16:05:01.457170 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:05:01 crc kubenswrapper[4874]: E0217 16:05:01.457167 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:05:01 crc kubenswrapper[4874]: I0217 16:05:01.457223 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:05:01 crc kubenswrapper[4874]: E0217 16:05:01.457341 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:05:01 crc kubenswrapper[4874]: E0217 16:05:01.457437 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:05:01 crc kubenswrapper[4874]: E0217 16:05:01.457514 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:05:03 crc kubenswrapper[4874]: I0217 16:05:03.457067 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:05:03 crc kubenswrapper[4874]: E0217 16:05:03.457264 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:05:03 crc kubenswrapper[4874]: I0217 16:05:03.457545 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:05:03 crc kubenswrapper[4874]: I0217 16:05:03.457586 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:05:03 crc kubenswrapper[4874]: E0217 16:05:03.457666 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:05:03 crc kubenswrapper[4874]: E0217 16:05:03.457880 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:05:03 crc kubenswrapper[4874]: I0217 16:05:03.458165 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:05:03 crc kubenswrapper[4874]: E0217 16:05:03.458478 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:05:05 crc kubenswrapper[4874]: I0217 16:05:05.225937 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2vkxj_8aedd049-0029-44f7-869f-4a3ccdce8413/kube-multus/1.log" Feb 17 16:05:05 crc kubenswrapper[4874]: I0217 16:05:05.226693 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2vkxj_8aedd049-0029-44f7-869f-4a3ccdce8413/kube-multus/0.log" Feb 17 16:05:05 crc kubenswrapper[4874]: I0217 16:05:05.226747 4874 generic.go:334] "Generic (PLEG): container finished" podID="8aedd049-0029-44f7-869f-4a3ccdce8413" containerID="00c27238b34dfe8b2f47d4716f4fe8df93d63cc4f915794920d20d1d3cd2c245" exitCode=1 Feb 17 16:05:05 crc kubenswrapper[4874]: I0217 16:05:05.226791 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2vkxj" event={"ID":"8aedd049-0029-44f7-869f-4a3ccdce8413","Type":"ContainerDied","Data":"00c27238b34dfe8b2f47d4716f4fe8df93d63cc4f915794920d20d1d3cd2c245"} Feb 17 16:05:05 crc kubenswrapper[4874]: I0217 16:05:05.226879 4874 scope.go:117] "RemoveContainer" containerID="0e6fa6b802619c32897c0f1e7e8f96ac7096c651c46684371e49ec5dcc51ec24" Feb 17 16:05:05 crc kubenswrapper[4874]: I0217 16:05:05.227678 4874 scope.go:117] "RemoveContainer" containerID="00c27238b34dfe8b2f47d4716f4fe8df93d63cc4f915794920d20d1d3cd2c245" Feb 17 16:05:05 crc kubenswrapper[4874]: E0217 16:05:05.227978 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-2vkxj_openshift-multus(8aedd049-0029-44f7-869f-4a3ccdce8413)\"" pod="openshift-multus/multus-2vkxj" podUID="8aedd049-0029-44f7-869f-4a3ccdce8413" Feb 17 16:05:05 crc kubenswrapper[4874]: I0217 16:05:05.456423 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:05:05 crc kubenswrapper[4874]: I0217 16:05:05.456475 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:05:05 crc kubenswrapper[4874]: I0217 16:05:05.456493 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:05:05 crc kubenswrapper[4874]: E0217 16:05:05.457145 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:05:05 crc kubenswrapper[4874]: E0217 16:05:05.456917 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:05:05 crc kubenswrapper[4874]: I0217 16:05:05.456498 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:05:05 crc kubenswrapper[4874]: E0217 16:05:05.457275 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:05:05 crc kubenswrapper[4874]: E0217 16:05:05.457463 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:05:06 crc kubenswrapper[4874]: I0217 16:05:06.231974 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2vkxj_8aedd049-0029-44f7-869f-4a3ccdce8413/kube-multus/1.log" Feb 17 16:05:07 crc kubenswrapper[4874]: I0217 16:05:07.456311 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:05:07 crc kubenswrapper[4874]: I0217 16:05:07.456410 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:05:07 crc kubenswrapper[4874]: E0217 16:05:07.456514 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:05:07 crc kubenswrapper[4874]: I0217 16:05:07.456581 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:05:07 crc kubenswrapper[4874]: I0217 16:05:07.456612 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:05:07 crc kubenswrapper[4874]: E0217 16:05:07.456803 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:05:07 crc kubenswrapper[4874]: E0217 16:05:07.457286 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:05:07 crc kubenswrapper[4874]: E0217 16:05:07.457385 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:05:09 crc kubenswrapper[4874]: I0217 16:05:09.456618 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:05:09 crc kubenswrapper[4874]: I0217 16:05:09.456630 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:05:09 crc kubenswrapper[4874]: E0217 16:05:09.456750 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:05:09 crc kubenswrapper[4874]: I0217 16:05:09.456787 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:05:09 crc kubenswrapper[4874]: I0217 16:05:09.456831 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:05:09 crc kubenswrapper[4874]: E0217 16:05:09.456933 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:05:09 crc kubenswrapper[4874]: E0217 16:05:09.457164 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:05:09 crc kubenswrapper[4874]: E0217 16:05:09.457292 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:05:09 crc kubenswrapper[4874]: I0217 16:05:09.458768 4874 scope.go:117] "RemoveContainer" containerID="82e707b09de72a6c64c252760bab0973083f616379f68e889e9c2084a91c83eb" Feb 17 16:05:10 crc kubenswrapper[4874]: I0217 16:05:10.248964 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-65qcw_10a4777a-2390-401b-86b0-87d298e9f883/ovnkube-controller/3.log" Feb 17 16:05:10 crc kubenswrapper[4874]: I0217 16:05:10.252379 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" event={"ID":"10a4777a-2390-401b-86b0-87d298e9f883","Type":"ContainerStarted","Data":"7d87d8f21c5db57994f64639bb98ecc9b69b1dd0a306578e07c2628d6c636e3b"} Feb 17 16:05:10 crc kubenswrapper[4874]: I0217 16:05:10.252781 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:05:10 crc kubenswrapper[4874]: I0217 16:05:10.288287 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" podStartSLOduration=100.288267188 podStartE2EDuration="1m40.288267188s" podCreationTimestamp="2026-02-17 16:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:10.287486189 
+0000 UTC m=+120.581874780" watchObservedRunningTime="2026-02-17 16:05:10.288267188 +0000 UTC m=+120.582655759" Feb 17 16:05:10 crc kubenswrapper[4874]: I0217 16:05:10.375027 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-pm48m"] Feb 17 16:05:10 crc kubenswrapper[4874]: I0217 16:05:10.375186 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:05:10 crc kubenswrapper[4874]: E0217 16:05:10.375310 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:05:10 crc kubenswrapper[4874]: E0217 16:05:10.415659 4874 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 17 16:05:10 crc kubenswrapper[4874]: E0217 16:05:10.581722 4874 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:05:11 crc kubenswrapper[4874]: I0217 16:05:11.456734 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:05:11 crc kubenswrapper[4874]: I0217 16:05:11.456812 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:05:11 crc kubenswrapper[4874]: E0217 16:05:11.456908 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:05:11 crc kubenswrapper[4874]: I0217 16:05:11.457047 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:05:11 crc kubenswrapper[4874]: E0217 16:05:11.457320 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:05:11 crc kubenswrapper[4874]: E0217 16:05:11.457395 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:05:12 crc kubenswrapper[4874]: I0217 16:05:12.457072 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:05:12 crc kubenswrapper[4874]: E0217 16:05:12.457315 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:05:13 crc kubenswrapper[4874]: I0217 16:05:13.456897 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:05:13 crc kubenswrapper[4874]: I0217 16:05:13.456912 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:05:13 crc kubenswrapper[4874]: E0217 16:05:13.457455 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:05:13 crc kubenswrapper[4874]: I0217 16:05:13.456956 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:05:13 crc kubenswrapper[4874]: E0217 16:05:13.457687 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:05:13 crc kubenswrapper[4874]: E0217 16:05:13.457780 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:05:14 crc kubenswrapper[4874]: I0217 16:05:14.456973 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:05:14 crc kubenswrapper[4874]: E0217 16:05:14.457235 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:05:15 crc kubenswrapper[4874]: I0217 16:05:15.456206 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:05:15 crc kubenswrapper[4874]: I0217 16:05:15.456283 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:05:15 crc kubenswrapper[4874]: I0217 16:05:15.456215 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:05:15 crc kubenswrapper[4874]: E0217 16:05:15.456469 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:05:15 crc kubenswrapper[4874]: E0217 16:05:15.456561 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:05:15 crc kubenswrapper[4874]: E0217 16:05:15.456745 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:05:15 crc kubenswrapper[4874]: E0217 16:05:15.583427 4874 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:05:16 crc kubenswrapper[4874]: I0217 16:05:16.456683 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:05:16 crc kubenswrapper[4874]: E0217 16:05:16.456886 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:05:17 crc kubenswrapper[4874]: I0217 16:05:17.456579 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:05:17 crc kubenswrapper[4874]: I0217 16:05:17.456696 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:05:17 crc kubenswrapper[4874]: I0217 16:05:17.456752 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:05:17 crc kubenswrapper[4874]: E0217 16:05:17.456876 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:05:17 crc kubenswrapper[4874]: E0217 16:05:17.456957 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:05:17 crc kubenswrapper[4874]: E0217 16:05:17.457302 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:05:17 crc kubenswrapper[4874]: I0217 16:05:17.457364 4874 scope.go:117] "RemoveContainer" containerID="00c27238b34dfe8b2f47d4716f4fe8df93d63cc4f915794920d20d1d3cd2c245" Feb 17 16:05:18 crc kubenswrapper[4874]: I0217 16:05:18.284589 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2vkxj_8aedd049-0029-44f7-869f-4a3ccdce8413/kube-multus/1.log" Feb 17 16:05:18 crc kubenswrapper[4874]: I0217 16:05:18.284661 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2vkxj" event={"ID":"8aedd049-0029-44f7-869f-4a3ccdce8413","Type":"ContainerStarted","Data":"c3c53358c743c2d2894de0ef19b65fc8e4216837246a7006087759154d090046"} Feb 17 16:05:18 crc kubenswrapper[4874]: I0217 16:05:18.456558 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:05:18 crc kubenswrapper[4874]: E0217 16:05:18.456792 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:05:19 crc kubenswrapper[4874]: I0217 16:05:19.196358 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:05:19 crc kubenswrapper[4874]: I0217 16:05:19.456407 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:05:19 crc kubenswrapper[4874]: I0217 16:05:19.456465 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:05:19 crc kubenswrapper[4874]: I0217 16:05:19.456412 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:05:19 crc kubenswrapper[4874]: E0217 16:05:19.456634 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:05:19 crc kubenswrapper[4874]: E0217 16:05:19.456760 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:05:19 crc kubenswrapper[4874]: E0217 16:05:19.456967 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:05:20 crc kubenswrapper[4874]: I0217 16:05:20.456373 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:05:20 crc kubenswrapper[4874]: E0217 16:05:20.459127 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pm48m" podUID="672da34f-1e37-4e2c-b467-b5ee40c4a31b" Feb 17 16:05:21 crc kubenswrapper[4874]: I0217 16:05:21.456179 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:05:21 crc kubenswrapper[4874]: I0217 16:05:21.456246 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:05:21 crc kubenswrapper[4874]: I0217 16:05:21.456251 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:05:21 crc kubenswrapper[4874]: I0217 16:05:21.458962 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 17 16:05:21 crc kubenswrapper[4874]: I0217 16:05:21.459183 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 17 16:05:21 crc kubenswrapper[4874]: I0217 16:05:21.459600 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 17 16:05:21 crc kubenswrapper[4874]: I0217 16:05:21.461176 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 17 16:05:22 crc kubenswrapper[4874]: I0217 16:05:22.456815 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:05:22 crc kubenswrapper[4874]: I0217 16:05:22.459768 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 17 16:05:22 crc kubenswrapper[4874]: I0217 16:05:22.460031 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.409447 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.470918 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-zrr5t"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.471640 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zrr5t" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.473250 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xjzv8"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.474047 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xjzv8" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.475154 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-v9tn7"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.475880 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-v9tn7" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.476973 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-fchf8"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.477673 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-fchf8" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.478287 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.479797 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rxt2d"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.480747 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rxt2d" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.481953 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4j5ws"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.482767 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4j5ws" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.497332 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cw7tb"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.498169 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cw7tb" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.505655 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-s464s"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.506613 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-s464s" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.508256 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-579zx"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.509124 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-579zx" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.577775 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.577978 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.578679 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dpvgb"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.579187 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dpvgb" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.580306 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.580514 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.580705 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.582305 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.582593 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.585229 4874 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-console/console-f9d7485db-6wpw5"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.585362 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.585440 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.585659 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.585747 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.585800 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.585831 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.585866 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.585910 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.585924 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.585931 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.585943 4874 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.586012 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.586055 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.586098 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.586106 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.586125 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.586150 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.586196 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.586223 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.586230 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.586259 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.586310 4874 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"image-registry-operator-tls" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.586374 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.586389 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.586443 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.586458 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.586464 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.586506 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.585750 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-6wpw5" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.586552 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.586312 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.586598 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.586643 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.585750 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.586728 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.586815 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.586824 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.586904 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.586918 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 17 
16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.586935 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.586998 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.587067 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.587069 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.587139 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.587156 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.586905 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.587217 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.587336 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.587449 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.589637 4874 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-dns-operator/dns-operator-744455d44c-2gktq"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.590117 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-rxw56"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.590468 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-rxw56" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.590787 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2gktq" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.593008 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.595514 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.595699 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.604244 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.604258 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.609280 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.609594 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-7mw6t"] Feb 17 16:05:29 crc 
kubenswrapper[4874]: I0217 16:05:29.609879 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.610018 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84mb2"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.610112 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.610181 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.610251 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-7mw6t" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.610267 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vn6v2"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.610307 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84mb2" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.610590 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.610771 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.610822 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.610878 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.610897 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vn6v2" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.610907 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.610975 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.610936 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.611112 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2kf8w"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.611245 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.611416 
4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.611530 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.611586 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.629215 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.632113 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.632527 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.633526 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.636194 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.637358 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.637662 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.638551 4874 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.649884 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.650086 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.650437 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.651436 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f6c5ab25-b40a-4e91-b4e6-811ec8093a2a-etcd-serving-ca\") pod \"apiserver-76f77b778f-s464s\" (UID: \"f6c5ab25-b40a-4e91-b4e6-811ec8093a2a\") " pod="openshift-apiserver/apiserver-76f77b778f-s464s" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.651468 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6c5ab25-b40a-4e91-b4e6-811ec8093a2a-serving-cert\") pod \"apiserver-76f77b778f-s464s\" (UID: \"f6c5ab25-b40a-4e91-b4e6-811ec8093a2a\") " pod="openshift-apiserver/apiserver-76f77b778f-s464s" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.651487 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz9q8\" (UniqueName: \"kubernetes.io/projected/d86c736d-5bee-4763-ac11-c9a2d4bce6d4-kube-api-access-gz9q8\") pod \"machine-approver-56656f9798-zrr5t\" (UID: \"d86c736d-5bee-4763-ac11-c9a2d4bce6d4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zrr5t" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.651508 4874 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c2bc1be-9874-4d6c-b887-4a658d99a909-serving-cert\") pod \"etcd-operator-b45778765-v9tn7\" (UID: \"5c2bc1be-9874-4d6c-b887-4a658d99a909\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v9tn7" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.651523 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c2bc1be-9874-4d6c-b887-4a658d99a909-config\") pod \"etcd-operator-b45778765-v9tn7\" (UID: \"5c2bc1be-9874-4d6c-b887-4a658d99a909\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v9tn7" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.651537 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w599d\" (UniqueName: \"kubernetes.io/projected/5c2bc1be-9874-4d6c-b887-4a658d99a909-kube-api-access-w599d\") pod \"etcd-operator-b45778765-v9tn7\" (UID: \"5c2bc1be-9874-4d6c-b887-4a658d99a909\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v9tn7" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.651554 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5c2bc1be-9874-4d6c-b887-4a658d99a909-etcd-ca\") pod \"etcd-operator-b45778765-v9tn7\" (UID: \"5c2bc1be-9874-4d6c-b887-4a658d99a909\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v9tn7" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.651568 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9fa024ca-53bd-4aeb-a216-26ed6044cf24-client-ca\") pod \"controller-manager-879f6c89f-cw7tb\" (UID: \"9fa024ca-53bd-4aeb-a216-26ed6044cf24\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-cw7tb" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.651587 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fa024ca-53bd-4aeb-a216-26ed6044cf24-config\") pod \"controller-manager-879f6c89f-cw7tb\" (UID: \"9fa024ca-53bd-4aeb-a216-26ed6044cf24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cw7tb" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.651605 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5c2bc1be-9874-4d6c-b887-4a658d99a909-etcd-service-ca\") pod \"etcd-operator-b45778765-v9tn7\" (UID: \"5c2bc1be-9874-4d6c-b887-4a658d99a909\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v9tn7" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.651633 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hprf6\" (UniqueName: \"kubernetes.io/projected/f6c5ab25-b40a-4e91-b4e6-811ec8093a2a-kube-api-access-hprf6\") pod \"apiserver-76f77b778f-s464s\" (UID: \"f6c5ab25-b40a-4e91-b4e6-811ec8093a2a\") " pod="openshift-apiserver/apiserver-76f77b778f-s464s" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.651647 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d6e7ed7-868d-4e75-9d22-7f38d441aadf-serving-cert\") pod \"apiserver-7bbb656c7d-579zx\" (UID: \"9d6e7ed7-868d-4e75-9d22-7f38d441aadf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-579zx" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.651664 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f6c5ab25-b40a-4e91-b4e6-811ec8093a2a-audit-dir\") pod \"apiserver-76f77b778f-s464s\" (UID: \"f6c5ab25-b40a-4e91-b4e6-811ec8093a2a\") " pod="openshift-apiserver/apiserver-76f77b778f-s464s" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.651679 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9d6e7ed7-868d-4e75-9d22-7f38d441aadf-audit-dir\") pod \"apiserver-7bbb656c7d-579zx\" (UID: \"9d6e7ed7-868d-4e75-9d22-7f38d441aadf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-579zx" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.651696 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f6c5ab25-b40a-4e91-b4e6-811ec8093a2a-audit\") pod \"apiserver-76f77b778f-s464s\" (UID: \"f6c5ab25-b40a-4e91-b4e6-811ec8093a2a\") " pod="openshift-apiserver/apiserver-76f77b778f-s464s" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.651710 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d9790481-730e-4e06-a338-bd615b4039e2-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4j5ws\" (UID: \"d9790481-730e-4e06-a338-bd615b4039e2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4j5ws" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.651752 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be182c78-fa2c-49ab-9ec4-698854f3ca51-config\") pod \"route-controller-manager-6576b87f9c-xjzv8\" (UID: \"be182c78-fa2c-49ab-9ec4-698854f3ca51\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xjzv8" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.651771 4874 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9790481-730e-4e06-a338-bd615b4039e2-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4j5ws\" (UID: \"d9790481-730e-4e06-a338-bd615b4039e2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4j5ws" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.651786 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be182c78-fa2c-49ab-9ec4-698854f3ca51-serving-cert\") pod \"route-controller-manager-6576b87f9c-xjzv8\" (UID: \"be182c78-fa2c-49ab-9ec4-698854f3ca51\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xjzv8" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.651802 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6c5ab25-b40a-4e91-b4e6-811ec8093a2a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-s464s\" (UID: \"f6c5ab25-b40a-4e91-b4e6-811ec8093a2a\") " pod="openshift-apiserver/apiserver-76f77b778f-s464s" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.651818 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48x5t\" (UniqueName: \"kubernetes.io/projected/e4493714-3270-4b3b-8b07-3d9faa92b110-kube-api-access-48x5t\") pod \"cluster-samples-operator-665b6dd947-rxt2d\" (UID: \"e4493714-3270-4b3b-8b07-3d9faa92b110\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rxt2d" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.651835 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/f6c5ab25-b40a-4e91-b4e6-811ec8093a2a-encryption-config\") pod \"apiserver-76f77b778f-s464s\" (UID: \"f6c5ab25-b40a-4e91-b4e6-811ec8093a2a\") " pod="openshift-apiserver/apiserver-76f77b778f-s464s" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.651856 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt5z7\" (UniqueName: \"kubernetes.io/projected/be182c78-fa2c-49ab-9ec4-698854f3ca51-kube-api-access-tt5z7\") pod \"route-controller-manager-6576b87f9c-xjzv8\" (UID: \"be182c78-fa2c-49ab-9ec4-698854f3ca51\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xjzv8" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.651872 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d86c736d-5bee-4763-ac11-c9a2d4bce6d4-auth-proxy-config\") pod \"machine-approver-56656f9798-zrr5t\" (UID: \"d86c736d-5bee-4763-ac11-c9a2d4bce6d4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zrr5t" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.651879 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.651888 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e4493714-3270-4b3b-8b07-3d9faa92b110-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rxt2d\" (UID: \"e4493714-3270-4b3b-8b07-3d9faa92b110\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rxt2d" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.651902 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/9fa024ca-53bd-4aeb-a216-26ed6044cf24-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cw7tb\" (UID: \"9fa024ca-53bd-4aeb-a216-26ed6044cf24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cw7tb" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.651921 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d86c736d-5bee-4763-ac11-c9a2d4bce6d4-machine-approver-tls\") pod \"machine-approver-56656f9798-zrr5t\" (UID: \"d86c736d-5bee-4763-ac11-c9a2d4bce6d4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zrr5t" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.651944 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5c2bc1be-9874-4d6c-b887-4a658d99a909-etcd-client\") pod \"etcd-operator-b45778765-v9tn7\" (UID: \"5c2bc1be-9874-4d6c-b887-4a658d99a909\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v9tn7" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.651962 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjmvd\" (UniqueName: \"kubernetes.io/projected/d9790481-730e-4e06-a338-bd615b4039e2-kube-api-access-tjmvd\") pod \"cluster-image-registry-operator-dc59b4c8b-4j5ws\" (UID: \"d9790481-730e-4e06-a338-bd615b4039e2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4j5ws" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.651978 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d86c736d-5bee-4763-ac11-c9a2d4bce6d4-config\") pod \"machine-approver-56656f9798-zrr5t\" (UID: \"d86c736d-5bee-4763-ac11-c9a2d4bce6d4\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zrr5t" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.652000 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6mqp\" (UniqueName: \"kubernetes.io/projected/43b2c0f0-9f47-4e75-ac3c-ee4d1f2e1c81-kube-api-access-n6mqp\") pod \"downloads-7954f5f757-fchf8\" (UID: \"43b2c0f0-9f47-4e75-ac3c-ee4d1f2e1c81\") " pod="openshift-console/downloads-7954f5f757-fchf8" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.652021 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f6c5ab25-b40a-4e91-b4e6-811ec8093a2a-image-import-ca\") pod \"apiserver-76f77b778f-s464s\" (UID: \"f6c5ab25-b40a-4e91-b4e6-811ec8093a2a\") " pod="openshift-apiserver/apiserver-76f77b778f-s464s" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.652037 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fa024ca-53bd-4aeb-a216-26ed6044cf24-serving-cert\") pod \"controller-manager-879f6c89f-cw7tb\" (UID: \"9fa024ca-53bd-4aeb-a216-26ed6044cf24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cw7tb" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.652051 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9d6e7ed7-868d-4e75-9d22-7f38d441aadf-etcd-client\") pod \"apiserver-7bbb656c7d-579zx\" (UID: \"9d6e7ed7-868d-4e75-9d22-7f38d441aadf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-579zx" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.652068 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/9d6e7ed7-868d-4e75-9d22-7f38d441aadf-encryption-config\") pod \"apiserver-7bbb656c7d-579zx\" (UID: \"9d6e7ed7-868d-4e75-9d22-7f38d441aadf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-579zx" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.652098 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f6c5ab25-b40a-4e91-b4e6-811ec8093a2a-node-pullsecrets\") pod \"apiserver-76f77b778f-s464s\" (UID: \"f6c5ab25-b40a-4e91-b4e6-811ec8093a2a\") " pod="openshift-apiserver/apiserver-76f77b778f-s464s" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.652111 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9d6e7ed7-868d-4e75-9d22-7f38d441aadf-audit-policies\") pod \"apiserver-7bbb656c7d-579zx\" (UID: \"9d6e7ed7-868d-4e75-9d22-7f38d441aadf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-579zx" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.652135 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtqrl\" (UniqueName: \"kubernetes.io/projected/9fa024ca-53bd-4aeb-a216-26ed6044cf24-kube-api-access-wtqrl\") pod \"controller-manager-879f6c89f-cw7tb\" (UID: \"9fa024ca-53bd-4aeb-a216-26ed6044cf24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cw7tb" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.652150 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d9790481-730e-4e06-a338-bd615b4039e2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4j5ws\" (UID: \"d9790481-730e-4e06-a338-bd615b4039e2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4j5ws" 
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.652164 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d6e7ed7-868d-4e75-9d22-7f38d441aadf-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-579zx\" (UID: \"9d6e7ed7-868d-4e75-9d22-7f38d441aadf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-579zx" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.652179 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f6c5ab25-b40a-4e91-b4e6-811ec8093a2a-etcd-client\") pod \"apiserver-76f77b778f-s464s\" (UID: \"f6c5ab25-b40a-4e91-b4e6-811ec8093a2a\") " pod="openshift-apiserver/apiserver-76f77b778f-s464s" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.652193 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be182c78-fa2c-49ab-9ec4-698854f3ca51-client-ca\") pod \"route-controller-manager-6576b87f9c-xjzv8\" (UID: \"be182c78-fa2c-49ab-9ec4-698854f3ca51\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xjzv8" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.652206 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfbxx\" (UniqueName: \"kubernetes.io/projected/9d6e7ed7-868d-4e75-9d22-7f38d441aadf-kube-api-access-pfbxx\") pod \"apiserver-7bbb656c7d-579zx\" (UID: \"9d6e7ed7-868d-4e75-9d22-7f38d441aadf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-579zx" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.652222 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6c5ab25-b40a-4e91-b4e6-811ec8093a2a-config\") pod 
\"apiserver-76f77b778f-s464s\" (UID: \"f6c5ab25-b40a-4e91-b4e6-811ec8093a2a\") " pod="openshift-apiserver/apiserver-76f77b778f-s464s" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.652237 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9d6e7ed7-868d-4e75-9d22-7f38d441aadf-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-579zx\" (UID: \"9d6e7ed7-868d-4e75-9d22-7f38d441aadf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-579zx" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.652054 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.652841 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hkn4z"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.653252 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stl9h"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.653484 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-pptdb"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.653857 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.653875 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pptdb" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.654103 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.654109 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.654212 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hkn4z" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.654337 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.654374 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.654394 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.654432 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stl9h" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.654435 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.654954 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.655161 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.655713 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.655864 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.656057 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.656326 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jbd4d"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.657504 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-jjhdq"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.657688 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jbd4d" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.658370 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.658863 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jjhdq" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.660418 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ggrcz"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.660833 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-ggrcz" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.661228 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vxvz6"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.661702 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.661794 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vxvz6" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.661942 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.662549 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mfmbh"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.663042 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mfmbh" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.663414 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-hpbgn"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.663921 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hpbgn" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.664781 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qnm5s"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.666625 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qnm5s" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.666947 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.669366 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.673060 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.674392 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-6g2fs"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.675274 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6g2fs" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.678020 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-l5nms"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.678770 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.679994 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-64tmb"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.680899 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-64tmb" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.682266 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bcj8r"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.682594 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dpvgb"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.682645 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bcj8r" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.693691 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-pmtgc"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.693929 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.694645 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-pmtgc" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.695520 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d7k8j"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.698886 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.704252 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-v9tn7"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.704291 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-8w5fg"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.704415 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d7k8j" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.704758 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2w9mt"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.705058 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rxt2d"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.705093 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-fchf8"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.705168 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2w9mt" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.705333 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8w5fg" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.705445 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-k7w57"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.705860 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-k7w57" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.706692 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hk8nz"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.707131 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hk8nz" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.707945 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-s464s"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.709099 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xjzv8"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.710740 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522400-b59b9"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.711253 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-579zx"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.711320 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-b59b9" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.713344 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.716888 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-pm6wc"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.718760 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cw7tb"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.718847 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pm6wc" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.719308 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4j5ws"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.720815 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-rxw56"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.724190 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2gktq"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.730149 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84mb2"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.732504 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vn6v2"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.737323 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-6wpw5"] Feb 17 16:05:29 crc 
kubenswrapper[4874]: I0217 16:05:29.737356 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-7mw6t"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.737580 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hkn4z"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.737639 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.738619 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2kf8w"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.740161 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-jt8g9"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.740906 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jbd4d"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.740997 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-jt8g9" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.742208 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ggrcz"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.743231 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-jjhdq"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.744195 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mfmbh"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.745248 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-l5nms"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.746285 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-64tmb"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.748039 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2w9mt"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.751555 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-hpbgn"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.752945 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70fceb62-f510-491f-a04c-0a2efd5439f7-config\") pod \"machine-api-operator-5694c8668f-rxw56\" (UID: \"70fceb62-f510-491f-a04c-0a2efd5439f7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rxw56" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.752976 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-zb25d\" (UniqueName: \"kubernetes.io/projected/5a72263e-c92a-4d11-9751-aa4240676a0e-kube-api-access-zb25d\") pod \"openshift-apiserver-operator-796bbdcf4f-vn6v2\" (UID: \"5a72263e-c92a-4d11-9751-aa4240676a0e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vn6v2" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.752996 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59309c0f-86d9-4425-8752-5e57fbbf9827-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dpvgb\" (UID: \"59309c0f-86d9-4425-8752-5e57fbbf9827\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dpvgb" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.753084 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d86c736d-5bee-4763-ac11-c9a2d4bce6d4-auth-proxy-config\") pod \"machine-approver-56656f9798-zrr5t\" (UID: \"d86c736d-5bee-4763-ac11-c9a2d4bce6d4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zrr5t" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.753185 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e4493714-3270-4b3b-8b07-3d9faa92b110-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rxt2d\" (UID: \"e4493714-3270-4b3b-8b07-3d9faa92b110\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rxt2d" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.753281 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt5z7\" (UniqueName: \"kubernetes.io/projected/be182c78-fa2c-49ab-9ec4-698854f3ca51-kube-api-access-tt5z7\") pod \"route-controller-manager-6576b87f9c-xjzv8\" (UID: 
\"be182c78-fa2c-49ab-9ec4-698854f3ca51\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xjzv8" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.753301 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/435efe4c-197a-43a2-9033-8dc57e98c006-auth-proxy-config\") pod \"machine-config-operator-74547568cd-jjhdq\" (UID: \"435efe4c-197a-43a2-9033-8dc57e98c006\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jjhdq" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.753459 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9fa024ca-53bd-4aeb-a216-26ed6044cf24-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cw7tb\" (UID: \"9fa024ca-53bd-4aeb-a216-26ed6044cf24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cw7tb" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.753490 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cfccd2a3-037d-4b17-a269-952847ad533a-console-config\") pod \"console-f9d7485db-6wpw5\" (UID: \"cfccd2a3-037d-4b17-a269-952847ad533a\") " pod="openshift-console/console-f9d7485db-6wpw5" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.753568 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea652a36-9ddd-4c88-8e96-1f66c3ef0edf-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-stl9h\" (UID: \"ea652a36-9ddd-4c88-8e96-1f66c3ef0edf\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stl9h" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.753616 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5097339d-dd80-4346-940d-097455cd8579-trusted-ca\") pod \"console-operator-58897d9998-7mw6t\" (UID: \"5097339d-dd80-4346-940d-097455cd8579\") " pod="openshift-console-operator/console-operator-58897d9998-7mw6t" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.753637 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cfccd2a3-037d-4b17-a269-952847ad533a-console-serving-cert\") pod \"console-f9d7485db-6wpw5\" (UID: \"cfccd2a3-037d-4b17-a269-952847ad533a\") " pod="openshift-console/console-f9d7485db-6wpw5" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.753664 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5c2bc1be-9874-4d6c-b887-4a658d99a909-etcd-client\") pod \"etcd-operator-b45778765-v9tn7\" (UID: \"5c2bc1be-9874-4d6c-b887-4a658d99a909\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v9tn7" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.753686 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d86c736d-5bee-4763-ac11-c9a2d4bce6d4-machine-approver-tls\") pod \"machine-approver-56656f9798-zrr5t\" (UID: \"d86c736d-5bee-4763-ac11-c9a2d4bce6d4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zrr5t" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.753706 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59309c0f-86d9-4425-8752-5e57fbbf9827-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dpvgb\" (UID: \"59309c0f-86d9-4425-8752-5e57fbbf9827\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dpvgb"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.753729 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5097339d-dd80-4346-940d-097455cd8579-serving-cert\") pod \"console-operator-58897d9998-7mw6t\" (UID: \"5097339d-dd80-4346-940d-097455cd8579\") " pod="openshift-console-operator/console-operator-58897d9998-7mw6t"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.753752 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/16120c4a-9a38-4d39-b5ed-784978d4521f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-jbd4d\" (UID: \"16120c4a-9a38-4d39-b5ed-784978d4521f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jbd4d"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.753768 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cfccd2a3-037d-4b17-a269-952847ad533a-trusted-ca-bundle\") pod \"console-f9d7485db-6wpw5\" (UID: \"cfccd2a3-037d-4b17-a269-952847ad533a\") " pod="openshift-console/console-f9d7485db-6wpw5"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.753782 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cghk\" (UniqueName: \"kubernetes.io/projected/cfccd2a3-037d-4b17-a269-952847ad533a-kube-api-access-7cghk\") pod \"console-f9d7485db-6wpw5\" (UID: \"cfccd2a3-037d-4b17-a269-952847ad533a\") " pod="openshift-console/console-f9d7485db-6wpw5"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.753800 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjmvd\" (UniqueName: \"kubernetes.io/projected/d9790481-730e-4e06-a338-bd615b4039e2-kube-api-access-tjmvd\") pod \"cluster-image-registry-operator-dc59b4c8b-4j5ws\" (UID: \"d9790481-730e-4e06-a338-bd615b4039e2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4j5ws"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.753815 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d86c736d-5bee-4763-ac11-c9a2d4bce6d4-config\") pod \"machine-approver-56656f9798-zrr5t\" (UID: \"d86c736d-5bee-4763-ac11-c9a2d4bce6d4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zrr5t"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.753839 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6mqp\" (UniqueName: \"kubernetes.io/projected/43b2c0f0-9f47-4e75-ac3c-ee4d1f2e1c81-kube-api-access-n6mqp\") pod \"downloads-7954f5f757-fchf8\" (UID: \"43b2c0f0-9f47-4e75-ac3c-ee4d1f2e1c81\") " pod="openshift-console/downloads-7954f5f757-fchf8"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.753856 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/45e5b8ae-4eef-4449-b844-574c3b737ad4-audit-dir\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.753872 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f6c5ab25-b40a-4e91-b4e6-811ec8093a2a-image-import-ca\") pod \"apiserver-76f77b778f-s464s\" (UID: \"f6c5ab25-b40a-4e91-b4e6-811ec8093a2a\") " pod="openshift-apiserver/apiserver-76f77b778f-s464s"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.753896 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fa024ca-53bd-4aeb-a216-26ed6044cf24-serving-cert\") pod \"controller-manager-879f6c89f-cw7tb\" (UID: \"9fa024ca-53bd-4aeb-a216-26ed6044cf24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cw7tb"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.753920 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9d6e7ed7-868d-4e75-9d22-7f38d441aadf-etcd-client\") pod \"apiserver-7bbb656c7d-579zx\" (UID: \"9d6e7ed7-868d-4e75-9d22-7f38d441aadf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-579zx"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.753935 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.753951 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cfccd2a3-037d-4b17-a269-952847ad533a-console-oauth-config\") pod \"console-f9d7485db-6wpw5\" (UID: \"cfccd2a3-037d-4b17-a269-952847ad533a\") " pod="openshift-console/console-f9d7485db-6wpw5"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.753952 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d86c736d-5bee-4763-ac11-c9a2d4bce6d4-auth-proxy-config\") pod \"machine-approver-56656f9798-zrr5t\" (UID: \"d86c736d-5bee-4763-ac11-c9a2d4bce6d4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zrr5t"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.753973 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/435efe4c-197a-43a2-9033-8dc57e98c006-proxy-tls\") pod \"machine-config-operator-74547568cd-jjhdq\" (UID: \"435efe4c-197a-43a2-9033-8dc57e98c006\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jjhdq"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.753990 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cfccd2a3-037d-4b17-a269-952847ad533a-oauth-serving-cert\") pod \"console-f9d7485db-6wpw5\" (UID: \"cfccd2a3-037d-4b17-a269-952847ad533a\") " pod="openshift-console/console-f9d7485db-6wpw5"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754010 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9d6e7ed7-868d-4e75-9d22-7f38d441aadf-encryption-config\") pod \"apiserver-7bbb656c7d-579zx\" (UID: \"9d6e7ed7-868d-4e75-9d22-7f38d441aadf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-579zx"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754033 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f6c5ab25-b40a-4e91-b4e6-811ec8093a2a-node-pullsecrets\") pod \"apiserver-76f77b778f-s464s\" (UID: \"f6c5ab25-b40a-4e91-b4e6-811ec8093a2a\") " pod="openshift-apiserver/apiserver-76f77b778f-s464s"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754048 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9d6e7ed7-868d-4e75-9d22-7f38d441aadf-audit-policies\") pod \"apiserver-7bbb656c7d-579zx\" (UID: \"9d6e7ed7-868d-4e75-9d22-7f38d441aadf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-579zx"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754064 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/70fceb62-f510-491f-a04c-0a2efd5439f7-images\") pod \"machine-api-operator-5694c8668f-rxw56\" (UID: \"70fceb62-f510-491f-a04c-0a2efd5439f7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rxw56"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754095 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754111 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/58504546-67ad-4e0d-88ea-53fcf0684659-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-pptdb\" (UID: \"58504546-67ad-4e0d-88ea-53fcf0684659\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pptdb"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754127 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtqrl\" (UniqueName: \"kubernetes.io/projected/9fa024ca-53bd-4aeb-a216-26ed6044cf24-kube-api-access-wtqrl\") pod \"controller-manager-879f6c89f-cw7tb\" (UID: \"9fa024ca-53bd-4aeb-a216-26ed6044cf24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cw7tb"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754145 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/70fceb62-f510-491f-a04c-0a2efd5439f7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-rxw56\" (UID: \"70fceb62-f510-491f-a04c-0a2efd5439f7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rxw56"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754162 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d9790481-730e-4e06-a338-bd615b4039e2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4j5ws\" (UID: \"d9790481-730e-4e06-a338-bd615b4039e2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4j5ws"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754179 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d6e7ed7-868d-4e75-9d22-7f38d441aadf-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-579zx\" (UID: \"9d6e7ed7-868d-4e75-9d22-7f38d441aadf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-579zx"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754194 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea652a36-9ddd-4c88-8e96-1f66c3ef0edf-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-stl9h\" (UID: \"ea652a36-9ddd-4c88-8e96-1f66c3ef0edf\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stl9h"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754208 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6kmh\" (UniqueName: \"kubernetes.io/projected/5097339d-dd80-4346-940d-097455cd8579-kube-api-access-b6kmh\") pod \"console-operator-58897d9998-7mw6t\" (UID: \"5097339d-dd80-4346-940d-097455cd8579\") " pod="openshift-console-operator/console-operator-58897d9998-7mw6t"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754219 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754222 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cfccd2a3-037d-4b17-a269-952847ad533a-service-ca\") pod \"console-f9d7485db-6wpw5\" (UID: \"cfccd2a3-037d-4b17-a269-952847ad533a\") " pod="openshift-console/console-f9d7485db-6wpw5"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754370 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f6c5ab25-b40a-4e91-b4e6-811ec8093a2a-etcd-client\") pod \"apiserver-76f77b778f-s464s\" (UID: \"f6c5ab25-b40a-4e91-b4e6-811ec8093a2a\") " pod="openshift-apiserver/apiserver-76f77b778f-s464s"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754390 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be182c78-fa2c-49ab-9ec4-698854f3ca51-client-ca\") pod \"route-controller-manager-6576b87f9c-xjzv8\" (UID: \"be182c78-fa2c-49ab-9ec4-698854f3ca51\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xjzv8"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754409 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfbxx\" (UniqueName: \"kubernetes.io/projected/9d6e7ed7-868d-4e75-9d22-7f38d441aadf-kube-api-access-pfbxx\") pod \"apiserver-7bbb656c7d-579zx\" (UID: \"9d6e7ed7-868d-4e75-9d22-7f38d441aadf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-579zx"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754429 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5863402f-d384-4df7-96b5-a3ae67599f4c-config\") pod \"kube-controller-manager-operator-78b949d7b-84mb2\" (UID: \"5863402f-d384-4df7-96b5-a3ae67599f4c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84mb2"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754448 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6c5ab25-b40a-4e91-b4e6-811ec8093a2a-config\") pod \"apiserver-76f77b778f-s464s\" (UID: \"f6c5ab25-b40a-4e91-b4e6-811ec8093a2a\") " pod="openshift-apiserver/apiserver-76f77b778f-s464s"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754464 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9d6e7ed7-868d-4e75-9d22-7f38d441aadf-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-579zx\" (UID: \"9d6e7ed7-868d-4e75-9d22-7f38d441aadf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-579zx"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754480 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754500 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb99r\" (UniqueName: \"kubernetes.io/projected/59309c0f-86d9-4425-8752-5e57fbbf9827-kube-api-access-jb99r\") pod \"openshift-controller-manager-operator-756b6f6bc6-dpvgb\" (UID: \"59309c0f-86d9-4425-8752-5e57fbbf9827\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dpvgb"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754521 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/639cdaa5-0dc8-4709-80c7-37d8c71e6eda-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hkn4z\" (UID: \"639cdaa5-0dc8-4709-80c7-37d8c71e6eda\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hkn4z"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754538 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6rjq\" (UniqueName: \"kubernetes.io/projected/f81c1252-cad4-4b23-8b84-c5385c96641c-kube-api-access-n6rjq\") pod \"dns-operator-744455d44c-2gktq\" (UID: \"f81c1252-cad4-4b23-8b84-c5385c96641c\") " pod="openshift-dns-operator/dns-operator-744455d44c-2gktq"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754556 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c2bc1be-9874-4d6c-b887-4a658d99a909-serving-cert\") pod \"etcd-operator-b45778765-v9tn7\" (UID: \"5c2bc1be-9874-4d6c-b887-4a658d99a909\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v9tn7"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754571 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c2bc1be-9874-4d6c-b887-4a658d99a909-config\") pod \"etcd-operator-b45778765-v9tn7\" (UID: \"5c2bc1be-9874-4d6c-b887-4a658d99a909\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v9tn7"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754586 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f6c5ab25-b40a-4e91-b4e6-811ec8093a2a-etcd-serving-ca\") pod \"apiserver-76f77b778f-s464s\" (UID: \"f6c5ab25-b40a-4e91-b4e6-811ec8093a2a\") " pod="openshift-apiserver/apiserver-76f77b778f-s464s"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754601 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6c5ab25-b40a-4e91-b4e6-811ec8093a2a-serving-cert\") pod \"apiserver-76f77b778f-s464s\" (UID: \"f6c5ab25-b40a-4e91-b4e6-811ec8093a2a\") " pod="openshift-apiserver/apiserver-76f77b778f-s464s"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754619 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz9q8\" (UniqueName: \"kubernetes.io/projected/d86c736d-5bee-4763-ac11-c9a2d4bce6d4-kube-api-access-gz9q8\") pod \"machine-approver-56656f9798-zrr5t\" (UID: \"d86c736d-5bee-4763-ac11-c9a2d4bce6d4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zrr5t"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754636 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w599d\" (UniqueName: \"kubernetes.io/projected/5c2bc1be-9874-4d6c-b887-4a658d99a909-kube-api-access-w599d\") pod \"etcd-operator-b45778765-v9tn7\" (UID: \"5c2bc1be-9874-4d6c-b887-4a658d99a909\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v9tn7"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754653 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfpkg\" (UniqueName: \"kubernetes.io/projected/45e5b8ae-4eef-4449-b844-574c3b737ad4-kube-api-access-mfpkg\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754667 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5c2bc1be-9874-4d6c-b887-4a658d99a909-etcd-ca\") pod \"etcd-operator-b45778765-v9tn7\" (UID: \"5c2bc1be-9874-4d6c-b887-4a658d99a909\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v9tn7"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754683 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9fa024ca-53bd-4aeb-a216-26ed6044cf24-client-ca\") pod \"controller-manager-879f6c89f-cw7tb\" (UID: \"9fa024ca-53bd-4aeb-a216-26ed6044cf24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cw7tb"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754699 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a72263e-c92a-4d11-9751-aa4240676a0e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vn6v2\" (UID: \"5a72263e-c92a-4d11-9751-aa4240676a0e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vn6v2"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754716 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16120c4a-9a38-4d39-b5ed-784978d4521f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-jbd4d\" (UID: \"16120c4a-9a38-4d39-b5ed-784978d4521f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jbd4d"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754734 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fa024ca-53bd-4aeb-a216-26ed6044cf24-config\") pod \"controller-manager-879f6c89f-cw7tb\" (UID: \"9fa024ca-53bd-4aeb-a216-26ed6044cf24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cw7tb"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754750 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5863402f-d384-4df7-96b5-a3ae67599f4c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-84mb2\" (UID: \"5863402f-d384-4df7-96b5-a3ae67599f4c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84mb2"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754768 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5c2bc1be-9874-4d6c-b887-4a658d99a909-etcd-service-ca\") pod \"etcd-operator-b45778765-v9tn7\" (UID: \"5c2bc1be-9874-4d6c-b887-4a658d99a909\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v9tn7"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754782 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/639cdaa5-0dc8-4709-80c7-37d8c71e6eda-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hkn4z\" (UID: \"639cdaa5-0dc8-4709-80c7-37d8c71e6eda\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hkn4z"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754798 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/639cdaa5-0dc8-4709-80c7-37d8c71e6eda-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hkn4z\" (UID: \"639cdaa5-0dc8-4709-80c7-37d8c71e6eda\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hkn4z"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754818 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn8wr\" (UniqueName: \"kubernetes.io/projected/ea652a36-9ddd-4c88-8e96-1f66c3ef0edf-kube-api-access-fn8wr\") pod \"kube-storage-version-migrator-operator-b67b599dd-stl9h\" (UID: \"ea652a36-9ddd-4c88-8e96-1f66c3ef0edf\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stl9h"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754834 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/45e5b8ae-4eef-4449-b844-574c3b737ad4-audit-policies\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754850 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754866 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8b7t\" (UniqueName: \"kubernetes.io/projected/58504546-67ad-4e0d-88ea-53fcf0684659-kube-api-access-s8b7t\") pod \"machine-config-controller-84d6567774-pptdb\" (UID: \"58504546-67ad-4e0d-88ea-53fcf0684659\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pptdb"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754884 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d6e7ed7-868d-4e75-9d22-7f38d441aadf-serving-cert\") pod \"apiserver-7bbb656c7d-579zx\" (UID: \"9d6e7ed7-868d-4e75-9d22-7f38d441aadf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-579zx"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754900 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/435efe4c-197a-43a2-9033-8dc57e98c006-images\") pod \"machine-config-operator-74547568cd-jjhdq\" (UID: \"435efe4c-197a-43a2-9033-8dc57e98c006\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jjhdq"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754915 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16120c4a-9a38-4d39-b5ed-784978d4521f-config\") pod \"kube-apiserver-operator-766d6c64bb-jbd4d\" (UID: \"16120c4a-9a38-4d39-b5ed-784978d4521f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jbd4d"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754941 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hprf6\" (UniqueName: \"kubernetes.io/projected/f6c5ab25-b40a-4e91-b4e6-811ec8093a2a-kube-api-access-hprf6\") pod \"apiserver-76f77b778f-s464s\" (UID: \"f6c5ab25-b40a-4e91-b4e6-811ec8093a2a\") " pod="openshift-apiserver/apiserver-76f77b778f-s464s"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754958 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f6c5ab25-b40a-4e91-b4e6-811ec8093a2a-audit-dir\") pod \"apiserver-76f77b778f-s464s\" (UID: \"f6c5ab25-b40a-4e91-b4e6-811ec8093a2a\") " pod="openshift-apiserver/apiserver-76f77b778f-s464s"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754972 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9d6e7ed7-868d-4e75-9d22-7f38d441aadf-audit-dir\") pod \"apiserver-7bbb656c7d-579zx\" (UID: \"9d6e7ed7-868d-4e75-9d22-7f38d441aadf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-579zx"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.754989 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f6c5ab25-b40a-4e91-b4e6-811ec8093a2a-audit\") pod \"apiserver-76f77b778f-s464s\" (UID: \"f6c5ab25-b40a-4e91-b4e6-811ec8093a2a\") " pod="openshift-apiserver/apiserver-76f77b778f-s464s"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.755005 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f81c1252-cad4-4b23-8b84-c5385c96641c-metrics-tls\") pod \"dns-operator-744455d44c-2gktq\" (UID: \"f81c1252-cad4-4b23-8b84-c5385c96641c\") " pod="openshift-dns-operator/dns-operator-744455d44c-2gktq"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.755022 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs5lk\" (UniqueName: \"kubernetes.io/projected/70fceb62-f510-491f-a04c-0a2efd5439f7-kube-api-access-cs5lk\") pod \"machine-api-operator-5694c8668f-rxw56\" (UID: \"70fceb62-f510-491f-a04c-0a2efd5439f7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rxw56"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.755037 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.755055 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5863402f-d384-4df7-96b5-a3ae67599f4c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-84mb2\" (UID: \"5863402f-d384-4df7-96b5-a3ae67599f4c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84mb2"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.755090 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d9790481-730e-4e06-a338-bd615b4039e2-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4j5ws\" (UID: \"d9790481-730e-4e06-a338-bd615b4039e2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4j5ws"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.755107 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be182c78-fa2c-49ab-9ec4-698854f3ca51-config\") pod \"route-controller-manager-6576b87f9c-xjzv8\" (UID: \"be182c78-fa2c-49ab-9ec4-698854f3ca51\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xjzv8"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.755123 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9790481-730e-4e06-a338-bd615b4039e2-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4j5ws\" (UID: \"d9790481-730e-4e06-a338-bd615b4039e2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4j5ws"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.755139 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/58504546-67ad-4e0d-88ea-53fcf0684659-proxy-tls\") pod \"machine-config-controller-84d6567774-pptdb\" (UID: \"58504546-67ad-4e0d-88ea-53fcf0684659\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pptdb"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.755157 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6c5ab25-b40a-4e91-b4e6-811ec8093a2a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-s464s\" (UID: \"f6c5ab25-b40a-4e91-b4e6-811ec8093a2a\") " pod="openshift-apiserver/apiserver-76f77b778f-s464s"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.755172 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be182c78-fa2c-49ab-9ec4-698854f3ca51-serving-cert\") pod \"route-controller-manager-6576b87f9c-xjzv8\" (UID: \"be182c78-fa2c-49ab-9ec4-698854f3ca51\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xjzv8"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.755187 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgssd\" (UniqueName: \"kubernetes.io/projected/435efe4c-197a-43a2-9033-8dc57e98c006-kube-api-access-mgssd\") pod \"machine-config-operator-74547568cd-jjhdq\" (UID: \"435efe4c-197a-43a2-9033-8dc57e98c006\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jjhdq"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.755203 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.755220 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5097339d-dd80-4346-940d-097455cd8579-config\") pod \"console-operator-58897d9998-7mw6t\" (UID: \"5097339d-dd80-4346-940d-097455cd8579\") " pod="openshift-console-operator/console-operator-58897d9998-7mw6t"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.755234 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a72263e-c92a-4d11-9751-aa4240676a0e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vn6v2\" (UID: \"5a72263e-c92a-4d11-9751-aa4240676a0e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vn6v2"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.755253 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48x5t\" (UniqueName: \"kubernetes.io/projected/e4493714-3270-4b3b-8b07-3d9faa92b110-kube-api-access-48x5t\") pod \"cluster-samples-operator-665b6dd947-rxt2d\" (UID: \"e4493714-3270-4b3b-8b07-3d9faa92b110\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rxt2d"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.755270 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.755286 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.755301 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.755319 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f6c5ab25-b40a-4e91-b4e6-811ec8093a2a-encryption-config\") pod \"apiserver-76f77b778f-s464s\" (UID: \"f6c5ab25-b40a-4e91-b4e6-811ec8093a2a\") " pod="openshift-apiserver/apiserver-76f77b778f-s464s"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.755335 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.755352 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.755471 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9fa024ca-53bd-4aeb-a216-26ed6044cf24-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cw7tb\" (UID: \"9fa024ca-53bd-4aeb-a216-26ed6044cf24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cw7tb"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.756371 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f6c5ab25-b40a-4e91-b4e6-811ec8093a2a-node-pullsecrets\") pod \"apiserver-76f77b778f-s464s\" (UID: \"f6c5ab25-b40a-4e91-b4e6-811ec8093a2a\") " pod="openshift-apiserver/apiserver-76f77b778f-s464s"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.757327 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f6c5ab25-b40a-4e91-b4e6-811ec8093a2a-audit-dir\") pod \"apiserver-76f77b778f-s464s\" (UID: \"f6c5ab25-b40a-4e91-b4e6-811ec8093a2a\") " pod="openshift-apiserver/apiserver-76f77b778f-s464s"
Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.757363 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9d6e7ed7-868d-4e75-9d22-7f38d441aadf-audit-dir\") pod \"apiserver-7bbb656c7d-579zx\" (UID: \"9d6e7ed7-868d-4e75-9d22-7f38d441aadf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-579zx"
Feb 17 16:05:29 crc kubenswrapper[4874]:
I0217 16:05:29.758272 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f6c5ab25-b40a-4e91-b4e6-811ec8093a2a-etcd-client\") pod \"apiserver-76f77b778f-s464s\" (UID: \"f6c5ab25-b40a-4e91-b4e6-811ec8093a2a\") " pod="openshift-apiserver/apiserver-76f77b778f-s464s" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.758327 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qnm5s"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.758350 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-pptdb"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.758924 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5c2bc1be-9874-4d6c-b887-4a658d99a909-etcd-ca\") pod \"etcd-operator-b45778765-v9tn7\" (UID: \"5c2bc1be-9874-4d6c-b887-4a658d99a909\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v9tn7" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.759540 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d86c736d-5bee-4763-ac11-c9a2d4bce6d4-config\") pod \"machine-approver-56656f9798-zrr5t\" (UID: \"d86c736d-5bee-4763-ac11-c9a2d4bce6d4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zrr5t" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.759556 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d86c736d-5bee-4763-ac11-c9a2d4bce6d4-machine-approver-tls\") pod \"machine-approver-56656f9798-zrr5t\" (UID: \"d86c736d-5bee-4763-ac11-c9a2d4bce6d4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zrr5t" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 
16:05:29.759859 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f6c5ab25-b40a-4e91-b4e6-811ec8093a2a-audit\") pod \"apiserver-76f77b778f-s464s\" (UID: \"f6c5ab25-b40a-4e91-b4e6-811ec8093a2a\") " pod="openshift-apiserver/apiserver-76f77b778f-s464s" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.759957 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6c5ab25-b40a-4e91-b4e6-811ec8093a2a-config\") pod \"apiserver-76f77b778f-s464s\" (UID: \"f6c5ab25-b40a-4e91-b4e6-811ec8093a2a\") " pod="openshift-apiserver/apiserver-76f77b778f-s464s" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.760294 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9fa024ca-53bd-4aeb-a216-26ed6044cf24-client-ca\") pod \"controller-manager-879f6c89f-cw7tb\" (UID: \"9fa024ca-53bd-4aeb-a216-26ed6044cf24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cw7tb" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.760495 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9d6e7ed7-868d-4e75-9d22-7f38d441aadf-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-579zx\" (UID: \"9d6e7ed7-868d-4e75-9d22-7f38d441aadf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-579zx" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.760638 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6c5ab25-b40a-4e91-b4e6-811ec8093a2a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-s464s\" (UID: \"f6c5ab25-b40a-4e91-b4e6-811ec8093a2a\") " pod="openshift-apiserver/apiserver-76f77b778f-s464s" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.760783 4874 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9d6e7ed7-868d-4e75-9d22-7f38d441aadf-audit-policies\") pod \"apiserver-7bbb656c7d-579zx\" (UID: \"9d6e7ed7-868d-4e75-9d22-7f38d441aadf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-579zx" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.760545 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5c2bc1be-9874-4d6c-b887-4a658d99a909-etcd-service-ca\") pod \"etcd-operator-b45778765-v9tn7\" (UID: \"5c2bc1be-9874-4d6c-b887-4a658d99a909\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v9tn7" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.761399 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fa024ca-53bd-4aeb-a216-26ed6044cf24-serving-cert\") pod \"controller-manager-879f6c89f-cw7tb\" (UID: \"9fa024ca-53bd-4aeb-a216-26ed6044cf24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cw7tb" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.761707 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stl9h"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.762158 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e4493714-3270-4b3b-8b07-3d9faa92b110-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rxt2d\" (UID: \"e4493714-3270-4b3b-8b07-3d9faa92b110\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rxt2d" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.762275 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/f6c5ab25-b40a-4e91-b4e6-811ec8093a2a-etcd-serving-ca\") pod \"apiserver-76f77b778f-s464s\" (UID: \"f6c5ab25-b40a-4e91-b4e6-811ec8093a2a\") " pod="openshift-apiserver/apiserver-76f77b778f-s464s" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.762323 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c2bc1be-9874-4d6c-b887-4a658d99a909-config\") pod \"etcd-operator-b45778765-v9tn7\" (UID: \"5c2bc1be-9874-4d6c-b887-4a658d99a909\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v9tn7" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.763100 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be182c78-fa2c-49ab-9ec4-698854f3ca51-serving-cert\") pod \"route-controller-manager-6576b87f9c-xjzv8\" (UID: \"be182c78-fa2c-49ab-9ec4-698854f3ca51\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xjzv8" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.763140 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-6g2fs"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.763515 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9d6e7ed7-868d-4e75-9d22-7f38d441aadf-encryption-config\") pod \"apiserver-7bbb656c7d-579zx\" (UID: \"9d6e7ed7-868d-4e75-9d22-7f38d441aadf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-579zx" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.763691 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9d6e7ed7-868d-4e75-9d22-7f38d441aadf-etcd-client\") pod \"apiserver-7bbb656c7d-579zx\" (UID: \"9d6e7ed7-868d-4e75-9d22-7f38d441aadf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-579zx" Feb 17 
16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.764237 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d6e7ed7-868d-4e75-9d22-7f38d441aadf-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-579zx\" (UID: \"9d6e7ed7-868d-4e75-9d22-7f38d441aadf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-579zx" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.764388 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fa024ca-53bd-4aeb-a216-26ed6044cf24-config\") pod \"controller-manager-879f6c89f-cw7tb\" (UID: \"9fa024ca-53bd-4aeb-a216-26ed6044cf24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cw7tb" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.764380 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f6c5ab25-b40a-4e91-b4e6-811ec8093a2a-image-import-ca\") pod \"apiserver-76f77b778f-s464s\" (UID: \"f6c5ab25-b40a-4e91-b4e6-811ec8093a2a\") " pod="openshift-apiserver/apiserver-76f77b778f-s464s" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.764415 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vxvz6"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.765362 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f6c5ab25-b40a-4e91-b4e6-811ec8093a2a-encryption-config\") pod \"apiserver-76f77b778f-s464s\" (UID: \"f6c5ab25-b40a-4e91-b4e6-811ec8093a2a\") " pod="openshift-apiserver/apiserver-76f77b778f-s464s" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.765658 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be182c78-fa2c-49ab-9ec4-698854f3ca51-config\") pod 
\"route-controller-manager-6576b87f9c-xjzv8\" (UID: \"be182c78-fa2c-49ab-9ec4-698854f3ca51\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xjzv8" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.765702 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hk8nz"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.766169 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be182c78-fa2c-49ab-9ec4-698854f3ca51-client-ca\") pod \"route-controller-manager-6576b87f9c-xjzv8\" (UID: \"be182c78-fa2c-49ab-9ec4-698854f3ca51\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xjzv8" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.766827 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d6e7ed7-868d-4e75-9d22-7f38d441aadf-serving-cert\") pod \"apiserver-7bbb656c7d-579zx\" (UID: \"9d6e7ed7-868d-4e75-9d22-7f38d441aadf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-579zx" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.766932 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d9790481-730e-4e06-a338-bd615b4039e2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4j5ws\" (UID: \"d9790481-730e-4e06-a338-bd615b4039e2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4j5ws" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.767612 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6c5ab25-b40a-4e91-b4e6-811ec8093a2a-serving-cert\") pod \"apiserver-76f77b778f-s464s\" (UID: \"f6c5ab25-b40a-4e91-b4e6-811ec8093a2a\") " 
pod="openshift-apiserver/apiserver-76f77b778f-s464s" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.767661 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c2bc1be-9874-4d6c-b887-4a658d99a909-serving-cert\") pod \"etcd-operator-b45778765-v9tn7\" (UID: \"5c2bc1be-9874-4d6c-b887-4a658d99a909\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v9tn7" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.769156 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-k7w57"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.773467 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9790481-730e-4e06-a338-bd615b4039e2-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4j5ws\" (UID: \"d9790481-730e-4e06-a338-bd615b4039e2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4j5ws" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.773537 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-pm6wc"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.773941 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-8w5fg"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.774116 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5c2bc1be-9874-4d6c-b887-4a658d99a909-etcd-client\") pod \"etcd-operator-b45778765-v9tn7\" (UID: \"5c2bc1be-9874-4d6c-b887-4a658d99a909\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v9tn7" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.775128 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 17 16:05:29 
crc kubenswrapper[4874]: I0217 16:05:29.777337 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bcj8r"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.779004 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-jt8g9"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.782770 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d7k8j"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.786466 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522400-b59b9"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.788210 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-x5p5n"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.789112 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-x5p5n" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.789581 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-n8bpc"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.790433 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-n8bpc" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.790960 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-n8bpc"] Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.793388 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.813876 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.833794 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.853584 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856149 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c21c3a4-9603-4cd0-a5e3-263aa51d678d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2w9mt\" (UID: \"6c21c3a4-9603-4cd0-a5e3-263aa51d678d\") " pod="openshift-marketplace/marketplace-operator-79b997595-2w9mt" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856181 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cfccd2a3-037d-4b17-a269-952847ad533a-oauth-serving-cert\") pod \"console-f9d7485db-6wpw5\" (UID: \"cfccd2a3-037d-4b17-a269-952847ad533a\") " pod="openshift-console/console-f9d7485db-6wpw5" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856201 4874 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/70fceb62-f510-491f-a04c-0a2efd5439f7-images\") pod \"machine-api-operator-5694c8668f-rxw56\" (UID: \"70fceb62-f510-491f-a04c-0a2efd5439f7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rxw56" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856218 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856236 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8ee6ec56-3fff-4eb3-855a-5e597e4bbba3-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hk8nz\" (UID: \"8ee6ec56-3fff-4eb3-855a-5e597e4bbba3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hk8nz" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856258 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d79a8a2-fadf-4c52-b67b-3091a20cace5-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-d7k8j\" (UID: \"1d79a8a2-fadf-4c52-b67b-3091a20cace5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d7k8j" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856287 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cfccd2a3-037d-4b17-a269-952847ad533a-service-ca\") pod \"console-f9d7485db-6wpw5\" (UID: 
\"cfccd2a3-037d-4b17-a269-952847ad533a\") " pod="openshift-console/console-f9d7485db-6wpw5" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856302 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/600c5b21-a46e-4644-8f1d-55fa0b4d06dd-metrics-tls\") pod \"dns-default-jt8g9\" (UID: \"600c5b21-a46e-4644-8f1d-55fa0b4d06dd\") " pod="openshift-dns/dns-default-jt8g9" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856318 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6c21c3a4-9603-4cd0-a5e3-263aa51d678d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2w9mt\" (UID: \"6c21c3a4-9603-4cd0-a5e3-263aa51d678d\") " pod="openshift-marketplace/marketplace-operator-79b997595-2w9mt" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856335 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856351 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb99r\" (UniqueName: \"kubernetes.io/projected/59309c0f-86d9-4425-8752-5e57fbbf9827-kube-api-access-jb99r\") pod \"openshift-controller-manager-operator-756b6f6bc6-dpvgb\" (UID: \"59309c0f-86d9-4425-8752-5e57fbbf9827\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dpvgb" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856368 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dbdeec10-9456-46f3-a08b-6fe084f5865e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-64tmb\" (UID: \"dbdeec10-9456-46f3-a08b-6fe084f5865e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-64tmb" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856383 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/4f63eb58-f30b-41f4-b569-a7906802fcb4-signing-key\") pod \"service-ca-9c57cc56f-k7w57\" (UID: \"4f63eb58-f30b-41f4-b569-a7906802fcb4\") " pod="openshift-service-ca/service-ca-9c57cc56f-k7w57" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856402 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gskds\" (UniqueName: \"kubernetes.io/projected/6c21c3a4-9603-4cd0-a5e3-263aa51d678d-kube-api-access-gskds\") pod \"marketplace-operator-79b997595-2w9mt\" (UID: \"6c21c3a4-9603-4cd0-a5e3-263aa51d678d\") " pod="openshift-marketplace/marketplace-operator-79b997595-2w9mt" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856417 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlv4x\" (UniqueName: \"kubernetes.io/projected/1d79a8a2-fadf-4c52-b67b-3091a20cace5-kube-api-access-wlv4x\") pod \"package-server-manager-789f6589d5-d7k8j\" (UID: \"1d79a8a2-fadf-4c52-b67b-3091a20cace5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d7k8j" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856436 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/639cdaa5-0dc8-4709-80c7-37d8c71e6eda-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hkn4z\" (UID: 
\"639cdaa5-0dc8-4709-80c7-37d8c71e6eda\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hkn4z" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856458 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6rjq\" (UniqueName: \"kubernetes.io/projected/f81c1252-cad4-4b23-8b84-c5385c96641c-kube-api-access-n6rjq\") pod \"dns-operator-744455d44c-2gktq\" (UID: \"f81c1252-cad4-4b23-8b84-c5385c96641c\") " pod="openshift-dns-operator/dns-operator-744455d44c-2gktq" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856484 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjmn8\" (UniqueName: \"kubernetes.io/projected/52e48cb6-3564-41f7-8030-f54482605065-kube-api-access-mjmn8\") pod \"control-plane-machine-set-operator-78cbb6b69f-mfmbh\" (UID: \"52e48cb6-3564-41f7-8030-f54482605065\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mfmbh" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856508 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b2a3365-4901-45b8-b528-0961dad4cf66-secret-volume\") pod \"collect-profiles-29522400-b59b9\" (UID: \"3b2a3365-4901-45b8-b528-0961dad4cf66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-b59b9" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856533 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfpkg\" (UniqueName: \"kubernetes.io/projected/45e5b8ae-4eef-4449-b844-574c3b737ad4-kube-api-access-mfpkg\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856550 4874 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a72263e-c92a-4d11-9751-aa4240676a0e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vn6v2\" (UID: \"5a72263e-c92a-4d11-9751-aa4240676a0e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vn6v2" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856565 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/24203bde-9d97-4574-a15e-56bd86395bf4-apiservice-cert\") pod \"packageserver-d55dfcdfc-bcj8r\" (UID: \"24203bde-9d97-4574-a15e-56bd86395bf4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bcj8r" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856582 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5863402f-d384-4df7-96b5-a3ae67599f4c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-84mb2\" (UID: \"5863402f-d384-4df7-96b5-a3ae67599f4c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84mb2" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856599 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65j8s\" (UniqueName: \"kubernetes.io/projected/ea9ddc77-8d24-4929-96c7-238e58e40bbe-kube-api-access-65j8s\") pod \"ingress-canary-pm6wc\" (UID: \"ea9ddc77-8d24-4929-96c7-238e58e40bbe\") " pod="openshift-ingress-canary/ingress-canary-pm6wc" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856616 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/639cdaa5-0dc8-4709-80c7-37d8c71e6eda-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hkn4z\" (UID: 
\"639cdaa5-0dc8-4709-80c7-37d8c71e6eda\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hkn4z" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856662 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/639cdaa5-0dc8-4709-80c7-37d8c71e6eda-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hkn4z\" (UID: \"639cdaa5-0dc8-4709-80c7-37d8c71e6eda\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hkn4z" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856679 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8b7t\" (UniqueName: \"kubernetes.io/projected/58504546-67ad-4e0d-88ea-53fcf0684659-kube-api-access-s8b7t\") pod \"machine-config-controller-84d6567774-pptdb\" (UID: \"58504546-67ad-4e0d-88ea-53fcf0684659\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pptdb" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856695 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h49z\" (UniqueName: \"kubernetes.io/projected/ea2cbe06-9c98-4418-9122-a98dbae2460d-kube-api-access-4h49z\") pod \"catalog-operator-68c6474976-qnm5s\" (UID: \"ea2cbe06-9c98-4418-9122-a98dbae2460d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qnm5s" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856709 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/4f63eb58-f30b-41f4-b569-a7906802fcb4-signing-cabundle\") pod \"service-ca-9c57cc56f-k7w57\" (UID: \"4f63eb58-f30b-41f4-b569-a7906802fcb4\") " pod="openshift-service-ca/service-ca-9c57cc56f-k7w57" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856725 4874 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac5f5138-7075-4b42-b2f7-7eb4b7c18fea-service-ca-bundle\") pod \"router-default-5444994796-pmtgc\" (UID: \"ac5f5138-7075-4b42-b2f7-7eb4b7c18fea\") " pod="openshift-ingress/router-default-5444994796-pmtgc" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856741 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn8wr\" (UniqueName: \"kubernetes.io/projected/ea652a36-9ddd-4c88-8e96-1f66c3ef0edf-kube-api-access-fn8wr\") pod \"kube-storage-version-migrator-operator-b67b599dd-stl9h\" (UID: \"ea652a36-9ddd-4c88-8e96-1f66c3ef0edf\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stl9h" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856758 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/45e5b8ae-4eef-4449-b844-574c3b737ad4-audit-policies\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856788 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16120c4a-9a38-4d39-b5ed-784978d4521f-config\") pod \"kube-apiserver-operator-766d6c64bb-jbd4d\" (UID: \"16120c4a-9a38-4d39-b5ed-784978d4521f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jbd4d" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856804 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f81c1252-cad4-4b23-8b84-c5385c96641c-metrics-tls\") pod \"dns-operator-744455d44c-2gktq\" (UID: 
\"f81c1252-cad4-4b23-8b84-c5385c96641c\") " pod="openshift-dns-operator/dns-operator-744455d44c-2gktq" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856820 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk7p7\" (UniqueName: \"kubernetes.io/projected/600c5b21-a46e-4644-8f1d-55fa0b4d06dd-kube-api-access-bk7p7\") pod \"dns-default-jt8g9\" (UID: \"600c5b21-a46e-4644-8f1d-55fa0b4d06dd\") " pod="openshift-dns/dns-default-jt8g9" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856836 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs5lk\" (UniqueName: \"kubernetes.io/projected/70fceb62-f510-491f-a04c-0a2efd5439f7-kube-api-access-cs5lk\") pod \"machine-api-operator-5694c8668f-rxw56\" (UID: \"70fceb62-f510-491f-a04c-0a2efd5439f7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rxw56" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856852 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856868 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5863402f-d384-4df7-96b5-a3ae67599f4c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-84mb2\" (UID: \"5863402f-d384-4df7-96b5-a3ae67599f4c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84mb2" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856883 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ea2cbe06-9c98-4418-9122-a98dbae2460d-srv-cert\") pod \"catalog-operator-68c6474976-qnm5s\" (UID: \"ea2cbe06-9c98-4418-9122-a98dbae2460d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qnm5s" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856898 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38151ea5-4428-4a24-95ce-a02e586a83ce-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ggrcz\" (UID: \"38151ea5-4428-4a24-95ce-a02e586a83ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ggrcz" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856917 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5097339d-dd80-4346-940d-097455cd8579-config\") pod \"console-operator-58897d9998-7mw6t\" (UID: \"5097339d-dd80-4346-940d-097455cd8579\") " pod="openshift-console-operator/console-operator-58897d9998-7mw6t" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856931 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfnp6\" (UniqueName: \"kubernetes.io/projected/3b2a3365-4901-45b8-b528-0961dad4cf66-kube-api-access-qfnp6\") pod \"collect-profiles-29522400-b59b9\" (UID: \"3b2a3365-4901-45b8-b528-0961dad4cf66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-b59b9" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856947 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: 
\"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856963 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/24203bde-9d97-4574-a15e-56bd86395bf4-tmpfs\") pod \"packageserver-d55dfcdfc-bcj8r\" (UID: \"24203bde-9d97-4574-a15e-56bd86395bf4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bcj8r" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856984 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.856992 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/70fceb62-f510-491f-a04c-0a2efd5439f7-images\") pod \"machine-api-operator-5694c8668f-rxw56\" (UID: \"70fceb62-f510-491f-a04c-0a2efd5439f7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rxw56" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857000 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/24203bde-9d97-4574-a15e-56bd86395bf4-webhook-cert\") pod \"packageserver-d55dfcdfc-bcj8r\" (UID: \"24203bde-9d97-4574-a15e-56bd86395bf4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bcj8r" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857014 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/38151ea5-4428-4a24-95ce-a02e586a83ce-serving-cert\") pod \"authentication-operator-69f744f599-ggrcz\" (UID: \"38151ea5-4428-4a24-95ce-a02e586a83ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ggrcz" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857032 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59309c0f-86d9-4425-8752-5e57fbbf9827-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dpvgb\" (UID: \"59309c0f-86d9-4425-8752-5e57fbbf9827\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dpvgb" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857051 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70fceb62-f510-491f-a04c-0a2efd5439f7-config\") pod \"machine-api-operator-5694c8668f-rxw56\" (UID: \"70fceb62-f510-491f-a04c-0a2efd5439f7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rxw56" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857065 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb25d\" (UniqueName: \"kubernetes.io/projected/5a72263e-c92a-4d11-9751-aa4240676a0e-kube-api-access-zb25d\") pod \"openshift-apiserver-operator-796bbdcf4f-vn6v2\" (UID: \"5a72263e-c92a-4d11-9751-aa4240676a0e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vn6v2" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857098 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq7pv\" (UniqueName: \"kubernetes.io/projected/24203bde-9d97-4574-a15e-56bd86395bf4-kube-api-access-pq7pv\") pod \"packageserver-d55dfcdfc-bcj8r\" (UID: \"24203bde-9d97-4574-a15e-56bd86395bf4\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bcj8r" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857114 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b2a3365-4901-45b8-b528-0961dad4cf66-config-volume\") pod \"collect-profiles-29522400-b59b9\" (UID: \"3b2a3365-4901-45b8-b528-0961dad4cf66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-b59b9" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857135 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cfccd2a3-037d-4b17-a269-952847ad533a-console-config\") pod \"console-f9d7485db-6wpw5\" (UID: \"cfccd2a3-037d-4b17-a269-952847ad533a\") " pod="openshift-console/console-f9d7485db-6wpw5" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857151 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjhb8\" (UniqueName: \"kubernetes.io/projected/38151ea5-4428-4a24-95ce-a02e586a83ce-kube-api-access-pjhb8\") pod \"authentication-operator-69f744f599-ggrcz\" (UID: \"38151ea5-4428-4a24-95ce-a02e586a83ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ggrcz" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857167 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac5f5138-7075-4b42-b2f7-7eb4b7c18fea-metrics-certs\") pod \"router-default-5444994796-pmtgc\" (UID: \"ac5f5138-7075-4b42-b2f7-7eb4b7c18fea\") " pod="openshift-ingress/router-default-5444994796-pmtgc" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857183 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ea652a36-9ddd-4c88-8e96-1f66c3ef0edf-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-stl9h\" (UID: \"ea652a36-9ddd-4c88-8e96-1f66c3ef0edf\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stl9h" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857200 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5097339d-dd80-4346-940d-097455cd8579-trusted-ca\") pod \"console-operator-58897d9998-7mw6t\" (UID: \"5097339d-dd80-4346-940d-097455cd8579\") " pod="openshift-console-operator/console-operator-58897d9998-7mw6t" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857216 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13ed2de5-5f56-4d15-8ded-3e5bd15b511a-config\") pod \"service-ca-operator-777779d784-8w5fg\" (UID: \"13ed2de5-5f56-4d15-8ded-3e5bd15b511a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8w5fg" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857231 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/16120c4a-9a38-4d39-b5ed-784978d4521f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-jbd4d\" (UID: \"16120c4a-9a38-4d39-b5ed-784978d4521f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jbd4d" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857246 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmjqp\" (UniqueName: \"kubernetes.io/projected/dbdeec10-9456-46f3-a08b-6fe084f5865e-kube-api-access-jmjqp\") pod \"multus-admission-controller-857f4d67dd-64tmb\" (UID: \"dbdeec10-9456-46f3-a08b-6fe084f5865e\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-64tmb" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857263 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cghk\" (UniqueName: \"kubernetes.io/projected/cfccd2a3-037d-4b17-a269-952847ad533a-kube-api-access-7cghk\") pod \"console-f9d7485db-6wpw5\" (UID: \"cfccd2a3-037d-4b17-a269-952847ad533a\") " pod="openshift-console/console-f9d7485db-6wpw5" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857289 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/45e5b8ae-4eef-4449-b844-574c3b737ad4-audit-dir\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857305 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6700af5a-0927-417d-a623-e5bf764df51b-serving-cert\") pod \"openshift-config-operator-7777fb866f-vxvz6\" (UID: \"6700af5a-0927-417d-a623-e5bf764df51b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vxvz6" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857329 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/435efe4c-197a-43a2-9033-8dc57e98c006-proxy-tls\") pod \"machine-config-operator-74547568cd-jjhdq\" (UID: \"435efe4c-197a-43a2-9033-8dc57e98c006\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jjhdq" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857345 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bxsj\" (UniqueName: 
\"kubernetes.io/projected/d985a553-61af-46e7-a559-16dd4629929c-kube-api-access-8bxsj\") pod \"migrator-59844c95c7-hpbgn\" (UID: \"d985a553-61af-46e7-a559-16dd4629929c\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hpbgn" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857366 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/58504546-67ad-4e0d-88ea-53fcf0684659-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-pptdb\" (UID: \"58504546-67ad-4e0d-88ea-53fcf0684659\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pptdb" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857387 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/70fceb62-f510-491f-a04c-0a2efd5439f7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-rxw56\" (UID: \"70fceb62-f510-491f-a04c-0a2efd5439f7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rxw56" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857405 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea652a36-9ddd-4c88-8e96-1f66c3ef0edf-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-stl9h\" (UID: \"ea652a36-9ddd-4c88-8e96-1f66c3ef0edf\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stl9h" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857405 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cfccd2a3-037d-4b17-a269-952847ad533a-oauth-serving-cert\") pod \"console-f9d7485db-6wpw5\" (UID: \"cfccd2a3-037d-4b17-a269-952847ad533a\") " pod="openshift-console/console-f9d7485db-6wpw5" Feb 
17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857420 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6kmh\" (UniqueName: \"kubernetes.io/projected/5097339d-dd80-4346-940d-097455cd8579-kube-api-access-b6kmh\") pod \"console-operator-58897d9998-7mw6t\" (UID: \"5097339d-dd80-4346-940d-097455cd8579\") " pod="openshift-console-operator/console-operator-58897d9998-7mw6t" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857502 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5863402f-d384-4df7-96b5-a3ae67599f4c-config\") pod \"kube-controller-manager-operator-78b949d7b-84mb2\" (UID: \"5863402f-d384-4df7-96b5-a3ae67599f4c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84mb2" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857541 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/600c5b21-a46e-4644-8f1d-55fa0b4d06dd-config-volume\") pod \"dns-default-jt8g9\" (UID: \"600c5b21-a46e-4644-8f1d-55fa0b4d06dd\") " pod="openshift-dns/dns-default-jt8g9" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857592 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldbx6\" (UniqueName: \"kubernetes.io/projected/4f63eb58-f30b-41f4-b569-a7906802fcb4-kube-api-access-ldbx6\") pod \"service-ca-9c57cc56f-k7w57\" (UID: \"4f63eb58-f30b-41f4-b569-a7906802fcb4\") " pod="openshift-service-ca/service-ca-9c57cc56f-k7w57" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857636 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqg92\" (UniqueName: \"kubernetes.io/projected/ac5f5138-7075-4b42-b2f7-7eb4b7c18fea-kube-api-access-gqg92\") pod 
\"router-default-5444994796-pmtgc\" (UID: \"ac5f5138-7075-4b42-b2f7-7eb4b7c18fea\") " pod="openshift-ingress/router-default-5444994796-pmtgc" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857667 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38151ea5-4428-4a24-95ce-a02e586a83ce-config\") pod \"authentication-operator-69f744f599-ggrcz\" (UID: \"38151ea5-4428-4a24-95ce-a02e586a83ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ggrcz" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857700 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ea2cbe06-9c98-4418-9122-a98dbae2460d-profile-collector-cert\") pod \"catalog-operator-68c6474976-qnm5s\" (UID: \"ea2cbe06-9c98-4418-9122-a98dbae2460d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qnm5s" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857735 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16120c4a-9a38-4d39-b5ed-784978d4521f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-jbd4d\" (UID: \"16120c4a-9a38-4d39-b5ed-784978d4521f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jbd4d" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857763 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a72263e-c92a-4d11-9751-aa4240676a0e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vn6v2\" (UID: \"5a72263e-c92a-4d11-9751-aa4240676a0e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vn6v2" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857766 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8ee6ec56-3fff-4eb3-855a-5e597e4bbba3-srv-cert\") pod \"olm-operator-6b444d44fb-hk8nz\" (UID: \"8ee6ec56-3fff-4eb3-855a-5e597e4bbba3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hk8nz" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857803 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/168d1b1d-27b6-4e4e-82b4-546836063edd-bound-sa-token\") pod \"ingress-operator-5b745b69d9-6g2fs\" (UID: \"168d1b1d-27b6-4e4e-82b4-546836063edd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6g2fs" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857823 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857839 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/435efe4c-197a-43a2-9033-8dc57e98c006-images\") pod \"machine-config-operator-74547568cd-jjhdq\" (UID: \"435efe4c-197a-43a2-9033-8dc57e98c006\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jjhdq" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857856 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmz2b\" (UniqueName: \"kubernetes.io/projected/13ed2de5-5f56-4d15-8ded-3e5bd15b511a-kube-api-access-pmz2b\") pod \"service-ca-operator-777779d784-8w5fg\" (UID: 
\"13ed2de5-5f56-4d15-8ded-3e5bd15b511a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8w5fg" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857872 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38151ea5-4428-4a24-95ce-a02e586a83ce-service-ca-bundle\") pod \"authentication-operator-69f744f599-ggrcz\" (UID: \"38151ea5-4428-4a24-95ce-a02e586a83ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ggrcz" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857891 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9llt\" (UniqueName: \"kubernetes.io/projected/8ee6ec56-3fff-4eb3-855a-5e597e4bbba3-kube-api-access-m9llt\") pod \"olm-operator-6b444d44fb-hk8nz\" (UID: \"8ee6ec56-3fff-4eb3-855a-5e597e4bbba3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hk8nz" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857916 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ac5f5138-7075-4b42-b2f7-7eb4b7c18fea-default-certificate\") pod \"router-default-5444994796-pmtgc\" (UID: \"ac5f5138-7075-4b42-b2f7-7eb4b7c18fea\") " pod="openshift-ingress/router-default-5444994796-pmtgc" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857938 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/58504546-67ad-4e0d-88ea-53fcf0684659-proxy-tls\") pod \"machine-config-controller-84d6567774-pptdb\" (UID: \"58504546-67ad-4e0d-88ea-53fcf0684659\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pptdb" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857957 4874 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-mgssd\" (UniqueName: \"kubernetes.io/projected/435efe4c-197a-43a2-9033-8dc57e98c006-kube-api-access-mgssd\") pod \"machine-config-operator-74547568cd-jjhdq\" (UID: \"435efe4c-197a-43a2-9033-8dc57e98c006\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jjhdq" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857973 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.857989 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a72263e-c92a-4d11-9751-aa4240676a0e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vn6v2\" (UID: \"5a72263e-c92a-4d11-9751-aa4240676a0e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vn6v2" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.858007 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.858024 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-router-certs\") pod 
\"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.858040 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.858123 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcwqh\" (UniqueName: \"kubernetes.io/projected/168d1b1d-27b6-4e4e-82b4-546836063edd-kube-api-access-mcwqh\") pod \"ingress-operator-5b745b69d9-6g2fs\" (UID: \"168d1b1d-27b6-4e4e-82b4-546836063edd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6g2fs" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.858668 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5863402f-d384-4df7-96b5-a3ae67599f4c-config\") pod \"kube-controller-manager-operator-78b949d7b-84mb2\" (UID: \"5863402f-d384-4df7-96b5-a3ae67599f4c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84mb2" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.858864 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cfccd2a3-037d-4b17-a269-952847ad533a-service-ca\") pod \"console-f9d7485db-6wpw5\" (UID: \"cfccd2a3-037d-4b17-a269-952847ad533a\") " pod="openshift-console/console-f9d7485db-6wpw5" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.859193 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/70fceb62-f510-491f-a04c-0a2efd5439f7-config\") pod \"machine-api-operator-5694c8668f-rxw56\" (UID: \"70fceb62-f510-491f-a04c-0a2efd5439f7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rxw56" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.860024 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.860339 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13ed2de5-5f56-4d15-8ded-3e5bd15b511a-serving-cert\") pod \"service-ca-operator-777779d784-8w5fg\" (UID: \"13ed2de5-5f56-4d15-8ded-3e5bd15b511a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8w5fg" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.860380 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ac5f5138-7075-4b42-b2f7-7eb4b7c18fea-stats-auth\") pod \"router-default-5444994796-pmtgc\" (UID: \"ac5f5138-7075-4b42-b2f7-7eb4b7c18fea\") " pod="openshift-ingress/router-default-5444994796-pmtgc" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.860406 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/435efe4c-197a-43a2-9033-8dc57e98c006-auth-proxy-config\") pod \"machine-config-operator-74547568cd-jjhdq\" (UID: \"435efe4c-197a-43a2-9033-8dc57e98c006\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jjhdq" Feb 17 16:05:29 crc kubenswrapper[4874]: 
I0217 16:05:29.860445 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/168d1b1d-27b6-4e4e-82b4-546836063edd-trusted-ca\") pod \"ingress-operator-5b745b69d9-6g2fs\" (UID: \"168d1b1d-27b6-4e4e-82b4-546836063edd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6g2fs" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.860461 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/168d1b1d-27b6-4e4e-82b4-546836063edd-metrics-tls\") pod \"ingress-operator-5b745b69d9-6g2fs\" (UID: \"168d1b1d-27b6-4e4e-82b4-546836063edd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6g2fs" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.860481 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cfccd2a3-037d-4b17-a269-952847ad533a-console-serving-cert\") pod \"console-f9d7485db-6wpw5\" (UID: \"cfccd2a3-037d-4b17-a269-952847ad533a\") " pod="openshift-console/console-f9d7485db-6wpw5" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.860499 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5097339d-dd80-4346-940d-097455cd8579-serving-cert\") pod \"console-operator-58897d9998-7mw6t\" (UID: \"5097339d-dd80-4346-940d-097455cd8579\") " pod="openshift-console-operator/console-operator-58897d9998-7mw6t" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.860516 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cfccd2a3-037d-4b17-a269-952847ad533a-trusted-ca-bundle\") pod \"console-f9d7485db-6wpw5\" (UID: \"cfccd2a3-037d-4b17-a269-952847ad533a\") " 
pod="openshift-console/console-f9d7485db-6wpw5" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.860533 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6700af5a-0927-417d-a623-e5bf764df51b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vxvz6\" (UID: \"6700af5a-0927-417d-a623-e5bf764df51b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vxvz6" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.860560 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59309c0f-86d9-4425-8752-5e57fbbf9827-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dpvgb\" (UID: \"59309c0f-86d9-4425-8752-5e57fbbf9827\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dpvgb" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.860577 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ea9ddc77-8d24-4929-96c7-238e58e40bbe-cert\") pod \"ingress-canary-pm6wc\" (UID: \"ea9ddc77-8d24-4929-96c7-238e58e40bbe\") " pod="openshift-ingress-canary/ingress-canary-pm6wc" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.860593 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/52e48cb6-3564-41f7-8030-f54482605065-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mfmbh\" (UID: \"52e48cb6-3564-41f7-8030-f54482605065\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mfmbh" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.860611 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfcv6\" (UniqueName: \"kubernetes.io/projected/6700af5a-0927-417d-a623-e5bf764df51b-kube-api-access-pfcv6\") pod \"openshift-config-operator-7777fb866f-vxvz6\" (UID: \"6700af5a-0927-417d-a623-e5bf764df51b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vxvz6" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.860660 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.860678 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cfccd2a3-037d-4b17-a269-952847ad533a-console-oauth-config\") pod \"console-f9d7485db-6wpw5\" (UID: \"cfccd2a3-037d-4b17-a269-952847ad533a\") " pod="openshift-console/console-f9d7485db-6wpw5" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.860901 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.860956 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cfccd2a3-037d-4b17-a269-952847ad533a-console-config\") pod \"console-f9d7485db-6wpw5\" (UID: \"cfccd2a3-037d-4b17-a269-952847ad533a\") " 
pod="openshift-console/console-f9d7485db-6wpw5" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.861740 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/45e5b8ae-4eef-4449-b844-574c3b737ad4-audit-dir\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.861864 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea652a36-9ddd-4c88-8e96-1f66c3ef0edf-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-stl9h\" (UID: \"ea652a36-9ddd-4c88-8e96-1f66c3ef0edf\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stl9h" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.862515 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/45e5b8ae-4eef-4449-b844-574c3b737ad4-audit-policies\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.862752 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/435efe4c-197a-43a2-9033-8dc57e98c006-auth-proxy-config\") pod \"machine-config-operator-74547568cd-jjhdq\" (UID: \"435efe4c-197a-43a2-9033-8dc57e98c006\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jjhdq" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.863517 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59309c0f-86d9-4425-8752-5e57fbbf9827-config\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-dpvgb\" (UID: \"59309c0f-86d9-4425-8752-5e57fbbf9827\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dpvgb" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.863987 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cfccd2a3-037d-4b17-a269-952847ad533a-trusted-ca-bundle\") pod \"console-f9d7485db-6wpw5\" (UID: \"cfccd2a3-037d-4b17-a269-952847ad533a\") " pod="openshift-console/console-f9d7485db-6wpw5" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.864166 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f81c1252-cad4-4b23-8b84-c5385c96641c-metrics-tls\") pod \"dns-operator-744455d44c-2gktq\" (UID: \"f81c1252-cad4-4b23-8b84-c5385c96641c\") " pod="openshift-dns-operator/dns-operator-744455d44c-2gktq" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.864246 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cfccd2a3-037d-4b17-a269-952847ad533a-console-oauth-config\") pod \"console-f9d7485db-6wpw5\" (UID: \"cfccd2a3-037d-4b17-a269-952847ad533a\") " pod="openshift-console/console-f9d7485db-6wpw5" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.864377 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/58504546-67ad-4e0d-88ea-53fcf0684659-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-pptdb\" (UID: \"58504546-67ad-4e0d-88ea-53fcf0684659\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pptdb" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.864881 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.865390 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5097339d-dd80-4346-940d-097455cd8579-config\") pod \"console-operator-58897d9998-7mw6t\" (UID: \"5097339d-dd80-4346-940d-097455cd8579\") " pod="openshift-console-operator/console-operator-58897d9998-7mw6t" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.865509 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5097339d-dd80-4346-940d-097455cd8579-trusted-ca\") pod \"console-operator-58897d9998-7mw6t\" (UID: \"5097339d-dd80-4346-940d-097455cd8579\") " pod="openshift-console-operator/console-operator-58897d9998-7mw6t" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.865971 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.866290 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.866383 4874 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea652a36-9ddd-4c88-8e96-1f66c3ef0edf-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-stl9h\" (UID: \"ea652a36-9ddd-4c88-8e96-1f66c3ef0edf\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stl9h" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.866475 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/70fceb62-f510-491f-a04c-0a2efd5439f7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-rxw56\" (UID: \"70fceb62-f510-491f-a04c-0a2efd5439f7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rxw56" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.866708 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59309c0f-86d9-4425-8752-5e57fbbf9827-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dpvgb\" (UID: \"59309c0f-86d9-4425-8752-5e57fbbf9827\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dpvgb" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.866960 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cfccd2a3-037d-4b17-a269-952847ad533a-console-serving-cert\") pod \"console-f9d7485db-6wpw5\" (UID: \"cfccd2a3-037d-4b17-a269-952847ad533a\") " pod="openshift-console/console-f9d7485db-6wpw5" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.867339 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a72263e-c92a-4d11-9751-aa4240676a0e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vn6v2\" (UID: \"5a72263e-c92a-4d11-9751-aa4240676a0e\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vn6v2" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.867398 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5863402f-d384-4df7-96b5-a3ae67599f4c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-84mb2\" (UID: \"5863402f-d384-4df7-96b5-a3ae67599f4c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84mb2" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.868975 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.869254 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.869407 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.869701 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.869772 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/58504546-67ad-4e0d-88ea-53fcf0684659-proxy-tls\") pod \"machine-config-controller-84d6567774-pptdb\" (UID: \"58504546-67ad-4e0d-88ea-53fcf0684659\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pptdb" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.869995 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.870154 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5097339d-dd80-4346-940d-097455cd8579-serving-cert\") pod \"console-operator-58897d9998-7mw6t\" (UID: \"5097339d-dd80-4346-940d-097455cd8579\") " pod="openshift-console-operator/console-operator-58897d9998-7mw6t" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.874502 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.876643 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.893701 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.898205 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/639cdaa5-0dc8-4709-80c7-37d8c71e6eda-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hkn4z\" (UID: \"639cdaa5-0dc8-4709-80c7-37d8c71e6eda\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hkn4z" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.914871 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.923897 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/639cdaa5-0dc8-4709-80c7-37d8c71e6eda-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hkn4z\" (UID: \"639cdaa5-0dc8-4709-80c7-37d8c71e6eda\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hkn4z" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.933727 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.954703 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 17 16:05:29 
crc kubenswrapper[4874]: I0217 16:05:29.961323 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8ee6ec56-3fff-4eb3-855a-5e597e4bbba3-srv-cert\") pod \"olm-operator-6b444d44fb-hk8nz\" (UID: \"8ee6ec56-3fff-4eb3-855a-5e597e4bbba3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hk8nz" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.961508 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/168d1b1d-27b6-4e4e-82b4-546836063edd-bound-sa-token\") pod \"ingress-operator-5b745b69d9-6g2fs\" (UID: \"168d1b1d-27b6-4e4e-82b4-546836063edd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6g2fs" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.961672 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmz2b\" (UniqueName: \"kubernetes.io/projected/13ed2de5-5f56-4d15-8ded-3e5bd15b511a-kube-api-access-pmz2b\") pod \"service-ca-operator-777779d784-8w5fg\" (UID: \"13ed2de5-5f56-4d15-8ded-3e5bd15b511a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8w5fg" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.961802 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38151ea5-4428-4a24-95ce-a02e586a83ce-service-ca-bundle\") pod \"authentication-operator-69f744f599-ggrcz\" (UID: \"38151ea5-4428-4a24-95ce-a02e586a83ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ggrcz" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.961952 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9llt\" (UniqueName: \"kubernetes.io/projected/8ee6ec56-3fff-4eb3-855a-5e597e4bbba3-kube-api-access-m9llt\") pod \"olm-operator-6b444d44fb-hk8nz\" (UID: 
\"8ee6ec56-3fff-4eb3-855a-5e597e4bbba3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hk8nz" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.962092 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ac5f5138-7075-4b42-b2f7-7eb4b7c18fea-default-certificate\") pod \"router-default-5444994796-pmtgc\" (UID: \"ac5f5138-7075-4b42-b2f7-7eb4b7c18fea\") " pod="openshift-ingress/router-default-5444994796-pmtgc" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.962257 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcwqh\" (UniqueName: \"kubernetes.io/projected/168d1b1d-27b6-4e4e-82b4-546836063edd-kube-api-access-mcwqh\") pod \"ingress-operator-5b745b69d9-6g2fs\" (UID: \"168d1b1d-27b6-4e4e-82b4-546836063edd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6g2fs" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.962544 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13ed2de5-5f56-4d15-8ded-3e5bd15b511a-serving-cert\") pod \"service-ca-operator-777779d784-8w5fg\" (UID: \"13ed2de5-5f56-4d15-8ded-3e5bd15b511a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8w5fg" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.962757 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ac5f5138-7075-4b42-b2f7-7eb4b7c18fea-stats-auth\") pod \"router-default-5444994796-pmtgc\" (UID: \"ac5f5138-7075-4b42-b2f7-7eb4b7c18fea\") " pod="openshift-ingress/router-default-5444994796-pmtgc" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.962970 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/168d1b1d-27b6-4e4e-82b4-546836063edd-trusted-ca\") pod \"ingress-operator-5b745b69d9-6g2fs\" (UID: \"168d1b1d-27b6-4e4e-82b4-546836063edd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6g2fs" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.963265 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/168d1b1d-27b6-4e4e-82b4-546836063edd-metrics-tls\") pod \"ingress-operator-5b745b69d9-6g2fs\" (UID: \"168d1b1d-27b6-4e4e-82b4-546836063edd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6g2fs" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.963970 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6700af5a-0927-417d-a623-e5bf764df51b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vxvz6\" (UID: \"6700af5a-0927-417d-a623-e5bf764df51b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vxvz6" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.963475 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6700af5a-0927-417d-a623-e5bf764df51b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vxvz6\" (UID: \"6700af5a-0927-417d-a623-e5bf764df51b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vxvz6" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.964259 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ea9ddc77-8d24-4929-96c7-238e58e40bbe-cert\") pod \"ingress-canary-pm6wc\" (UID: \"ea9ddc77-8d24-4929-96c7-238e58e40bbe\") " pod="openshift-ingress-canary/ingress-canary-pm6wc" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.964386 4874 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/52e48cb6-3564-41f7-8030-f54482605065-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mfmbh\" (UID: \"52e48cb6-3564-41f7-8030-f54482605065\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mfmbh" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.964535 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfcv6\" (UniqueName: \"kubernetes.io/projected/6700af5a-0927-417d-a623-e5bf764df51b-kube-api-access-pfcv6\") pod \"openshift-config-operator-7777fb866f-vxvz6\" (UID: \"6700af5a-0927-417d-a623-e5bf764df51b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vxvz6" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.964706 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c21c3a4-9603-4cd0-a5e3-263aa51d678d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2w9mt\" (UID: \"6c21c3a4-9603-4cd0-a5e3-263aa51d678d\") " pod="openshift-marketplace/marketplace-operator-79b997595-2w9mt" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.964829 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d79a8a2-fadf-4c52-b67b-3091a20cace5-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-d7k8j\" (UID: \"1d79a8a2-fadf-4c52-b67b-3091a20cace5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d7k8j" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.965035 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/8ee6ec56-3fff-4eb3-855a-5e597e4bbba3-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hk8nz\" (UID: \"8ee6ec56-3fff-4eb3-855a-5e597e4bbba3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hk8nz" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.965187 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/600c5b21-a46e-4644-8f1d-55fa0b4d06dd-metrics-tls\") pod \"dns-default-jt8g9\" (UID: \"600c5b21-a46e-4644-8f1d-55fa0b4d06dd\") " pod="openshift-dns/dns-default-jt8g9" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.965327 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6c21c3a4-9603-4cd0-a5e3-263aa51d678d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2w9mt\" (UID: \"6c21c3a4-9603-4cd0-a5e3-263aa51d678d\") " pod="openshift-marketplace/marketplace-operator-79b997595-2w9mt" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.966238 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlv4x\" (UniqueName: \"kubernetes.io/projected/1d79a8a2-fadf-4c52-b67b-3091a20cace5-kube-api-access-wlv4x\") pod \"package-server-manager-789f6589d5-d7k8j\" (UID: \"1d79a8a2-fadf-4c52-b67b-3091a20cace5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d7k8j" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.966529 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dbdeec10-9456-46f3-a08b-6fe084f5865e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-64tmb\" (UID: \"dbdeec10-9456-46f3-a08b-6fe084f5865e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-64tmb" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.966716 4874 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/4f63eb58-f30b-41f4-b569-a7906802fcb4-signing-key\") pod \"service-ca-9c57cc56f-k7w57\" (UID: \"4f63eb58-f30b-41f4-b569-a7906802fcb4\") " pod="openshift-service-ca/service-ca-9c57cc56f-k7w57" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.966864 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gskds\" (UniqueName: \"kubernetes.io/projected/6c21c3a4-9603-4cd0-a5e3-263aa51d678d-kube-api-access-gskds\") pod \"marketplace-operator-79b997595-2w9mt\" (UID: \"6c21c3a4-9603-4cd0-a5e3-263aa51d678d\") " pod="openshift-marketplace/marketplace-operator-79b997595-2w9mt" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.967165 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b2a3365-4901-45b8-b528-0961dad4cf66-secret-volume\") pod \"collect-profiles-29522400-b59b9\" (UID: \"3b2a3365-4901-45b8-b528-0961dad4cf66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-b59b9" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.967303 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjmn8\" (UniqueName: \"kubernetes.io/projected/52e48cb6-3564-41f7-8030-f54482605065-kube-api-access-mjmn8\") pod \"control-plane-machine-set-operator-78cbb6b69f-mfmbh\" (UID: \"52e48cb6-3564-41f7-8030-f54482605065\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mfmbh" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.967447 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/24203bde-9d97-4574-a15e-56bd86395bf4-apiservice-cert\") pod \"packageserver-d55dfcdfc-bcj8r\" (UID: \"24203bde-9d97-4574-a15e-56bd86395bf4\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bcj8r" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.967583 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65j8s\" (UniqueName: \"kubernetes.io/projected/ea9ddc77-8d24-4929-96c7-238e58e40bbe-kube-api-access-65j8s\") pod \"ingress-canary-pm6wc\" (UID: \"ea9ddc77-8d24-4929-96c7-238e58e40bbe\") " pod="openshift-ingress-canary/ingress-canary-pm6wc" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.967717 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac5f5138-7075-4b42-b2f7-7eb4b7c18fea-service-ca-bundle\") pod \"router-default-5444994796-pmtgc\" (UID: \"ac5f5138-7075-4b42-b2f7-7eb4b7c18fea\") " pod="openshift-ingress/router-default-5444994796-pmtgc" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.967880 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h49z\" (UniqueName: \"kubernetes.io/projected/ea2cbe06-9c98-4418-9122-a98dbae2460d-kube-api-access-4h49z\") pod \"catalog-operator-68c6474976-qnm5s\" (UID: \"ea2cbe06-9c98-4418-9122-a98dbae2460d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qnm5s" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.968123 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/4f63eb58-f30b-41f4-b569-a7906802fcb4-signing-cabundle\") pod \"service-ca-9c57cc56f-k7w57\" (UID: \"4f63eb58-f30b-41f4-b569-a7906802fcb4\") " pod="openshift-service-ca/service-ca-9c57cc56f-k7w57" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.968360 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk7p7\" (UniqueName: \"kubernetes.io/projected/600c5b21-a46e-4644-8f1d-55fa0b4d06dd-kube-api-access-bk7p7\") pod 
\"dns-default-jt8g9\" (UID: \"600c5b21-a46e-4644-8f1d-55fa0b4d06dd\") " pod="openshift-dns/dns-default-jt8g9" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.968607 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ea2cbe06-9c98-4418-9122-a98dbae2460d-srv-cert\") pod \"catalog-operator-68c6474976-qnm5s\" (UID: \"ea2cbe06-9c98-4418-9122-a98dbae2460d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qnm5s" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.968856 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfnp6\" (UniqueName: \"kubernetes.io/projected/3b2a3365-4901-45b8-b528-0961dad4cf66-kube-api-access-qfnp6\") pod \"collect-profiles-29522400-b59b9\" (UID: \"3b2a3365-4901-45b8-b528-0961dad4cf66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-b59b9" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.969154 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38151ea5-4428-4a24-95ce-a02e586a83ce-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ggrcz\" (UID: \"38151ea5-4428-4a24-95ce-a02e586a83ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ggrcz" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.969440 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/24203bde-9d97-4574-a15e-56bd86395bf4-tmpfs\") pod \"packageserver-d55dfcdfc-bcj8r\" (UID: \"24203bde-9d97-4574-a15e-56bd86395bf4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bcj8r" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.969689 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/38151ea5-4428-4a24-95ce-a02e586a83ce-serving-cert\") pod \"authentication-operator-69f744f599-ggrcz\" (UID: \"38151ea5-4428-4a24-95ce-a02e586a83ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ggrcz" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.969824 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/24203bde-9d97-4574-a15e-56bd86395bf4-webhook-cert\") pod \"packageserver-d55dfcdfc-bcj8r\" (UID: \"24203bde-9d97-4574-a15e-56bd86395bf4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bcj8r" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.970018 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/24203bde-9d97-4574-a15e-56bd86395bf4-tmpfs\") pod \"packageserver-d55dfcdfc-bcj8r\" (UID: \"24203bde-9d97-4574-a15e-56bd86395bf4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bcj8r" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.970539 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq7pv\" (UniqueName: \"kubernetes.io/projected/24203bde-9d97-4574-a15e-56bd86395bf4-kube-api-access-pq7pv\") pod \"packageserver-d55dfcdfc-bcj8r\" (UID: \"24203bde-9d97-4574-a15e-56bd86395bf4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bcj8r" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.970810 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b2a3365-4901-45b8-b528-0961dad4cf66-config-volume\") pod \"collect-profiles-29522400-b59b9\" (UID: \"3b2a3365-4901-45b8-b528-0961dad4cf66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-b59b9" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.970932 4874 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjhb8\" (UniqueName: \"kubernetes.io/projected/38151ea5-4428-4a24-95ce-a02e586a83ce-kube-api-access-pjhb8\") pod \"authentication-operator-69f744f599-ggrcz\" (UID: \"38151ea5-4428-4a24-95ce-a02e586a83ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ggrcz" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.971049 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13ed2de5-5f56-4d15-8ded-3e5bd15b511a-config\") pod \"service-ca-operator-777779d784-8w5fg\" (UID: \"13ed2de5-5f56-4d15-8ded-3e5bd15b511a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8w5fg" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.971174 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac5f5138-7075-4b42-b2f7-7eb4b7c18fea-metrics-certs\") pod \"router-default-5444994796-pmtgc\" (UID: \"ac5f5138-7075-4b42-b2f7-7eb4b7c18fea\") " pod="openshift-ingress/router-default-5444994796-pmtgc" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.971308 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmjqp\" (UniqueName: \"kubernetes.io/projected/dbdeec10-9456-46f3-a08b-6fe084f5865e-kube-api-access-jmjqp\") pod \"multus-admission-controller-857f4d67dd-64tmb\" (UID: \"dbdeec10-9456-46f3-a08b-6fe084f5865e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-64tmb" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.971442 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6700af5a-0927-417d-a623-e5bf764df51b-serving-cert\") pod \"openshift-config-operator-7777fb866f-vxvz6\" (UID: \"6700af5a-0927-417d-a623-e5bf764df51b\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-vxvz6" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.971566 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bxsj\" (UniqueName: \"kubernetes.io/projected/d985a553-61af-46e7-a559-16dd4629929c-kube-api-access-8bxsj\") pod \"migrator-59844c95c7-hpbgn\" (UID: \"d985a553-61af-46e7-a559-16dd4629929c\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hpbgn" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.971706 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/600c5b21-a46e-4644-8f1d-55fa0b4d06dd-config-volume\") pod \"dns-default-jt8g9\" (UID: \"600c5b21-a46e-4644-8f1d-55fa0b4d06dd\") " pod="openshift-dns/dns-default-jt8g9" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.971808 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldbx6\" (UniqueName: \"kubernetes.io/projected/4f63eb58-f30b-41f4-b569-a7906802fcb4-kube-api-access-ldbx6\") pod \"service-ca-9c57cc56f-k7w57\" (UID: \"4f63eb58-f30b-41f4-b569-a7906802fcb4\") " pod="openshift-service-ca/service-ca-9c57cc56f-k7w57" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.971913 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqg92\" (UniqueName: \"kubernetes.io/projected/ac5f5138-7075-4b42-b2f7-7eb4b7c18fea-kube-api-access-gqg92\") pod \"router-default-5444994796-pmtgc\" (UID: \"ac5f5138-7075-4b42-b2f7-7eb4b7c18fea\") " pod="openshift-ingress/router-default-5444994796-pmtgc" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.972010 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38151ea5-4428-4a24-95ce-a02e586a83ce-config\") pod \"authentication-operator-69f744f599-ggrcz\" (UID: 
\"38151ea5-4428-4a24-95ce-a02e586a83ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ggrcz" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.972182 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ea2cbe06-9c98-4418-9122-a98dbae2460d-profile-collector-cert\") pod \"catalog-operator-68c6474976-qnm5s\" (UID: \"ea2cbe06-9c98-4418-9122-a98dbae2460d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qnm5s" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.973725 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.993569 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 17 16:05:29 crc kubenswrapper[4874]: I0217 16:05:29.999234 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16120c4a-9a38-4d39-b5ed-784978d4521f-config\") pod \"kube-apiserver-operator-766d6c64bb-jbd4d\" (UID: \"16120c4a-9a38-4d39-b5ed-784978d4521f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jbd4d" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.013645 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.021919 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16120c4a-9a38-4d39-b5ed-784978d4521f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-jbd4d\" (UID: \"16120c4a-9a38-4d39-b5ed-784978d4521f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jbd4d" Feb 
17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.034051 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.042579 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/435efe4c-197a-43a2-9033-8dc57e98c006-images\") pod \"machine-config-operator-74547568cd-jjhdq\" (UID: \"435efe4c-197a-43a2-9033-8dc57e98c006\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jjhdq" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.066535 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.079278 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.087886 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/435efe4c-197a-43a2-9033-8dc57e98c006-proxy-tls\") pod \"machine-config-operator-74547568cd-jjhdq\" (UID: \"435efe4c-197a-43a2-9033-8dc57e98c006\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jjhdq" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.103991 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.111692 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38151ea5-4428-4a24-95ce-a02e586a83ce-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ggrcz\" (UID: \"38151ea5-4428-4a24-95ce-a02e586a83ce\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-ggrcz" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.114594 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.134520 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.154319 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.165748 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38151ea5-4428-4a24-95ce-a02e586a83ce-serving-cert\") pod \"authentication-operator-69f744f599-ggrcz\" (UID: \"38151ea5-4428-4a24-95ce-a02e586a83ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ggrcz" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.175471 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.195118 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.203710 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38151ea5-4428-4a24-95ce-a02e586a83ce-service-ca-bundle\") pod \"authentication-operator-69f744f599-ggrcz\" (UID: \"38151ea5-4428-4a24-95ce-a02e586a83ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ggrcz" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.214213 4874 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.223451 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38151ea5-4428-4a24-95ce-a02e586a83ce-config\") pod \"authentication-operator-69f744f599-ggrcz\" (UID: \"38151ea5-4428-4a24-95ce-a02e586a83ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ggrcz" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.237689 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.275066 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.287030 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6700af5a-0927-417d-a623-e5bf764df51b-serving-cert\") pod \"openshift-config-operator-7777fb866f-vxvz6\" (UID: \"6700af5a-0927-417d-a623-e5bf764df51b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vxvz6" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.293761 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.299625 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/52e48cb6-3564-41f7-8030-f54482605065-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mfmbh\" (UID: \"52e48cb6-3564-41f7-8030-f54482605065\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mfmbh" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.314668 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.335068 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.355565 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.381557 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.394232 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.414041 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.434452 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.454911 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.474021 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.486257 4874 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ea2cbe06-9c98-4418-9122-a98dbae2460d-srv-cert\") pod \"catalog-operator-68c6474976-qnm5s\" (UID: \"ea2cbe06-9c98-4418-9122-a98dbae2460d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qnm5s" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.494775 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.500345 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8ee6ec56-3fff-4eb3-855a-5e597e4bbba3-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hk8nz\" (UID: \"8ee6ec56-3fff-4eb3-855a-5e597e4bbba3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hk8nz" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.505005 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b2a3365-4901-45b8-b528-0961dad4cf66-secret-volume\") pod \"collect-profiles-29522400-b59b9\" (UID: \"3b2a3365-4901-45b8-b528-0961dad4cf66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-b59b9" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.507678 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ea2cbe06-9c98-4418-9122-a98dbae2460d-profile-collector-cert\") pod \"catalog-operator-68c6474976-qnm5s\" (UID: \"ea2cbe06-9c98-4418-9122-a98dbae2460d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qnm5s" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.514726 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 
16:05:30.534813 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.554782 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.574127 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.592200 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/168d1b1d-27b6-4e4e-82b4-546836063edd-metrics-tls\") pod \"ingress-operator-5b745b69d9-6g2fs\" (UID: \"168d1b1d-27b6-4e4e-82b4-546836063edd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6g2fs" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.609483 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.614590 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.615875 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/168d1b1d-27b6-4e4e-82b4-546836063edd-trusted-ca\") pod \"ingress-operator-5b745b69d9-6g2fs\" (UID: \"168d1b1d-27b6-4e4e-82b4-546836063edd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6g2fs" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.635825 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.655071 4874 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"image-registry-tls" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.674373 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.692666 4874 request.go:700] Waited for 1.01157804s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-admission-controller-secret&limit=500&resourceVersion=0 Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.695062 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.701741 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dbdeec10-9456-46f3-a08b-6fe084f5865e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-64tmb\" (UID: \"dbdeec10-9456-46f3-a08b-6fe084f5865e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-64tmb" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.714325 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.734743 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.745286 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/24203bde-9d97-4574-a15e-56bd86395bf4-apiservice-cert\") pod \"packageserver-d55dfcdfc-bcj8r\" (UID: \"24203bde-9d97-4574-a15e-56bd86395bf4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bcj8r" Feb 17 16:05:30 crc 
kubenswrapper[4874]: I0217 16:05:30.746831 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/24203bde-9d97-4574-a15e-56bd86395bf4-webhook-cert\") pod \"packageserver-d55dfcdfc-bcj8r\" (UID: \"24203bde-9d97-4574-a15e-56bd86395bf4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bcj8r" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.754745 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.773342 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.787673 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ac5f5138-7075-4b42-b2f7-7eb4b7c18fea-stats-auth\") pod \"router-default-5444994796-pmtgc\" (UID: \"ac5f5138-7075-4b42-b2f7-7eb4b7c18fea\") " pod="openshift-ingress/router-default-5444994796-pmtgc" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.795484 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.805711 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac5f5138-7075-4b42-b2f7-7eb4b7c18fea-metrics-certs\") pod \"router-default-5444994796-pmtgc\" (UID: \"ac5f5138-7075-4b42-b2f7-7eb4b7c18fea\") " pod="openshift-ingress/router-default-5444994796-pmtgc" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.813952 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.834407 4874 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress"/"service-ca-bundle" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.839864 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac5f5138-7075-4b42-b2f7-7eb4b7c18fea-service-ca-bundle\") pod \"router-default-5444994796-pmtgc\" (UID: \"ac5f5138-7075-4b42-b2f7-7eb4b7c18fea\") " pod="openshift-ingress/router-default-5444994796-pmtgc" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.855451 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.874483 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.888413 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ac5f5138-7075-4b42-b2f7-7eb4b7c18fea-default-certificate\") pod \"router-default-5444994796-pmtgc\" (UID: \"ac5f5138-7075-4b42-b2f7-7eb4b7c18fea\") " pod="openshift-ingress/router-default-5444994796-pmtgc" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.895424 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.910959 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d79a8a2-fadf-4c52-b67b-3091a20cace5-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-d7k8j\" (UID: \"1d79a8a2-fadf-4c52-b67b-3091a20cace5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d7k8j" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.914609 4874 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.920564 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6c21c3a4-9603-4cd0-a5e3-263aa51d678d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2w9mt\" (UID: \"6c21c3a4-9603-4cd0-a5e3-263aa51d678d\") " pod="openshift-marketplace/marketplace-operator-79b997595-2w9mt" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.934671 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.954465 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 17 16:05:30 crc kubenswrapper[4874]: E0217 16:05:30.962169 4874 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 17 16:05:30 crc kubenswrapper[4874]: E0217 16:05:30.962265 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ee6ec56-3fff-4eb3-855a-5e597e4bbba3-srv-cert podName:8ee6ec56-3fff-4eb3-855a-5e597e4bbba3 nodeName:}" failed. No retries permitted until 2026-02-17 16:05:31.462238994 +0000 UTC m=+141.756627595 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8ee6ec56-3fff-4eb3-855a-5e597e4bbba3-srv-cert") pod "olm-operator-6b444d44fb-hk8nz" (UID: "8ee6ec56-3fff-4eb3-855a-5e597e4bbba3") : failed to sync secret cache: timed out waiting for the condition Feb 17 16:05:30 crc kubenswrapper[4874]: E0217 16:05:30.963438 4874 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 17 16:05:30 crc kubenswrapper[4874]: E0217 16:05:30.963643 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ed2de5-5f56-4d15-8ded-3e5bd15b511a-serving-cert podName:13ed2de5-5f56-4d15-8ded-3e5bd15b511a nodeName:}" failed. No retries permitted until 2026-02-17 16:05:31.463614908 +0000 UTC m=+141.758003519 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/13ed2de5-5f56-4d15-8ded-3e5bd15b511a-serving-cert") pod "service-ca-operator-777779d784-8w5fg" (UID: "13ed2de5-5f56-4d15-8ded-3e5bd15b511a") : failed to sync secret cache: timed out waiting for the condition Feb 17 16:05:30 crc kubenswrapper[4874]: E0217 16:05:30.965056 4874 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 17 16:05:30 crc kubenswrapper[4874]: E0217 16:05:30.965137 4874 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 17 16:05:30 crc kubenswrapper[4874]: E0217 16:05:30.965247 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6c21c3a4-9603-4cd0-a5e3-263aa51d678d-marketplace-trusted-ca podName:6c21c3a4-9603-4cd0-a5e3-263aa51d678d nodeName:}" failed. 
No retries permitted until 2026-02-17 16:05:31.465184836 +0000 UTC m=+141.759573437 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/6c21c3a4-9603-4cd0-a5e3-263aa51d678d-marketplace-trusted-ca") pod "marketplace-operator-79b997595-2w9mt" (UID: "6c21c3a4-9603-4cd0-a5e3-263aa51d678d") : failed to sync configmap cache: timed out waiting for the condition Feb 17 16:05:30 crc kubenswrapper[4874]: E0217 16:05:30.965278 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea9ddc77-8d24-4929-96c7-238e58e40bbe-cert podName:ea9ddc77-8d24-4929-96c7-238e58e40bbe nodeName:}" failed. No retries permitted until 2026-02-17 16:05:31.465264768 +0000 UTC m=+141.759653369 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ea9ddc77-8d24-4929-96c7-238e58e40bbe-cert") pod "ingress-canary-pm6wc" (UID: "ea9ddc77-8d24-4929-96c7-238e58e40bbe") : failed to sync secret cache: timed out waiting for the condition Feb 17 16:05:30 crc kubenswrapper[4874]: E0217 16:05:30.965398 4874 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Feb 17 16:05:30 crc kubenswrapper[4874]: E0217 16:05:30.965553 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/600c5b21-a46e-4644-8f1d-55fa0b4d06dd-metrics-tls podName:600c5b21-a46e-4644-8f1d-55fa0b4d06dd nodeName:}" failed. No retries permitted until 2026-02-17 16:05:31.465495223 +0000 UTC m=+141.759883814 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/600c5b21-a46e-4644-8f1d-55fa0b4d06dd-metrics-tls") pod "dns-default-jt8g9" (UID: "600c5b21-a46e-4644-8f1d-55fa0b4d06dd") : failed to sync secret cache: timed out waiting for the condition Feb 17 16:05:30 crc kubenswrapper[4874]: E0217 16:05:30.967142 4874 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Feb 17 16:05:30 crc kubenswrapper[4874]: E0217 16:05:30.967256 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f63eb58-f30b-41f4-b569-a7906802fcb4-signing-key podName:4f63eb58-f30b-41f4-b569-a7906802fcb4 nodeName:}" failed. No retries permitted until 2026-02-17 16:05:31.467237546 +0000 UTC m=+141.761626147 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/4f63eb58-f30b-41f4-b569-a7906802fcb4-signing-key") pod "service-ca-9c57cc56f-k7w57" (UID: "4f63eb58-f30b-41f4-b569-a7906802fcb4") : failed to sync secret cache: timed out waiting for the condition Feb 17 16:05:30 crc kubenswrapper[4874]: E0217 16:05:30.968385 4874 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Feb 17 16:05:30 crc kubenswrapper[4874]: E0217 16:05:30.968514 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f63eb58-f30b-41f4-b569-a7906802fcb4-signing-cabundle podName:4f63eb58-f30b-41f4-b569-a7906802fcb4 nodeName:}" failed. No retries permitted until 2026-02-17 16:05:31.468483416 +0000 UTC m=+141.762872057 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/4f63eb58-f30b-41f4-b569-a7906802fcb4-signing-cabundle") pod "service-ca-9c57cc56f-k7w57" (UID: "4f63eb58-f30b-41f4-b569-a7906802fcb4") : failed to sync configmap cache: timed out waiting for the condition Feb 17 16:05:30 crc kubenswrapper[4874]: E0217 16:05:30.971338 4874 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 17 16:05:30 crc kubenswrapper[4874]: E0217 16:05:30.971414 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ed2de5-5f56-4d15-8ded-3e5bd15b511a-config podName:13ed2de5-5f56-4d15-8ded-3e5bd15b511a nodeName:}" failed. No retries permitted until 2026-02-17 16:05:31.471394447 +0000 UTC m=+141.765783038 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/13ed2de5-5f56-4d15-8ded-3e5bd15b511a-config") pod "service-ca-operator-777779d784-8w5fg" (UID: "13ed2de5-5f56-4d15-8ded-3e5bd15b511a") : failed to sync configmap cache: timed out waiting for the condition Feb 17 16:05:30 crc kubenswrapper[4874]: E0217 16:05:30.971438 4874 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Feb 17 16:05:30 crc kubenswrapper[4874]: E0217 16:05:30.971535 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3b2a3365-4901-45b8-b528-0961dad4cf66-config-volume podName:3b2a3365-4901-45b8-b528-0961dad4cf66 nodeName:}" failed. No retries permitted until 2026-02-17 16:05:31.471509749 +0000 UTC m=+141.765898350 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3b2a3365-4901-45b8-b528-0961dad4cf66-config-volume") pod "collect-profiles-29522400-b59b9" (UID: "3b2a3365-4901-45b8-b528-0961dad4cf66") : failed to sync configmap cache: timed out waiting for the condition Feb 17 16:05:30 crc kubenswrapper[4874]: E0217 16:05:30.972616 4874 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Feb 17 16:05:30 crc kubenswrapper[4874]: E0217 16:05:30.972684 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/600c5b21-a46e-4644-8f1d-55fa0b4d06dd-config-volume podName:600c5b21-a46e-4644-8f1d-55fa0b4d06dd nodeName:}" failed. No retries permitted until 2026-02-17 16:05:31.472667908 +0000 UTC m=+141.767056499 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/600c5b21-a46e-4644-8f1d-55fa0b4d06dd-config-volume") pod "dns-default-jt8g9" (UID: "600c5b21-a46e-4644-8f1d-55fa0b4d06dd") : failed to sync configmap cache: timed out waiting for the condition Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.992964 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 17 16:05:30 crc kubenswrapper[4874]: I0217 16:05:30.995779 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.014693 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.033919 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.054983 4874 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.074163 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.093933 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.113986 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.134317 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.154889 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.174509 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.197605 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.214064 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.234131 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.254873 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 16:05:31 crc 
kubenswrapper[4874]: I0217 16:05:31.274011 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.295765 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.314491 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.334313 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.355346 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.373725 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.395060 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.441434 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt5z7\" (UniqueName: \"kubernetes.io/projected/be182c78-fa2c-49ab-9ec4-698854f3ca51-kube-api-access-tt5z7\") pod \"route-controller-manager-6576b87f9c-xjzv8\" (UID: \"be182c78-fa2c-49ab-9ec4-698854f3ca51\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xjzv8" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.463830 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjmvd\" (UniqueName: \"kubernetes.io/projected/d9790481-730e-4e06-a338-bd615b4039e2-kube-api-access-tjmvd\") pod \"cluster-image-registry-operator-dc59b4c8b-4j5ws\" (UID: 
\"d9790481-730e-4e06-a338-bd615b4039e2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4j5ws" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.486714 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfbxx\" (UniqueName: \"kubernetes.io/projected/9d6e7ed7-868d-4e75-9d22-7f38d441aadf-kube-api-access-pfbxx\") pod \"apiserver-7bbb656c7d-579zx\" (UID: \"9d6e7ed7-868d-4e75-9d22-7f38d441aadf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-579zx" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.502363 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6mqp\" (UniqueName: \"kubernetes.io/projected/43b2c0f0-9f47-4e75-ac3c-ee4d1f2e1c81-kube-api-access-n6mqp\") pod \"downloads-7954f5f757-fchf8\" (UID: \"43b2c0f0-9f47-4e75-ac3c-ee4d1f2e1c81\") " pod="openshift-console/downloads-7954f5f757-fchf8" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.509413 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/600c5b21-a46e-4644-8f1d-55fa0b4d06dd-config-volume\") pod \"dns-default-jt8g9\" (UID: \"600c5b21-a46e-4644-8f1d-55fa0b4d06dd\") " pod="openshift-dns/dns-default-jt8g9" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.509557 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8ee6ec56-3fff-4eb3-855a-5e597e4bbba3-srv-cert\") pod \"olm-operator-6b444d44fb-hk8nz\" (UID: \"8ee6ec56-3fff-4eb3-855a-5e597e4bbba3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hk8nz" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.510257 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/600c5b21-a46e-4644-8f1d-55fa0b4d06dd-config-volume\") pod \"dns-default-jt8g9\" (UID: 
\"600c5b21-a46e-4644-8f1d-55fa0b4d06dd\") " pod="openshift-dns/dns-default-jt8g9" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.510416 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13ed2de5-5f56-4d15-8ded-3e5bd15b511a-serving-cert\") pod \"service-ca-operator-777779d784-8w5fg\" (UID: \"13ed2de5-5f56-4d15-8ded-3e5bd15b511a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8w5fg" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.510573 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ea9ddc77-8d24-4929-96c7-238e58e40bbe-cert\") pod \"ingress-canary-pm6wc\" (UID: \"ea9ddc77-8d24-4929-96c7-238e58e40bbe\") " pod="openshift-ingress-canary/ingress-canary-pm6wc" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.510650 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c21c3a4-9603-4cd0-a5e3-263aa51d678d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2w9mt\" (UID: \"6c21c3a4-9603-4cd0-a5e3-263aa51d678d\") " pod="openshift-marketplace/marketplace-operator-79b997595-2w9mt" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.510703 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/600c5b21-a46e-4644-8f1d-55fa0b4d06dd-metrics-tls\") pod \"dns-default-jt8g9\" (UID: \"600c5b21-a46e-4644-8f1d-55fa0b4d06dd\") " pod="openshift-dns/dns-default-jt8g9" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.510762 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/4f63eb58-f30b-41f4-b569-a7906802fcb4-signing-key\") pod \"service-ca-9c57cc56f-k7w57\" (UID: \"4f63eb58-f30b-41f4-b569-a7906802fcb4\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-k7w57" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.511672 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/4f63eb58-f30b-41f4-b569-a7906802fcb4-signing-cabundle\") pod \"service-ca-9c57cc56f-k7w57\" (UID: \"4f63eb58-f30b-41f4-b569-a7906802fcb4\") " pod="openshift-service-ca/service-ca-9c57cc56f-k7w57" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.511845 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b2a3365-4901-45b8-b528-0961dad4cf66-config-volume\") pod \"collect-profiles-29522400-b59b9\" (UID: \"3b2a3365-4901-45b8-b528-0961dad4cf66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-b59b9" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.511917 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13ed2de5-5f56-4d15-8ded-3e5bd15b511a-config\") pod \"service-ca-operator-777779d784-8w5fg\" (UID: \"13ed2de5-5f56-4d15-8ded-3e5bd15b511a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8w5fg" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.512848 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c21c3a4-9603-4cd0-a5e3-263aa51d678d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2w9mt\" (UID: \"6c21c3a4-9603-4cd0-a5e3-263aa51d678d\") " pod="openshift-marketplace/marketplace-operator-79b997595-2w9mt" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.513169 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/4f63eb58-f30b-41f4-b569-a7906802fcb4-signing-cabundle\") pod \"service-ca-9c57cc56f-k7w57\" 
(UID: \"4f63eb58-f30b-41f4-b569-a7906802fcb4\") " pod="openshift-service-ca/service-ca-9c57cc56f-k7w57" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.513273 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13ed2de5-5f56-4d15-8ded-3e5bd15b511a-config\") pod \"service-ca-operator-777779d784-8w5fg\" (UID: \"13ed2de5-5f56-4d15-8ded-3e5bd15b511a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8w5fg" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.514866 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8ee6ec56-3fff-4eb3-855a-5e597e4bbba3-srv-cert\") pod \"olm-operator-6b444d44fb-hk8nz\" (UID: \"8ee6ec56-3fff-4eb3-855a-5e597e4bbba3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hk8nz" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.514888 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ea9ddc77-8d24-4929-96c7-238e58e40bbe-cert\") pod \"ingress-canary-pm6wc\" (UID: \"ea9ddc77-8d24-4929-96c7-238e58e40bbe\") " pod="openshift-ingress-canary/ingress-canary-pm6wc" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.516104 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b2a3365-4901-45b8-b528-0961dad4cf66-config-volume\") pod \"collect-profiles-29522400-b59b9\" (UID: \"3b2a3365-4901-45b8-b528-0961dad4cf66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-b59b9" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.516425 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/600c5b21-a46e-4644-8f1d-55fa0b4d06dd-metrics-tls\") pod \"dns-default-jt8g9\" (UID: \"600c5b21-a46e-4644-8f1d-55fa0b4d06dd\") " 
pod="openshift-dns/dns-default-jt8g9" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.518419 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13ed2de5-5f56-4d15-8ded-3e5bd15b511a-serving-cert\") pod \"service-ca-operator-777779d784-8w5fg\" (UID: \"13ed2de5-5f56-4d15-8ded-3e5bd15b511a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8w5fg" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.519117 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d9790481-730e-4e06-a338-bd615b4039e2-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4j5ws\" (UID: \"d9790481-730e-4e06-a338-bd615b4039e2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4j5ws" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.521153 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/4f63eb58-f30b-41f4-b569-a7906802fcb4-signing-key\") pod \"service-ca-9c57cc56f-k7w57\" (UID: \"4f63eb58-f30b-41f4-b569-a7906802fcb4\") " pod="openshift-service-ca/service-ca-9c57cc56f-k7w57" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.544210 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hprf6\" (UniqueName: \"kubernetes.io/projected/f6c5ab25-b40a-4e91-b4e6-811ec8093a2a-kube-api-access-hprf6\") pod \"apiserver-76f77b778f-s464s\" (UID: \"f6c5ab25-b40a-4e91-b4e6-811ec8093a2a\") " pod="openshift-apiserver/apiserver-76f77b778f-s464s" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.552415 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48x5t\" (UniqueName: \"kubernetes.io/projected/e4493714-3270-4b3b-8b07-3d9faa92b110-kube-api-access-48x5t\") pod \"cluster-samples-operator-665b6dd947-rxt2d\" (UID: 
\"e4493714-3270-4b3b-8b07-3d9faa92b110\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rxt2d" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.582934 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtqrl\" (UniqueName: \"kubernetes.io/projected/9fa024ca-53bd-4aeb-a216-26ed6044cf24-kube-api-access-wtqrl\") pod \"controller-manager-879f6c89f-cw7tb\" (UID: \"9fa024ca-53bd-4aeb-a216-26ed6044cf24\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cw7tb" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.602634 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w599d\" (UniqueName: \"kubernetes.io/projected/5c2bc1be-9874-4d6c-b887-4a658d99a909-kube-api-access-w599d\") pod \"etcd-operator-b45778765-v9tn7\" (UID: \"5c2bc1be-9874-4d6c-b887-4a658d99a909\") " pod="openshift-etcd-operator/etcd-operator-b45778765-v9tn7" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.611892 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz9q8\" (UniqueName: \"kubernetes.io/projected/d86c736d-5bee-4763-ac11-c9a2d4bce6d4-kube-api-access-gz9q8\") pod \"machine-approver-56656f9798-zrr5t\" (UID: \"d86c736d-5bee-4763-ac11-c9a2d4bce6d4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zrr5t" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.613392 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zrr5t" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.634913 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 17 16:05:31 crc kubenswrapper[4874]: W0217 16:05:31.635811 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd86c736d_5bee_4763_ac11_c9a2d4bce6d4.slice/crio-eac5eac3653c12bbaad9188fbf0f8cd5684196edb4288f5fe7273946331ffd37 WatchSource:0}: Error finding container eac5eac3653c12bbaad9188fbf0f8cd5684196edb4288f5fe7273946331ffd37: Status 404 returned error can't find the container with id eac5eac3653c12bbaad9188fbf0f8cd5684196edb4288f5fe7273946331ffd37 Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.638338 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xjzv8" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.654259 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.655327 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-v9tn7" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.667761 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-fchf8" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.675322 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.678328 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rxt2d" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.695445 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.703385 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4j5ws" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.713690 4874 request.go:700] Waited for 1.923024637s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/secrets?fieldSelector=metadata.name%3Dcsi-hostpath-provisioner-sa-dockercfg-qd74k&limit=500&resourceVersion=0 Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.716763 4874 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.735143 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.735557 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cw7tb" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.758950 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-s464s" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.776220 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-579zx" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.790755 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6rjq\" (UniqueName: \"kubernetes.io/projected/f81c1252-cad4-4b23-8b84-c5385c96641c-kube-api-access-n6rjq\") pod \"dns-operator-744455d44c-2gktq\" (UID: \"f81c1252-cad4-4b23-8b84-c5385c96641c\") " pod="openshift-dns-operator/dns-operator-744455d44c-2gktq" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.806791 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfpkg\" (UniqueName: \"kubernetes.io/projected/45e5b8ae-4eef-4449-b844-574c3b737ad4-kube-api-access-mfpkg\") pod \"oauth-openshift-558db77b4-2kf8w\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.816044 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6kmh\" (UniqueName: \"kubernetes.io/projected/5097339d-dd80-4346-940d-097455cd8579-kube-api-access-b6kmh\") pod \"console-operator-58897d9998-7mw6t\" (UID: \"5097339d-dd80-4346-940d-097455cd8579\") " pod="openshift-console-operator/console-operator-58897d9998-7mw6t" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.832777 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2gktq" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.837743 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn8wr\" (UniqueName: \"kubernetes.io/projected/ea652a36-9ddd-4c88-8e96-1f66c3ef0edf-kube-api-access-fn8wr\") pod \"kube-storage-version-migrator-operator-b67b599dd-stl9h\" (UID: \"ea652a36-9ddd-4c88-8e96-1f66c3ef0edf\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stl9h" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.842045 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-7mw6t" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.853649 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb25d\" (UniqueName: \"kubernetes.io/projected/5a72263e-c92a-4d11-9751-aa4240676a0e-kube-api-access-zb25d\") pod \"openshift-apiserver-operator-796bbdcf4f-vn6v2\" (UID: \"5a72263e-c92a-4d11-9751-aa4240676a0e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vn6v2" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.870996 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8b7t\" (UniqueName: \"kubernetes.io/projected/58504546-67ad-4e0d-88ea-53fcf0684659-kube-api-access-s8b7t\") pod \"machine-config-controller-84d6567774-pptdb\" (UID: \"58504546-67ad-4e0d-88ea-53fcf0684659\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pptdb" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.887303 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vn6v2" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.889067 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/639cdaa5-0dc8-4709-80c7-37d8c71e6eda-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hkn4z\" (UID: \"639cdaa5-0dc8-4709-80c7-37d8c71e6eda\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hkn4z" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.892827 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.898402 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pptdb" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.905486 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hkn4z" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.908755 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgssd\" (UniqueName: \"kubernetes.io/projected/435efe4c-197a-43a2-9033-8dc57e98c006-kube-api-access-mgssd\") pod \"machine-config-operator-74547568cd-jjhdq\" (UID: \"435efe4c-197a-43a2-9033-8dc57e98c006\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jjhdq" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.914401 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stl9h" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.929986 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jjhdq" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.935813 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5863402f-d384-4df7-96b5-a3ae67599f4c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-84mb2\" (UID: \"5863402f-d384-4df7-96b5-a3ae67599f4c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84mb2" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.953835 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs5lk\" (UniqueName: \"kubernetes.io/projected/70fceb62-f510-491f-a04c-0a2efd5439f7-kube-api-access-cs5lk\") pod \"machine-api-operator-5694c8668f-rxw56\" (UID: \"70fceb62-f510-491f-a04c-0a2efd5439f7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rxw56" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.978930 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cghk\" (UniqueName: \"kubernetes.io/projected/cfccd2a3-037d-4b17-a269-952847ad533a-kube-api-access-7cghk\") pod \"console-f9d7485db-6wpw5\" (UID: \"cfccd2a3-037d-4b17-a269-952847ad533a\") " pod="openshift-console/console-f9d7485db-6wpw5" Feb 17 16:05:31 crc kubenswrapper[4874]: I0217 16:05:31.987417 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/16120c4a-9a38-4d39-b5ed-784978d4521f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-jbd4d\" (UID: \"16120c4a-9a38-4d39-b5ed-784978d4521f\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jbd4d" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.007833 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb99r\" (UniqueName: \"kubernetes.io/projected/59309c0f-86d9-4425-8752-5e57fbbf9827-kube-api-access-jb99r\") pod \"openshift-controller-manager-operator-756b6f6bc6-dpvgb\" (UID: \"59309c0f-86d9-4425-8752-5e57fbbf9827\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dpvgb" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.036131 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/168d1b1d-27b6-4e4e-82b4-546836063edd-bound-sa-token\") pod \"ingress-operator-5b745b69d9-6g2fs\" (UID: \"168d1b1d-27b6-4e4e-82b4-546836063edd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6g2fs" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.048791 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmz2b\" (UniqueName: \"kubernetes.io/projected/13ed2de5-5f56-4d15-8ded-3e5bd15b511a-kube-api-access-pmz2b\") pod \"service-ca-operator-777779d784-8w5fg\" (UID: \"13ed2de5-5f56-4d15-8ded-3e5bd15b511a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8w5fg" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.073654 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9llt\" (UniqueName: \"kubernetes.io/projected/8ee6ec56-3fff-4eb3-855a-5e597e4bbba3-kube-api-access-m9llt\") pod \"olm-operator-6b444d44fb-hk8nz\" (UID: \"8ee6ec56-3fff-4eb3-855a-5e597e4bbba3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hk8nz" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.086785 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8w5fg" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.096899 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcwqh\" (UniqueName: \"kubernetes.io/projected/168d1b1d-27b6-4e4e-82b4-546836063edd-kube-api-access-mcwqh\") pod \"ingress-operator-5b745b69d9-6g2fs\" (UID: \"168d1b1d-27b6-4e4e-82b4-546836063edd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6g2fs" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.100001 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dpvgb" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.103942 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-6wpw5" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.106343 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hk8nz" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.122333 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-rxw56" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.142550 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlv4x\" (UniqueName: \"kubernetes.io/projected/1d79a8a2-fadf-4c52-b67b-3091a20cace5-kube-api-access-wlv4x\") pod \"package-server-manager-789f6589d5-d7k8j\" (UID: \"1d79a8a2-fadf-4c52-b67b-3091a20cace5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d7k8j" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.143577 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfcv6\" (UniqueName: \"kubernetes.io/projected/6700af5a-0927-417d-a623-e5bf764df51b-kube-api-access-pfcv6\") pod \"openshift-config-operator-7777fb866f-vxvz6\" (UID: \"6700af5a-0927-417d-a623-e5bf764df51b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vxvz6" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.149701 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84mb2" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.149823 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gskds\" (UniqueName: \"kubernetes.io/projected/6c21c3a4-9603-4cd0-a5e3-263aa51d678d-kube-api-access-gskds\") pod \"marketplace-operator-79b997595-2w9mt\" (UID: \"6c21c3a4-9603-4cd0-a5e3-263aa51d678d\") " pod="openshift-marketplace/marketplace-operator-79b997595-2w9mt" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.166747 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjmn8\" (UniqueName: \"kubernetes.io/projected/52e48cb6-3564-41f7-8030-f54482605065-kube-api-access-mjmn8\") pod \"control-plane-machine-set-operator-78cbb6b69f-mfmbh\" (UID: \"52e48cb6-3564-41f7-8030-f54482605065\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mfmbh" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.200621 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65j8s\" (UniqueName: \"kubernetes.io/projected/ea9ddc77-8d24-4929-96c7-238e58e40bbe-kube-api-access-65j8s\") pod \"ingress-canary-pm6wc\" (UID: \"ea9ddc77-8d24-4929-96c7-238e58e40bbe\") " pod="openshift-ingress-canary/ingress-canary-pm6wc" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.217429 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h49z\" (UniqueName: \"kubernetes.io/projected/ea2cbe06-9c98-4418-9122-a98dbae2460d-kube-api-access-4h49z\") pod \"catalog-operator-68c6474976-qnm5s\" (UID: \"ea2cbe06-9c98-4418-9122-a98dbae2460d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qnm5s" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.219321 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jbd4d" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.234840 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rxt2d"] Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.237023 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk7p7\" (UniqueName: \"kubernetes.io/projected/600c5b21-a46e-4644-8f1d-55fa0b4d06dd-kube-api-access-bk7p7\") pod \"dns-default-jt8g9\" (UID: \"600c5b21-a46e-4644-8f1d-55fa0b4d06dd\") " pod="openshift-dns/dns-default-jt8g9" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.238740 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-fchf8"] Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.240094 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xjzv8"] Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.242131 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qnm5s" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.248400 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mfmbh" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.248442 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfnp6\" (UniqueName: \"kubernetes.io/projected/3b2a3365-4901-45b8-b528-0961dad4cf66-kube-api-access-qfnp6\") pod \"collect-profiles-29522400-b59b9\" (UID: \"3b2a3365-4901-45b8-b528-0961dad4cf66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-b59b9" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.255197 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vxvz6" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.266958 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq7pv\" (UniqueName: \"kubernetes.io/projected/24203bde-9d97-4574-a15e-56bd86395bf4-kube-api-access-pq7pv\") pod \"packageserver-d55dfcdfc-bcj8r\" (UID: \"24203bde-9d97-4574-a15e-56bd86395bf4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bcj8r" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.286553 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjhb8\" (UniqueName: \"kubernetes.io/projected/38151ea5-4428-4a24-95ce-a02e586a83ce-kube-api-access-pjhb8\") pod \"authentication-operator-69f744f599-ggrcz\" (UID: \"38151ea5-4428-4a24-95ce-a02e586a83ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ggrcz" Feb 17 16:05:32 crc kubenswrapper[4874]: W0217 16:05:32.292310 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe182c78_fa2c_49ab_9ec4_698854f3ca51.slice/crio-5b4b4c31394151ba9123c830cf97d25352e8cbd1ae6695ca44e8c315023f83df WatchSource:0}: Error finding container 
5b4b4c31394151ba9123c830cf97d25352e8cbd1ae6695ca44e8c315023f83df: Status 404 returned error can't find the container with id 5b4b4c31394151ba9123c830cf97d25352e8cbd1ae6695ca44e8c315023f83df Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.313729 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6g2fs" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.321435 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmjqp\" (UniqueName: \"kubernetes.io/projected/dbdeec10-9456-46f3-a08b-6fe084f5865e-kube-api-access-jmjqp\") pod \"multus-admission-controller-857f4d67dd-64tmb\" (UID: \"dbdeec10-9456-46f3-a08b-6fe084f5865e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-64tmb" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.330852 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bxsj\" (UniqueName: \"kubernetes.io/projected/d985a553-61af-46e7-a559-16dd4629929c-kube-api-access-8bxsj\") pod \"migrator-59844c95c7-hpbgn\" (UID: \"d985a553-61af-46e7-a559-16dd4629929c\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hpbgn" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.345304 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zrr5t" event={"ID":"d86c736d-5bee-4763-ac11-c9a2d4bce6d4","Type":"ContainerStarted","Data":"81b928e8eb343615ef9e0fcb3252fca1a5a2de6dd6ffe2ad81b3d31226a34284"} Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.345342 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zrr5t" event={"ID":"d86c736d-5bee-4763-ac11-c9a2d4bce6d4","Type":"ContainerStarted","Data":"6f383833f8aedc626791c2dcd60bf4ad1e556941f001b170ad18ee04990c550f"} Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 
16:05:32.345352 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zrr5t" event={"ID":"d86c736d-5bee-4763-ac11-c9a2d4bce6d4","Type":"ContainerStarted","Data":"eac5eac3653c12bbaad9188fbf0f8cd5684196edb4288f5fe7273946331ffd37"} Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.346328 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-64tmb" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.349962 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xjzv8" event={"ID":"be182c78-fa2c-49ab-9ec4-698854f3ca51","Type":"ContainerStarted","Data":"5b4b4c31394151ba9123c830cf97d25352e8cbd1ae6695ca44e8c315023f83df"} Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.353823 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldbx6\" (UniqueName: \"kubernetes.io/projected/4f63eb58-f30b-41f4-b569-a7906802fcb4-kube-api-access-ldbx6\") pod \"service-ca-9c57cc56f-k7w57\" (UID: \"4f63eb58-f30b-41f4-b569-a7906802fcb4\") " pod="openshift-service-ca/service-ca-9c57cc56f-k7w57" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.354103 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bcj8r" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.358664 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fchf8" event={"ID":"43b2c0f0-9f47-4e75-ac3c-ee4d1f2e1c81","Type":"ContainerStarted","Data":"2b69200627fb61a1eef59aba5e64aa149c510947ab6fef59719bf9141e6fad5f"} Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.370965 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d7k8j" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.374394 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqg92\" (UniqueName: \"kubernetes.io/projected/ac5f5138-7075-4b42-b2f7-7eb4b7c18fea-kube-api-access-gqg92\") pod \"router-default-5444994796-pmtgc\" (UID: \"ac5f5138-7075-4b42-b2f7-7eb4b7c18fea\") " pod="openshift-ingress/router-default-5444994796-pmtgc" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.379140 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2w9mt" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.381200 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-v9tn7"] Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.388227 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4j5ws"] Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.396357 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-k7w57" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.441534 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-b59b9" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.441910 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bbe005ea-f697-473a-8578-91453c7a8331-registry-certificates\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.443317 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.443452 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bbe005ea-f697-473a-8578-91453c7a8331-ca-trust-extracted\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.443631 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bbe005ea-f697-473a-8578-91453c7a8331-bound-sa-token\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.443765 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pknpq\" (UniqueName: \"kubernetes.io/projected/bbe005ea-f697-473a-8578-91453c7a8331-kube-api-access-pknpq\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.443892 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bbe005ea-f697-473a-8578-91453c7a8331-trusted-ca\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.444424 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bbe005ea-f697-473a-8578-91453c7a8331-installation-pull-secrets\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.444541 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bbe005ea-f697-473a-8578-91453c7a8331-registry-tls\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.442068 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pm6wc" Feb 17 16:05:32 crc kubenswrapper[4874]: E0217 16:05:32.445114 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:32.945059656 +0000 UTC m=+143.239448217 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l5nms" (UID: "bbe005ea-f697-473a-8578-91453c7a8331") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.445964 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-jt8g9" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.534986 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-ggrcz" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.545892 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:32 crc kubenswrapper[4874]: E0217 16:05:32.546059 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 16:05:33.046017559 +0000 UTC m=+143.340406130 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.546366 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bbe005ea-f697-473a-8578-91453c7a8331-registry-certificates\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.546511 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f43ef484-ca5b-4c21-8959-d79471c4b21d-registration-dir\") pod \"csi-hostpathplugin-n8bpc\" (UID: \"f43ef484-ca5b-4c21-8959-d79471c4b21d\") " pod="hostpath-provisioner/csi-hostpathplugin-n8bpc" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.546567 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzgbz\" (UniqueName: \"kubernetes.io/projected/f43ef484-ca5b-4c21-8959-d79471c4b21d-kube-api-access-wzgbz\") pod \"csi-hostpathplugin-n8bpc\" (UID: \"f43ef484-ca5b-4c21-8959-d79471c4b21d\") " pod="hostpath-provisioner/csi-hostpathplugin-n8bpc" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.546672 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f43ef484-ca5b-4c21-8959-d79471c4b21d-plugins-dir\") pod \"csi-hostpathplugin-n8bpc\" (UID: \"f43ef484-ca5b-4c21-8959-d79471c4b21d\") " pod="hostpath-provisioner/csi-hostpathplugin-n8bpc" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.546736 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.546787 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bbe005ea-f697-473a-8578-91453c7a8331-ca-trust-extracted\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.546845 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bbe005ea-f697-473a-8578-91453c7a8331-bound-sa-token\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:32 crc kubenswrapper[4874]: E0217 16:05:32.547589 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:33.047578357 +0000 UTC m=+143.341966918 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l5nms" (UID: "bbe005ea-f697-473a-8578-91453c7a8331") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.549056 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f43ef484-ca5b-4c21-8959-d79471c4b21d-csi-data-dir\") pod \"csi-hostpathplugin-n8bpc\" (UID: \"f43ef484-ca5b-4c21-8959-d79471c4b21d\") " pod="hostpath-provisioner/csi-hostpathplugin-n8bpc" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.549156 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pknpq\" (UniqueName: \"kubernetes.io/projected/bbe005ea-f697-473a-8578-91453c7a8331-kube-api-access-pknpq\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.549298 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f43ef484-ca5b-4c21-8959-d79471c4b21d-mountpoint-dir\") pod \"csi-hostpathplugin-n8bpc\" (UID: \"f43ef484-ca5b-4c21-8959-d79471c4b21d\") " pod="hostpath-provisioner/csi-hostpathplugin-n8bpc" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.549447 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bbe005ea-f697-473a-8578-91453c7a8331-ca-trust-extracted\") pod \"image-registry-697d97f7c8-l5nms\" 
(UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.549646 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7qwr\" (UniqueName: \"kubernetes.io/projected/84e3afa3-f1d6-4a4a-aa50-bb9a238f6488-kube-api-access-w7qwr\") pod \"machine-config-server-x5p5n\" (UID: \"84e3afa3-f1d6-4a4a-aa50-bb9a238f6488\") " pod="openshift-machine-config-operator/machine-config-server-x5p5n" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.549703 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bbe005ea-f697-473a-8578-91453c7a8331-trusted-ca\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.550810 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/84e3afa3-f1d6-4a4a-aa50-bb9a238f6488-certs\") pod \"machine-config-server-x5p5n\" (UID: \"84e3afa3-f1d6-4a4a-aa50-bb9a238f6488\") " pod="openshift-machine-config-operator/machine-config-server-x5p5n" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.550903 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bbe005ea-f697-473a-8578-91453c7a8331-installation-pull-secrets\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.551315 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/bbe005ea-f697-473a-8578-91453c7a8331-registry-tls\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.551637 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bbe005ea-f697-473a-8578-91453c7a8331-trusted-ca\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.552153 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/84e3afa3-f1d6-4a4a-aa50-bb9a238f6488-node-bootstrap-token\") pod \"machine-config-server-x5p5n\" (UID: \"84e3afa3-f1d6-4a4a-aa50-bb9a238f6488\") " pod="openshift-machine-config-operator/machine-config-server-x5p5n" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.552372 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f43ef484-ca5b-4c21-8959-d79471c4b21d-socket-dir\") pod \"csi-hostpathplugin-n8bpc\" (UID: \"f43ef484-ca5b-4c21-8959-d79471c4b21d\") " pod="hostpath-provisioner/csi-hostpathplugin-n8bpc" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.553858 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bbe005ea-f697-473a-8578-91453c7a8331-registry-certificates\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.557429 4874 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bbe005ea-f697-473a-8578-91453c7a8331-installation-pull-secrets\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.558378 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bbe005ea-f697-473a-8578-91453c7a8331-registry-tls\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.562235 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hpbgn" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.601943 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bbe005ea-f697-473a-8578-91453c7a8331-bound-sa-token\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.628771 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pknpq\" (UniqueName: \"kubernetes.io/projected/bbe005ea-f697-473a-8578-91453c7a8331-kube-api-access-pknpq\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.656796 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:32 crc kubenswrapper[4874]: E0217 16:05:32.657002 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:33.156970675 +0000 UTC m=+143.451359236 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.657386 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f43ef484-ca5b-4c21-8959-d79471c4b21d-csi-data-dir\") pod \"csi-hostpathplugin-n8bpc\" (UID: \"f43ef484-ca5b-4c21-8959-d79471c4b21d\") " pod="hostpath-provisioner/csi-hostpathplugin-n8bpc" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.657415 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f43ef484-ca5b-4c21-8959-d79471c4b21d-mountpoint-dir\") pod \"csi-hostpathplugin-n8bpc\" (UID: \"f43ef484-ca5b-4c21-8959-d79471c4b21d\") " pod="hostpath-provisioner/csi-hostpathplugin-n8bpc" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.657436 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7qwr\" 
(UniqueName: \"kubernetes.io/projected/84e3afa3-f1d6-4a4a-aa50-bb9a238f6488-kube-api-access-w7qwr\") pod \"machine-config-server-x5p5n\" (UID: \"84e3afa3-f1d6-4a4a-aa50-bb9a238f6488\") " pod="openshift-machine-config-operator/machine-config-server-x5p5n" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.657459 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/84e3afa3-f1d6-4a4a-aa50-bb9a238f6488-certs\") pod \"machine-config-server-x5p5n\" (UID: \"84e3afa3-f1d6-4a4a-aa50-bb9a238f6488\") " pod="openshift-machine-config-operator/machine-config-server-x5p5n" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.657597 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/84e3afa3-f1d6-4a4a-aa50-bb9a238f6488-node-bootstrap-token\") pod \"machine-config-server-x5p5n\" (UID: \"84e3afa3-f1d6-4a4a-aa50-bb9a238f6488\") " pod="openshift-machine-config-operator/machine-config-server-x5p5n" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.657618 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f43ef484-ca5b-4c21-8959-d79471c4b21d-socket-dir\") pod \"csi-hostpathplugin-n8bpc\" (UID: \"f43ef484-ca5b-4c21-8959-d79471c4b21d\") " pod="hostpath-provisioner/csi-hostpathplugin-n8bpc" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.657660 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f43ef484-ca5b-4c21-8959-d79471c4b21d-registration-dir\") pod \"csi-hostpathplugin-n8bpc\" (UID: \"f43ef484-ca5b-4c21-8959-d79471c4b21d\") " pod="hostpath-provisioner/csi-hostpathplugin-n8bpc" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.657680 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-wzgbz\" (UniqueName: \"kubernetes.io/projected/f43ef484-ca5b-4c21-8959-d79471c4b21d-kube-api-access-wzgbz\") pod \"csi-hostpathplugin-n8bpc\" (UID: \"f43ef484-ca5b-4c21-8959-d79471c4b21d\") " pod="hostpath-provisioner/csi-hostpathplugin-n8bpc" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.657704 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f43ef484-ca5b-4c21-8959-d79471c4b21d-plugins-dir\") pod \"csi-hostpathplugin-n8bpc\" (UID: \"f43ef484-ca5b-4c21-8959-d79471c4b21d\") " pod="hostpath-provisioner/csi-hostpathplugin-n8bpc" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.657732 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.657794 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f43ef484-ca5b-4c21-8959-d79471c4b21d-mountpoint-dir\") pod \"csi-hostpathplugin-n8bpc\" (UID: \"f43ef484-ca5b-4c21-8959-d79471c4b21d\") " pod="hostpath-provisioner/csi-hostpathplugin-n8bpc" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.657827 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f43ef484-ca5b-4c21-8959-d79471c4b21d-csi-data-dir\") pod \"csi-hostpathplugin-n8bpc\" (UID: \"f43ef484-ca5b-4c21-8959-d79471c4b21d\") " pod="hostpath-provisioner/csi-hostpathplugin-n8bpc" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.657922 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f43ef484-ca5b-4c21-8959-d79471c4b21d-plugins-dir\") pod \"csi-hostpathplugin-n8bpc\" (UID: \"f43ef484-ca5b-4c21-8959-d79471c4b21d\") " pod="hostpath-provisioner/csi-hostpathplugin-n8bpc" Feb 17 16:05:32 crc kubenswrapper[4874]: E0217 16:05:32.657992 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:33.157980329 +0000 UTC m=+143.452368890 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l5nms" (UID: "bbe005ea-f697-473a-8578-91453c7a8331") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.658017 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f43ef484-ca5b-4c21-8959-d79471c4b21d-registration-dir\") pod \"csi-hostpathplugin-n8bpc\" (UID: \"f43ef484-ca5b-4c21-8959-d79471c4b21d\") " pod="hostpath-provisioner/csi-hostpathplugin-n8bpc" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.658018 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f43ef484-ca5b-4c21-8959-d79471c4b21d-socket-dir\") pod \"csi-hostpathplugin-n8bpc\" (UID: \"f43ef484-ca5b-4c21-8959-d79471c4b21d\") " pod="hostpath-provisioner/csi-hostpathplugin-n8bpc" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.661894 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-pmtgc" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.663240 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/84e3afa3-f1d6-4a4a-aa50-bb9a238f6488-certs\") pod \"machine-config-server-x5p5n\" (UID: \"84e3afa3-f1d6-4a4a-aa50-bb9a238f6488\") " pod="openshift-machine-config-operator/machine-config-server-x5p5n" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.663946 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/84e3afa3-f1d6-4a4a-aa50-bb9a238f6488-node-bootstrap-token\") pod \"machine-config-server-x5p5n\" (UID: \"84e3afa3-f1d6-4a4a-aa50-bb9a238f6488\") " pod="openshift-machine-config-operator/machine-config-server-x5p5n" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.673718 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cw7tb"] Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.689482 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzgbz\" (UniqueName: \"kubernetes.io/projected/f43ef484-ca5b-4c21-8959-d79471c4b21d-kube-api-access-wzgbz\") pod \"csi-hostpathplugin-n8bpc\" (UID: \"f43ef484-ca5b-4c21-8959-d79471c4b21d\") " pod="hostpath-provisioner/csi-hostpathplugin-n8bpc" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.690369 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-s464s"] Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.711897 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7qwr\" (UniqueName: \"kubernetes.io/projected/84e3afa3-f1d6-4a4a-aa50-bb9a238f6488-kube-api-access-w7qwr\") pod \"machine-config-server-x5p5n\" (UID: \"84e3afa3-f1d6-4a4a-aa50-bb9a238f6488\") " 
pod="openshift-machine-config-operator/machine-config-server-x5p5n" Feb 17 16:05:32 crc kubenswrapper[4874]: W0217 16:05:32.735500 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fa024ca_53bd_4aeb_a216_26ed6044cf24.slice/crio-6e55d2385ee4c7c25977516432b40b0a21a78b5de13e9f3d5c473a5042306cd5 WatchSource:0}: Error finding container 6e55d2385ee4c7c25977516432b40b0a21a78b5de13e9f3d5c473a5042306cd5: Status 404 returned error can't find the container with id 6e55d2385ee4c7c25977516432b40b0a21a78b5de13e9f3d5c473a5042306cd5 Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.748027 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-x5p5n" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.759125 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:32 crc kubenswrapper[4874]: E0217 16:05:32.759603 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:33.259584318 +0000 UTC m=+143.553972879 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.786247 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-n8bpc" Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.863466 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:32 crc kubenswrapper[4874]: E0217 16:05:32.863896 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:33.363885333 +0000 UTC m=+143.658273894 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l5nms" (UID: "bbe005ea-f697-473a-8578-91453c7a8331") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:32 crc kubenswrapper[4874]: I0217 16:05:32.964514 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:32 crc kubenswrapper[4874]: E0217 16:05:32.964730 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:33.464701153 +0000 UTC m=+143.759089764 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.066920 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:33 crc kubenswrapper[4874]: E0217 16:05:33.067910 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:33.56789744 +0000 UTC m=+143.862286001 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l5nms" (UID: "bbe005ea-f697-473a-8578-91453c7a8331") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.168382 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:33 crc kubenswrapper[4874]: E0217 16:05:33.168708 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:33.668671019 +0000 UTC m=+143.963059590 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.168880 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:33 crc kubenswrapper[4874]: E0217 16:05:33.169307 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:33.669291164 +0000 UTC m=+143.963679805 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l5nms" (UID: "bbe005ea-f697-473a-8578-91453c7a8331") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.272060 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:33 crc kubenswrapper[4874]: E0217 16:05:33.272457 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:33.77242772 +0000 UTC m=+144.066816291 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.272576 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:33 crc kubenswrapper[4874]: E0217 16:05:33.272895 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:33.772883801 +0000 UTC m=+144.067272362 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l5nms" (UID: "bbe005ea-f697-473a-8578-91453c7a8331") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.292588 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-7mw6t"] Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.321416 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-579zx"] Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.341213 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stl9h"] Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.375861 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:33 crc kubenswrapper[4874]: E0217 16:05:33.376412 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:33.876393516 +0000 UTC m=+144.170782077 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.387380 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2gktq"] Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.388638 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hkn4z"] Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.392408 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cw7tb" event={"ID":"9fa024ca-53bd-4aeb-a216-26ed6044cf24","Type":"ContainerStarted","Data":"a0424e18c7753675ab0f356ad6ceec1d7f991658ebe677a6ebdf033cf33cd519"} Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.392433 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cw7tb" event={"ID":"9fa024ca-53bd-4aeb-a216-26ed6044cf24","Type":"ContainerStarted","Data":"6e55d2385ee4c7c25977516432b40b0a21a78b5de13e9f3d5c473a5042306cd5"} Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.393111 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-cw7tb" Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.397589 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-579zx" 
event={"ID":"9d6e7ed7-868d-4e75-9d22-7f38d441aadf","Type":"ContainerStarted","Data":"90c2cc42917e75bf8d08132eb7456de747cd86e0be5388ed6ed8a9f74e65fab9"} Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.399875 4874 generic.go:334] "Generic (PLEG): container finished" podID="f6c5ab25-b40a-4e91-b4e6-811ec8093a2a" containerID="eee4f0e5cd6ef5540d5a106c01d4556859bfdaba157d05df809172246dabc2dd" exitCode=0 Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.399955 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-s464s" event={"ID":"f6c5ab25-b40a-4e91-b4e6-811ec8093a2a","Type":"ContainerDied","Data":"eee4f0e5cd6ef5540d5a106c01d4556859bfdaba157d05df809172246dabc2dd"} Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.400009 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-s464s" event={"ID":"f6c5ab25-b40a-4e91-b4e6-811ec8093a2a","Type":"ContainerStarted","Data":"70fcdc460e595a6438aff651b32435a500eabaaac1940dd8b3487132f25d5477"} Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.405633 4874 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-cw7tb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.405683 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-cw7tb" podUID="9fa024ca-53bd-4aeb-a216-26ed6044cf24" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.409156 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vn6v2"] Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.411178 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-pptdb"] Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.414489 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2kf8w"] Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.414526 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-jjhdq"] Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.434531 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rxt2d" event={"ID":"e4493714-3270-4b3b-8b07-3d9faa92b110","Type":"ContainerStarted","Data":"305e0062614248ab9de1a4bd2c6775527985c4c4997af86fd589490356aefbad"} Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.434575 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rxt2d" event={"ID":"e4493714-3270-4b3b-8b07-3d9faa92b110","Type":"ContainerStarted","Data":"5ac11f7ea93bbc7b339564aed5bea23023914fc6f4af32efda5741e890b1221d"} Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.434588 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rxt2d" event={"ID":"e4493714-3270-4b3b-8b07-3d9faa92b110","Type":"ContainerStarted","Data":"d72ed34dabc1a7781f7f74b00c81ee02162b453bb505a1cc0cac124c2222eb4a"} Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.440487 4874 csr.go:261] certificate signing request csr-gbjgr is approved, waiting to be issued Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.449650 4874 csr.go:257] certificate signing request csr-gbjgr is 
issued Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.449987 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xjzv8" event={"ID":"be182c78-fa2c-49ab-9ec4-698854f3ca51","Type":"ContainerStarted","Data":"d5f8382020b257178f22d1bef30c120c0c6d5b04778360b50988ac118c9e6cab"} Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.450696 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xjzv8" Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.463125 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-x5p5n" event={"ID":"84e3afa3-f1d6-4a4a-aa50-bb9a238f6488","Type":"ContainerStarted","Data":"f4c8effe33df93f08a99d2191dc0ffe987f3b18d8b6bce27d887850fb3ca6a27"} Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.463174 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-x5p5n" event={"ID":"84e3afa3-f1d6-4a4a-aa50-bb9a238f6488","Type":"ContainerStarted","Data":"aae595162e87d19b83497aaca3516497089e995e2912379d1060e2b6e2947abd"} Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.477837 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:33 crc kubenswrapper[4874]: E0217 16:05:33.478663 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-17 16:05:33.978652131 +0000 UTC m=+144.273040692 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l5nms" (UID: "bbe005ea-f697-473a-8578-91453c7a8331") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.479269 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-7mw6t" event={"ID":"5097339d-dd80-4346-940d-097455cd8579","Type":"ContainerStarted","Data":"2a9285256593f13efb303d205cee0965c3ef3112eee64b5223f08a6ab91fd4f0"} Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.485191 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4j5ws" event={"ID":"d9790481-730e-4e06-a338-bd615b4039e2","Type":"ContainerStarted","Data":"bb4a2f4dec96b2e6b4952c03ee22357c4c35ae3bc4396a737f9c88610eb32506"} Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.485262 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4j5ws" event={"ID":"d9790481-730e-4e06-a338-bd615b4039e2","Type":"ContainerStarted","Data":"52a0e4b5e920680a3a7bbe2271f714bfb3ae7030e2fd2e786383e600c39f8250"} Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.490102 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-pmtgc" event={"ID":"ac5f5138-7075-4b42-b2f7-7eb4b7c18fea","Type":"ContainerStarted","Data":"3b1660c18f45339dc32c23eab3984105a371cbaf629ce48eb157b96690076b85"} Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.490135 4874 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-pmtgc" event={"ID":"ac5f5138-7075-4b42-b2f7-7eb4b7c18fea","Type":"ContainerStarted","Data":"372a9724cee34344e97a9f4a4bca0aea82b9b05d0678766c32c0029440765b15"} Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.497697 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-v9tn7" event={"ID":"5c2bc1be-9874-4d6c-b887-4a658d99a909","Type":"ContainerStarted","Data":"56de1ce0f8ecf3beba4ac1bf42768778b54bcbf24e68abdfc686143cd4e40e11"} Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.497731 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-v9tn7" event={"ID":"5c2bc1be-9874-4d6c-b887-4a658d99a909","Type":"ContainerStarted","Data":"d9fab7d25a9dfcddb3b532c736d91b4470cb48ae3814cce011b4aaa45079daaf"} Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.515827 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fchf8" event={"ID":"43b2c0f0-9f47-4e75-ac3c-ee4d1f2e1c81","Type":"ContainerStarted","Data":"6dd6fb0b55749c29cca4ab08d7f6f01dda0d0193d02a243bb3be3b54077cf517"} Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.516892 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-fchf8" Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.527312 4874 patch_prober.go:28] interesting pod/downloads-7954f5f757-fchf8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body= Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.527500 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fchf8" podUID="43b2c0f0-9f47-4e75-ac3c-ee4d1f2e1c81" containerName="download-server" 
probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.580590 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:33 crc kubenswrapper[4874]: E0217 16:05:33.582135 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:34.082119024 +0000 UTC m=+144.376507585 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.613266 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hk8nz"] Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.613317 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dpvgb"] Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.613870 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-6wpw5"] Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.619113 
4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-64tmb"] Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.624608 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vxvz6"] Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.631686 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jbd4d"] Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.671610 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-pmtgc" Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.673896 4874 patch_prober.go:28] interesting pod/router-default-5444994796-pmtgc container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.674297 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pmtgc" podUID="ac5f5138-7075-4b42-b2f7-7eb4b7c18fea" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.682241 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:33 crc kubenswrapper[4874]: E0217 16:05:33.682566 4874 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:34.182554415 +0000 UTC m=+144.476942976 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l5nms" (UID: "bbe005ea-f697-473a-8578-91453c7a8331") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.683175 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84mb2"] Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.689333 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mfmbh"] Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.696221 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-8w5fg"] Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.699337 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qnm5s"] Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.702489 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-hpbgn"] Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.705521 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-rxw56"] Feb 17 16:05:33 crc kubenswrapper[4874]: W0217 16:05:33.706190 4874 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6700af5a_0927_417d_a623_e5bf764df51b.slice/crio-f3634f82359e30925540d995eb100629668972176d62f2db2dcc61ef8136182e WatchSource:0}: Error finding container f3634f82359e30925540d995eb100629668972176d62f2db2dcc61ef8136182e: Status 404 returned error can't find the container with id f3634f82359e30925540d995eb100629668972176d62f2db2dcc61ef8136182e Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.725092 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-n8bpc"] Feb 17 16:05:33 crc kubenswrapper[4874]: W0217 16:05:33.728429 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59309c0f_86d9_4425_8752_5e57fbbf9827.slice/crio-7855aabd00107113c9b6fddf91f0f6af37f937453ce4ecd3b197f3e09eed5def WatchSource:0}: Error finding container 7855aabd00107113c9b6fddf91f0f6af37f937453ce4ecd3b197f3e09eed5def: Status 404 returned error can't find the container with id 7855aabd00107113c9b6fddf91f0f6af37f937453ce4ecd3b197f3e09eed5def Feb 17 16:05:33 crc kubenswrapper[4874]: W0217 16:05:33.735203 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea2cbe06_9c98_4418_9122_a98dbae2460d.slice/crio-585fa6391f48d0f4f4b02bfb09de35691f43821bce9947f3d7274c541b462892 WatchSource:0}: Error finding container 585fa6391f48d0f4f4b02bfb09de35691f43821bce9947f3d7274c541b462892: Status 404 returned error can't find the container with id 585fa6391f48d0f4f4b02bfb09de35691f43821bce9947f3d7274c541b462892 Feb 17 16:05:33 crc kubenswrapper[4874]: W0217 16:05:33.741521 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13ed2de5_5f56_4d15_8ded_3e5bd15b511a.slice/crio-f2cc12b1cd41d58fa7d30dd6701498fa9c91dbaac3e267b951c99ad09b2a4290 WatchSource:0}: 
Error finding container f2cc12b1cd41d58fa7d30dd6701498fa9c91dbaac3e267b951c99ad09b2a4290: Status 404 returned error can't find the container with id f2cc12b1cd41d58fa7d30dd6701498fa9c91dbaac3e267b951c99ad09b2a4290 Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.743754 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xjzv8" Feb 17 16:05:33 crc kubenswrapper[4874]: W0217 16:05:33.750354 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5863402f_d384_4df7_96b5_a3ae67599f4c.slice/crio-4531ebd1b29712d0649fe003f50d033b0115b23ad5ae64bf5bc2ee6fde2699da WatchSource:0}: Error finding container 4531ebd1b29712d0649fe003f50d033b0115b23ad5ae64bf5bc2ee6fde2699da: Status 404 returned error can't find the container with id 4531ebd1b29712d0649fe003f50d033b0115b23ad5ae64bf5bc2ee6fde2699da Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.784583 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:33 crc kubenswrapper[4874]: E0217 16:05:33.784872 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:34.284858141 +0000 UTC m=+144.579246702 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.798621 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bcj8r"] Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.874364 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-k7w57"] Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.892443 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:33 crc kubenswrapper[4874]: E0217 16:05:33.892830 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:34.392816094 +0000 UTC m=+144.687204655 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l5nms" (UID: "bbe005ea-f697-473a-8578-91453c7a8331") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:33 crc kubenswrapper[4874]: W0217 16:05:33.900398 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24203bde_9d97_4574_a15e_56bd86395bf4.slice/crio-cd27008a3f3249b6b2973b7744a0d7f21d1547cd1f82deff44644b47e79fccf8 WatchSource:0}: Error finding container cd27008a3f3249b6b2973b7744a0d7f21d1547cd1f82deff44644b47e79fccf8: Status 404 returned error can't find the container with id cd27008a3f3249b6b2973b7744a0d7f21d1547cd1f82deff44644b47e79fccf8 Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.905953 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522400-b59b9"] Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.910555 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2w9mt"] Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.912019 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-pm6wc"] Feb 17 16:05:33 crc kubenswrapper[4874]: W0217 16:05:33.931954 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f63eb58_f30b_41f4_b569_a7906802fcb4.slice/crio-3d6b13f71af4207eb271d5bdb0ad869515f7a0260d3fcc10ec140a5e9f81186b WatchSource:0}: Error finding container 3d6b13f71af4207eb271d5bdb0ad869515f7a0260d3fcc10ec140a5e9f81186b: Status 404 returned error can't find the 
container with id 3d6b13f71af4207eb271d5bdb0ad869515f7a0260d3fcc10ec140a5e9f81186b Feb 17 16:05:33 crc kubenswrapper[4874]: W0217 16:05:33.937460 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b2a3365_4901_45b8_b528_0961dad4cf66.slice/crio-1963d2f9926ce218055f4f121929fc8ae9315ee8ff98913d437bb3916ba80307 WatchSource:0}: Error finding container 1963d2f9926ce218055f4f121929fc8ae9315ee8ff98913d437bb3916ba80307: Status 404 returned error can't find the container with id 1963d2f9926ce218055f4f121929fc8ae9315ee8ff98913d437bb3916ba80307 Feb 17 16:05:33 crc kubenswrapper[4874]: W0217 16:05:33.976647 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c21c3a4_9603_4cd0_a5e3_263aa51d678d.slice/crio-6a23b3ab0783415a98358155cec6285294e8fc21608480b693895b2eab18a251 WatchSource:0}: Error finding container 6a23b3ab0783415a98358155cec6285294e8fc21608480b693895b2eab18a251: Status 404 returned error can't find the container with id 6a23b3ab0783415a98358155cec6285294e8fc21608480b693895b2eab18a251 Feb 17 16:05:33 crc kubenswrapper[4874]: I0217 16:05:33.993626 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:33 crc kubenswrapper[4874]: E0217 16:05:33.994001 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:34.493986013 +0000 UTC m=+144.788374574 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.037015 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d7k8j"] Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.076317 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-jt8g9"] Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.094701 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:34 crc kubenswrapper[4874]: E0217 16:05:34.094990 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:34.594978417 +0000 UTC m=+144.889366968 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l5nms" (UID: "bbe005ea-f697-473a-8578-91453c7a8331") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.106372 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-6g2fs"] Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.134936 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ggrcz"] Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.195296 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:34 crc kubenswrapper[4874]: E0217 16:05:34.195589 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:34.695575341 +0000 UTC m=+144.989963902 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.207550 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-fchf8" podStartSLOduration=124.207534672 podStartE2EDuration="2m4.207534672s" podCreationTimestamp="2026-02-17 16:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:34.20458352 +0000 UTC m=+144.498972081" watchObservedRunningTime="2026-02-17 16:05:34.207534672 +0000 UTC m=+144.501923233" Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.249772 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-pmtgc" podStartSLOduration=123.249753888 podStartE2EDuration="2m3.249753888s" podCreationTimestamp="2026-02-17 16:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:34.248280082 +0000 UTC m=+144.542668653" watchObservedRunningTime="2026-02-17 16:05:34.249753888 +0000 UTC m=+144.544142449" Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.285734 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-x5p5n" podStartSLOduration=5.285717311 podStartE2EDuration="5.285717311s" podCreationTimestamp="2026-02-17 16:05:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:34.281187931 +0000 UTC m=+144.575576502" watchObservedRunningTime="2026-02-17 16:05:34.285717311 +0000 UTC m=+144.580105872" Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.298393 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:34 crc kubenswrapper[4874]: E0217 16:05:34.298733 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:34.798720097 +0000 UTC m=+145.093108668 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l5nms" (UID: "bbe005ea-f697-473a-8578-91453c7a8331") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.329016 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-v9tn7" podStartSLOduration=124.328995613 podStartE2EDuration="2m4.328995613s" podCreationTimestamp="2026-02-17 16:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:34.327305502 +0000 UTC m=+144.621694083" watchObservedRunningTime="2026-02-17 16:05:34.328995613 +0000 UTC m=+144.623384174" Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.399503 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:34 crc kubenswrapper[4874]: E0217 16:05:34.401223 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:34.901206468 +0000 UTC m=+145.195595029 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.419955 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-cw7tb" podStartSLOduration=124.419935343 podStartE2EDuration="2m4.419935343s" podCreationTimestamp="2026-02-17 16:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:34.37042988 +0000 UTC m=+144.664818441" watchObservedRunningTime="2026-02-17 16:05:34.419935343 +0000 UTC m=+144.714323904" Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.448469 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rxt2d" podStartSLOduration=124.448455806 podStartE2EDuration="2m4.448455806s" podCreationTimestamp="2026-02-17 16:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:34.44698187 +0000 UTC m=+144.741370431" watchObservedRunningTime="2026-02-17 16:05:34.448455806 +0000 UTC m=+144.742844367" Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.451158 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-17 16:00:33 +0000 UTC, rotation deadline is 2027-01-08 03:45:24.944256875 +0000 UTC Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.451215 4874 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Waiting 7787h39m50.493043542s for next certificate rotation Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.488662 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4j5ws" podStartSLOduration=124.488646113 podStartE2EDuration="2m4.488646113s" podCreationTimestamp="2026-02-17 16:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:34.486424499 +0000 UTC m=+144.780813060" watchObservedRunningTime="2026-02-17 16:05:34.488646113 +0000 UTC m=+144.783034664" Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.501759 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:34 crc kubenswrapper[4874]: E0217 16:05:34.502112 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:35.002097499 +0000 UTC m=+145.296486060 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l5nms" (UID: "bbe005ea-f697-473a-8578-91453c7a8331") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.528425 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zrr5t" podStartSLOduration=124.528409059 podStartE2EDuration="2m4.528409059s" podCreationTimestamp="2026-02-17 16:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:34.527492606 +0000 UTC m=+144.821881167" watchObservedRunningTime="2026-02-17 16:05:34.528409059 +0000 UTC m=+144.822797620" Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.564127 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xjzv8" podStartSLOduration=123.564108266 podStartE2EDuration="2m3.564108266s" podCreationTimestamp="2026-02-17 16:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:34.562378144 +0000 UTC m=+144.856766695" watchObservedRunningTime="2026-02-17 16:05:34.564108266 +0000 UTC m=+144.858496827" Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.585135 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2gktq" event={"ID":"f81c1252-cad4-4b23-8b84-c5385c96641c","Type":"ContainerStarted","Data":"d4d939a3b36a2c9e92e1b5e3f6dac011d3858a2ff8711a02c95b6ed042574be4"} 
Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.585175 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2gktq" event={"ID":"f81c1252-cad4-4b23-8b84-c5385c96641c","Type":"ContainerStarted","Data":"e9048ffdab7af5c7e2924e0af11f0e08e7af92ed63a45615838c3e98c1476fa2"} Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.602503 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:34 crc kubenswrapper[4874]: E0217 16:05:34.602874 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:35.102845087 +0000 UTC m=+145.397233648 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.606801 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jbd4d" event={"ID":"16120c4a-9a38-4d39-b5ed-784978d4521f","Type":"ContainerStarted","Data":"31cd5f0421788febc2c32e6e890df658408259a12454f417343e10484f88de2e"} Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.606837 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jbd4d" event={"ID":"16120c4a-9a38-4d39-b5ed-784978d4521f","Type":"ContainerStarted","Data":"5d622ba67af41376f3c148e1164a813d49e2cfe73656677e3c259803a6df42aa"} Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.628604 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jbd4d" podStartSLOduration=123.628584173 podStartE2EDuration="2m3.628584173s" podCreationTimestamp="2026-02-17 16:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:34.625526419 +0000 UTC m=+144.919914980" watchObservedRunningTime="2026-02-17 16:05:34.628584173 +0000 UTC m=+144.922972724" Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.653868 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-ggrcz" 
event={"ID":"38151ea5-4428-4a24-95ce-a02e586a83ce","Type":"ContainerStarted","Data":"14f2519f85337789a184d47325528c446677b5ab00721bf2a7ad2e1b51014c7b"} Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.666283 4874 patch_prober.go:28] interesting pod/router-default-5444994796-pmtgc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 16:05:34 crc kubenswrapper[4874]: [-]has-synced failed: reason withheld Feb 17 16:05:34 crc kubenswrapper[4874]: [+]process-running ok Feb 17 16:05:34 crc kubenswrapper[4874]: healthz check failed Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.666327 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pmtgc" podUID="ac5f5138-7075-4b42-b2f7-7eb4b7c18fea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.668571 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jjhdq" event={"ID":"435efe4c-197a-43a2-9033-8dc57e98c006","Type":"ContainerStarted","Data":"e355817330d426d172d7c871142c65efb5108da40d58acf3d3c5b9ecdd8d52ea"} Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.668614 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jjhdq" event={"ID":"435efe4c-197a-43a2-9033-8dc57e98c006","Type":"ContainerStarted","Data":"5eb14377de1f878c15545fe659c6c37ec61a6b483e5931ebf982c86901f641d2"} Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.680025 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-b59b9" 
event={"ID":"3b2a3365-4901-45b8-b528-0961dad4cf66","Type":"ContainerStarted","Data":"1963d2f9926ce218055f4f121929fc8ae9315ee8ff98913d437bb3916ba80307"} Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.696815 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hpbgn" event={"ID":"d985a553-61af-46e7-a559-16dd4629929c","Type":"ContainerStarted","Data":"10aa356ca0e5d35a915a8255ff4295db6e26d362b008c9f66ddaedc40ffb2c84"} Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.696859 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hpbgn" event={"ID":"d985a553-61af-46e7-a559-16dd4629929c","Type":"ContainerStarted","Data":"4c54e7c718a8c6f80f9b7d47177204df8c88491688b82258b5b29a12fd04c771"} Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.699162 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pptdb" event={"ID":"58504546-67ad-4e0d-88ea-53fcf0684659","Type":"ContainerStarted","Data":"dfe1f99ec6a6a0b4ab258be745f6a17854f2ca464334a937d7138155357e6abd"} Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.699186 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pptdb" event={"ID":"58504546-67ad-4e0d-88ea-53fcf0684659","Type":"ContainerStarted","Data":"f19f77197342e8a321b78884e88b9f982ffd5610a5b08feb2333a67c99ca651a"} Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.706467 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 
17 16:05:34 crc kubenswrapper[4874]: E0217 16:05:34.707935 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:35.207923451 +0000 UTC m=+145.502312012 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l5nms" (UID: "bbe005ea-f697-473a-8578-91453c7a8331") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.743372 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d7k8j" event={"ID":"1d79a8a2-fadf-4c52-b67b-3091a20cace5","Type":"ContainerStarted","Data":"3f38a31a111e3df496ecbe023a8c493fb8eba12a8329e9b7adaac422b063130c"} Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.750592 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bcj8r" event={"ID":"24203bde-9d97-4574-a15e-56bd86395bf4","Type":"ContainerStarted","Data":"cd27008a3f3249b6b2973b7744a0d7f21d1547cd1f82deff44644b47e79fccf8"} Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.752041 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bcj8r" Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.760225 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-n8bpc" 
event={"ID":"f43ef484-ca5b-4c21-8959-d79471c4b21d","Type":"ContainerStarted","Data":"f99768cc8a802a0ff3cdc6e3ef34358ebbc0d4cb9ac1de6da712a8a98fe8fe43"} Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.763224 4874 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-bcj8r container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body= Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.763645 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bcj8r" podUID="24203bde-9d97-4574-a15e-56bd86395bf4" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.786013 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vxvz6" event={"ID":"6700af5a-0927-417d-a623-e5bf764df51b","Type":"ContainerStarted","Data":"ec50fc5503a979da7ea9a0095545afe11dfae6d42fd4d52991e1e54470137d04"} Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.786059 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vxvz6" event={"ID":"6700af5a-0927-417d-a623-e5bf764df51b","Type":"ContainerStarted","Data":"f3634f82359e30925540d995eb100629668972176d62f2db2dcc61ef8136182e"} Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.796246 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qnm5s" event={"ID":"ea2cbe06-9c98-4418-9122-a98dbae2460d","Type":"ContainerStarted","Data":"585fa6391f48d0f4f4b02bfb09de35691f43821bce9947f3d7274c541b462892"} Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 
16:05:34.796734 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qnm5s" Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.803630 4874 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-qnm5s container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.803674 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qnm5s" podUID="ea2cbe06-9c98-4418-9122-a98dbae2460d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.809030 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:34 crc kubenswrapper[4874]: E0217 16:05:34.809301 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:35.309277224 +0000 UTC m=+145.603665785 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.813624 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bcj8r" podStartSLOduration=123.813606689 podStartE2EDuration="2m3.813606689s" podCreationTimestamp="2026-02-17 16:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:34.773394252 +0000 UTC m=+145.067782813" watchObservedRunningTime="2026-02-17 16:05:34.813606689 +0000 UTC m=+145.107995250" Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.839456 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mfmbh" event={"ID":"52e48cb6-3564-41f7-8030-f54482605065","Type":"ContainerStarted","Data":"20f60a21d5331d707dcf87fa7c46ce1aaf2d3eba037f3aecaeb1fbd40db3dd47"} Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.839504 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mfmbh" event={"ID":"52e48cb6-3564-41f7-8030-f54482605065","Type":"ContainerStarted","Data":"9f1ef92ada46aab31c0ae1b0a8c2e61f1438bedcab48391ff6b99201abba6b78"} Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.840979 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qnm5s" podStartSLOduration=123.840963904 
podStartE2EDuration="2m3.840963904s" podCreationTimestamp="2026-02-17 16:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:34.840370649 +0000 UTC m=+145.134759210" watchObservedRunningTime="2026-02-17 16:05:34.840963904 +0000 UTC m=+145.135352465" Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.853432 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jt8g9" event={"ID":"600c5b21-a46e-4644-8f1d-55fa0b4d06dd","Type":"ContainerStarted","Data":"aa2f3215fb32f01ec0bb7fa2dee487e386c0f3d11c4a1b52d16ee6ca92c992d8"} Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.867828 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-7mw6t" event={"ID":"5097339d-dd80-4346-940d-097455cd8579","Type":"ContainerStarted","Data":"00b5fe718bb4d84de342e6c8109f9973b34dc89dafecea61fc3b0a0d1f57d1c8"} Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.869019 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-7mw6t" Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.891640 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mfmbh" podStartSLOduration=123.891623325 podStartE2EDuration="2m3.891623325s" podCreationTimestamp="2026-02-17 16:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:34.868295528 +0000 UTC m=+145.162684089" watchObservedRunningTime="2026-02-17 16:05:34.891623325 +0000 UTC m=+145.186011876" Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.892860 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-console-operator/console-operator-58897d9998-7mw6t" podStartSLOduration=124.892839944 podStartE2EDuration="2m4.892839944s" podCreationTimestamp="2026-02-17 16:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:34.891534212 +0000 UTC m=+145.185922773" watchObservedRunningTime="2026-02-17 16:05:34.892839944 +0000 UTC m=+145.187228505" Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.906947 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hk8nz" event={"ID":"8ee6ec56-3fff-4eb3-855a-5e597e4bbba3","Type":"ContainerStarted","Data":"ce27b0e46aa93b6a9c33463264863d302c3c4e366799c3ef39268e8b029ceb50"} Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.906997 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hk8nz" event={"ID":"8ee6ec56-3fff-4eb3-855a-5e597e4bbba3","Type":"ContainerStarted","Data":"ef52fc71d6b3d772b2cdfef00450be84634cf781f34fe1c18b02079e98fc1d4e"} Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.912113 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:34 crc kubenswrapper[4874]: E0217 16:05:34.921390 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:35.421369487 +0000 UTC m=+145.715758048 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l5nms" (UID: "bbe005ea-f697-473a-8578-91453c7a8331") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.930432 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hk8nz" Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.948431 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hk8nz" podStartSLOduration=123.948416605 podStartE2EDuration="2m3.948416605s" podCreationTimestamp="2026-02-17 16:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:34.94700297 +0000 UTC m=+145.241391531" watchObservedRunningTime="2026-02-17 16:05:34.948416605 +0000 UTC m=+145.242805166" Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.950175 4874 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-hk8nz container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.950223 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hk8nz" podUID="8ee6ec56-3fff-4eb3-855a-5e597e4bbba3" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: 
connection refused" Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.977762 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6g2fs" event={"ID":"168d1b1d-27b6-4e4e-82b4-546836063edd","Type":"ContainerStarted","Data":"de6df5502eb60d3c5a3029397602c3cb166a4556da3e21dab01ba3feb34acee2"} Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.995958 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-64tmb" event={"ID":"dbdeec10-9456-46f3-a08b-6fe084f5865e","Type":"ContainerStarted","Data":"21ff484b4edf6922118dbaac0f821eca21a9bbf633446aa411ee17d4b7a2ea80"} Feb 17 16:05:34 crc kubenswrapper[4874]: I0217 16:05:34.996008 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-64tmb" event={"ID":"dbdeec10-9456-46f3-a08b-6fe084f5865e","Type":"ContainerStarted","Data":"de18b5eb971151013d824db9841766dc221568f19d4b7ffc6a0dffbde7fe8d03"} Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.014266 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dpvgb" event={"ID":"59309c0f-86d9-4425-8752-5e57fbbf9827","Type":"ContainerStarted","Data":"a79f342f61b97263aa8c22fc73718be0944ca78de065583c91091095e1df790c"} Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.014311 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dpvgb" event={"ID":"59309c0f-86d9-4425-8752-5e57fbbf9827","Type":"ContainerStarted","Data":"7855aabd00107113c9b6fddf91f0f6af37f937453ce4ecd3b197f3e09eed5def"} Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.014763 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:35 crc kubenswrapper[4874]: E0217 16:05:35.016914 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:35.516893199 +0000 UTC m=+145.811281760 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.045962 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dpvgb" podStartSLOduration=125.045945445 podStartE2EDuration="2m5.045945445s" podCreationTimestamp="2026-02-17 16:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:35.045401921 +0000 UTC m=+145.339790482" watchObservedRunningTime="2026-02-17 16:05:35.045945445 +0000 UTC m=+145.340334006" Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.074306 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-pm6wc" event={"ID":"ea9ddc77-8d24-4929-96c7-238e58e40bbe","Type":"ContainerStarted","Data":"4dd52f1915d49b77669c8826d6fb4bd5fb4cd64f6fa8d775391324232f4816fe"} Feb 17 
16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.109756 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-k7w57" event={"ID":"4f63eb58-f30b-41f4-b569-a7906802fcb4","Type":"ContainerStarted","Data":"3d6b13f71af4207eb271d5bdb0ad869515f7a0260d3fcc10ec140a5e9f81186b"} Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.119922 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:35 crc kubenswrapper[4874]: E0217 16:05:35.124213 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:35.624187646 +0000 UTC m=+145.918576207 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l5nms" (UID: "bbe005ea-f697-473a-8578-91453c7a8331") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.155148 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stl9h" event={"ID":"ea652a36-9ddd-4c88-8e96-1f66c3ef0edf","Type":"ContainerStarted","Data":"3b895021000000fe41c58076bc03163be29172da79c3330002291410cbe3ac6a"} Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.155187 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stl9h" event={"ID":"ea652a36-9ddd-4c88-8e96-1f66c3ef0edf","Type":"ContainerStarted","Data":"5d945abacd44e916b9836da30fbf24ceea836f6618f82190002d188103400eb2"} Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.197783 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-k7w57" podStartSLOduration=124.197768684 podStartE2EDuration="2m4.197768684s" podCreationTimestamp="2026-02-17 16:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:35.157323271 +0000 UTC m=+145.451711832" watchObservedRunningTime="2026-02-17 16:05:35.197768684 +0000 UTC m=+145.492157245" Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.202146 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6wpw5" 
event={"ID":"cfccd2a3-037d-4b17-a269-952847ad533a","Type":"ContainerStarted","Data":"0f00c3811a42ffc5d137518f5a56fd28c6406f9b90b8fa3156b611ace4ca9400"} Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.202193 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6wpw5" event={"ID":"cfccd2a3-037d-4b17-a269-952847ad533a","Type":"ContainerStarted","Data":"b712a4af892e6dc88bb6d9dcc810f0834863fb50a9db5bb02a5dc2cb196b5096"} Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.220910 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:35 crc kubenswrapper[4874]: E0217 16:05:35.222252 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:35.722233468 +0000 UTC m=+146.016622029 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.228374 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-rxw56" event={"ID":"70fceb62-f510-491f-a04c-0a2efd5439f7","Type":"ContainerStarted","Data":"76422f461b3e522872ccdc15d086adc4e05a7b7a55f176a9083f2715730e951f"} Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.228416 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-rxw56" event={"ID":"70fceb62-f510-491f-a04c-0a2efd5439f7","Type":"ContainerStarted","Data":"3e82815df0001618cb90efa95b38c702ca162d1f682255ca35d6b259d656c0ce"} Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.242900 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-stl9h" podStartSLOduration=124.24288423 podStartE2EDuration="2m4.24288423s" podCreationTimestamp="2026-02-17 16:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:35.201234238 +0000 UTC m=+145.495622799" watchObservedRunningTime="2026-02-17 16:05:35.24288423 +0000 UTC m=+145.537272791" Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.251474 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" 
event={"ID":"45e5b8ae-4eef-4449-b844-574c3b737ad4","Type":"ContainerStarted","Data":"f60c01603b08a4686820438f3bf1c32d6eadad5e3f94f11be66c131e08ba29c1"} Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.251517 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" event={"ID":"45e5b8ae-4eef-4449-b844-574c3b737ad4","Type":"ContainerStarted","Data":"457765ac7c16758373e820dd05ddad24dce18c202e85bd000d3b668c45e9ec34"} Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.252529 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.256721 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8w5fg" event={"ID":"13ed2de5-5f56-4d15-8ded-3e5bd15b511a","Type":"ContainerStarted","Data":"4dad57bc9b9d8aa90fd03f26cf1d358d3a7542f4821e2f41c47317bfb95dfcdd"} Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.256768 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8w5fg" event={"ID":"13ed2de5-5f56-4d15-8ded-3e5bd15b511a","Type":"ContainerStarted","Data":"f2cc12b1cd41d58fa7d30dd6701498fa9c91dbaac3e267b951c99ad09b2a4290"} Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.272657 4874 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-2kf8w container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body= Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.272719 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" podUID="45e5b8ae-4eef-4449-b844-574c3b737ad4" containerName="oauth-openshift" probeResult="failure" 
output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.278423 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-6wpw5" podStartSLOduration=125.278410683 podStartE2EDuration="2m5.278410683s" podCreationTimestamp="2026-02-17 16:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:35.244787026 +0000 UTC m=+145.539175587" watchObservedRunningTime="2026-02-17 16:05:35.278410683 +0000 UTC m=+145.572799244" Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.279643 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" podStartSLOduration=124.279637863 podStartE2EDuration="2m4.279637863s" podCreationTimestamp="2026-02-17 16:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:35.277148583 +0000 UTC m=+145.571537144" watchObservedRunningTime="2026-02-17 16:05:35.279637863 +0000 UTC m=+145.574026414" Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.297736 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-7mw6t" Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.301389 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8w5fg" podStartSLOduration=124.301375331 podStartE2EDuration="2m4.301375331s" podCreationTimestamp="2026-02-17 16:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:35.299798523 +0000 UTC m=+145.594187084" 
watchObservedRunningTime="2026-02-17 16:05:35.301375331 +0000 UTC m=+145.595763892" Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.322677 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:35 crc kubenswrapper[4874]: E0217 16:05:35.324669 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:35.824654587 +0000 UTC m=+146.119043148 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l5nms" (UID: "bbe005ea-f697-473a-8578-91453c7a8331") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.343092 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vn6v2" event={"ID":"5a72263e-c92a-4d11-9751-aa4240676a0e","Type":"ContainerStarted","Data":"fd66830fb42800d770d7b0d4f90c6c1902779daa599356921a4be53452d6f9fc"} Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.343132 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vn6v2" 
event={"ID":"5a72263e-c92a-4d11-9751-aa4240676a0e","Type":"ContainerStarted","Data":"0f3c95fda8778104e0c4e6e08c8bb77ba718efa1090ee675f176fb250a7f4626"} Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.410914 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hkn4z" event={"ID":"639cdaa5-0dc8-4709-80c7-37d8c71e6eda","Type":"ContainerStarted","Data":"9a8269275c1c96613271fc6762a1d0ce35094a714ad6e48255ded6d593b1e4bb"} Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.411160 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hkn4z" event={"ID":"639cdaa5-0dc8-4709-80c7-37d8c71e6eda","Type":"ContainerStarted","Data":"1a93a34586121c3f1f4f5cacb839283a09778c94447ed8c260cedb27e54eaf76"} Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.424225 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:35 crc kubenswrapper[4874]: E0217 16:05:35.424333 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:35.924312369 +0000 UTC m=+146.218700930 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.424690 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:35 crc kubenswrapper[4874]: E0217 16:05:35.425024 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:35.925016806 +0000 UTC m=+146.219405367 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l5nms" (UID: "bbe005ea-f697-473a-8578-91453c7a8331") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.434787 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-s464s" event={"ID":"f6c5ab25-b40a-4e91-b4e6-811ec8093a2a","Type":"ContainerStarted","Data":"f2fac1ab1b4675c9ae9dc5278b509e481dfa8fae874346e3f2baf8dc22d975d7"} Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.443039 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vn6v2" podStartSLOduration=124.443025223 podStartE2EDuration="2m4.443025223s" podCreationTimestamp="2026-02-17 16:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:35.438255827 +0000 UTC m=+145.732644388" watchObservedRunningTime="2026-02-17 16:05:35.443025223 +0000 UTC m=+145.737413784" Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.455410 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84mb2" event={"ID":"5863402f-d384-4df7-96b5-a3ae67599f4c","Type":"ContainerStarted","Data":"4531ebd1b29712d0649fe003f50d033b0115b23ad5ae64bf5bc2ee6fde2699da"} Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.492420 4874 generic.go:334] "Generic (PLEG): container finished" podID="9d6e7ed7-868d-4e75-9d22-7f38d441aadf" containerID="3e8504b78105f0a7cd5a92d17384c474ef2d89b61aafba680998c0c3a5b4b109" 
exitCode=0 Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.492481 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-579zx" event={"ID":"9d6e7ed7-868d-4e75-9d22-7f38d441aadf","Type":"ContainerDied","Data":"3e8504b78105f0a7cd5a92d17384c474ef2d89b61aafba680998c0c3a5b4b109"} Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.528350 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:35 crc kubenswrapper[4874]: E0217 16:05:35.528742 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:36.028717386 +0000 UTC m=+146.323105947 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.531699 4874 patch_prober.go:28] interesting pod/downloads-7954f5f757-fchf8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body= Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.531754 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fchf8" podUID="43b2c0f0-9f47-4e75-ac3c-ee4d1f2e1c81" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.531888 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2w9mt" event={"ID":"6c21c3a4-9603-4cd0-a5e3-263aa51d678d","Type":"ContainerStarted","Data":"6a23b3ab0783415a98358155cec6285294e8fc21608480b693895b2eab18a251"} Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.531920 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-2w9mt" Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.550457 4874 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-2w9mt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: 
connection refused" start-of-body= Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.550699 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-2w9mt" podUID="6c21c3a4-9603-4cd0-a5e3-263aa51d678d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.558419 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-cw7tb" Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.630888 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.631389 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hkn4z" podStartSLOduration=124.63135704 podStartE2EDuration="2m4.63135704s" podCreationTimestamp="2026-02-17 16:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:35.512650235 +0000 UTC m=+145.807038796" watchObservedRunningTime="2026-02-17 16:05:35.63135704 +0000 UTC m=+145.925745601" Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.633673 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84mb2" podStartSLOduration=124.633664286 
podStartE2EDuration="2m4.633664286s" podCreationTimestamp="2026-02-17 16:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:35.632343074 +0000 UTC m=+145.926731635" watchObservedRunningTime="2026-02-17 16:05:35.633664286 +0000 UTC m=+145.928052847" Feb 17 16:05:35 crc kubenswrapper[4874]: E0217 16:05:35.645828 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:36.145811691 +0000 UTC m=+146.440200252 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l5nms" (UID: "bbe005ea-f697-473a-8578-91453c7a8331") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.679691 4874 patch_prober.go:28] interesting pod/router-default-5444994796-pmtgc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 16:05:35 crc kubenswrapper[4874]: [-]has-synced failed: reason withheld Feb 17 16:05:35 crc kubenswrapper[4874]: [+]process-running ok Feb 17 16:05:35 crc kubenswrapper[4874]: healthz check failed Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.680018 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pmtgc" podUID="ac5f5138-7075-4b42-b2f7-7eb4b7c18fea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.732133 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:35 crc kubenswrapper[4874]: E0217 16:05:35.732464 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:36.232446926 +0000 UTC m=+146.526835487 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.833654 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:35 crc kubenswrapper[4874]: E0217 16:05:35.834246 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-17 16:05:36.334234719 +0000 UTC m=+146.628623280 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l5nms" (UID: "bbe005ea-f697-473a-8578-91453c7a8331") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.845155 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-2w9mt" podStartSLOduration=124.845140804 podStartE2EDuration="2m4.845140804s" podCreationTimestamp="2026-02-17 16:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:35.803168745 +0000 UTC m=+146.097557316" watchObservedRunningTime="2026-02-17 16:05:35.845140804 +0000 UTC m=+146.139529375" Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.935740 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:35 crc kubenswrapper[4874]: E0217 16:05:35.935977 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:36.435957131 +0000 UTC m=+146.730345692 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:35 crc kubenswrapper[4874]: I0217 16:05:35.936353 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:35 crc kubenswrapper[4874]: E0217 16:05:35.936718 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:36.436706069 +0000 UTC m=+146.731094630 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l5nms" (UID: "bbe005ea-f697-473a-8578-91453c7a8331") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.037499 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:36 crc kubenswrapper[4874]: E0217 16:05:36.037932 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:36.537912969 +0000 UTC m=+146.832301530 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.139264 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:36 crc kubenswrapper[4874]: E0217 16:05:36.139598 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:36.639585709 +0000 UTC m=+146.933974270 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l5nms" (UID: "bbe005ea-f697-473a-8578-91453c7a8331") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.240136 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:36 crc kubenswrapper[4874]: E0217 16:05:36.240496 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:36.740480891 +0000 UTC m=+147.034869442 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.341719 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:36 crc kubenswrapper[4874]: E0217 16:05:36.342136 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:36.842119341 +0000 UTC m=+147.136507902 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l5nms" (UID: "bbe005ea-f697-473a-8578-91453c7a8331") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.443282 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:36 crc kubenswrapper[4874]: E0217 16:05:36.443450 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:36.943424142 +0000 UTC m=+147.237812703 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.443842 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:36 crc kubenswrapper[4874]: E0217 16:05:36.444150 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:36.94413737 +0000 UTC m=+147.238525921 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l5nms" (UID: "bbe005ea-f697-473a-8578-91453c7a8331") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.538493 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2gktq" event={"ID":"f81c1252-cad4-4b23-8b84-c5385c96641c","Type":"ContainerStarted","Data":"6ed387ca8caf4e594a00a710e9a6648189bf72178ebe959df24f806b5fff75ba"} Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.541764 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-ggrcz" event={"ID":"38151ea5-4428-4a24-95ce-a02e586a83ce","Type":"ContainerStarted","Data":"fb9f70201565ef991fd559a056fdc7d75e96d6ad9259c8c056d813f82e7801b1"} Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.545111 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:36 crc kubenswrapper[4874]: E0217 16:05:36.545562 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:37.045543454 +0000 UTC m=+147.339932015 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.552266 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bcj8r" event={"ID":"24203bde-9d97-4574-a15e-56bd86395bf4","Type":"ContainerStarted","Data":"c3daf9c417cf9f7ffb5ca9cefc0d905d22fb1b11ca90e8eebcb9913fe0e0b111"} Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.566244 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-b59b9" event={"ID":"3b2a3365-4901-45b8-b528-0961dad4cf66","Type":"ContainerStarted","Data":"7dbb1cdd0c6aed40daa7f6d829bcfa1c8c3e7d91e4b800c7ec7cad4b2e12ece2"} Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.577197 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-n8bpc" event={"ID":"f43ef484-ca5b-4c21-8959-d79471c4b21d","Type":"ContainerStarted","Data":"7acd1bef7bf2115e8a57bab32bd50a4c09130d22384809d7a8d14a15314f92e1"} Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.587668 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-2gktq" podStartSLOduration=126.587654537 podStartE2EDuration="2m6.587654537s" podCreationTimestamp="2026-02-17 16:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:36.58572772 +0000 UTC m=+146.880116281" watchObservedRunningTime="2026-02-17 
16:05:36.587654537 +0000 UTC m=+146.882043098" Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.590687 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-579zx" event={"ID":"9d6e7ed7-868d-4e75-9d22-7f38d441aadf","Type":"ContainerStarted","Data":"561a4a9e2c1b6ccea028383b7a85dccedd680bf35d183d9ba13f59d959d798ab"} Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.594323 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pptdb" event={"ID":"58504546-67ad-4e0d-88ea-53fcf0684659","Type":"ContainerStarted","Data":"7e01fd8a62b7fbbdd498075c649f4611c1f36c614a9d7cabe3b5ee7cc2cbb419"} Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.597798 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d7k8j" event={"ID":"1d79a8a2-fadf-4c52-b67b-3091a20cace5","Type":"ContainerStarted","Data":"762a7cf77347f312965c13b337d07a60cecdf2cb0fc5f75bf3b61f0827dbfae3"} Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.597873 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d7k8j" event={"ID":"1d79a8a2-fadf-4c52-b67b-3091a20cace5","Type":"ContainerStarted","Data":"c0b09ca23898a17cc890f89c487acb0cd37b00b935de23ff27b1614a8fd88771"} Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.598176 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d7k8j" Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.604037 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-k7w57" event={"ID":"4f63eb58-f30b-41f4-b569-a7906802fcb4","Type":"ContainerStarted","Data":"41345d015eebad4a2968c93ee680f9b5c1302337c42586e5887ce8651d90f6d8"} Feb 17 16:05:36 crc 
kubenswrapper[4874]: I0217 16:05:36.610333 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6g2fs" event={"ID":"168d1b1d-27b6-4e4e-82b4-546836063edd","Type":"ContainerStarted","Data":"e45ca6dec47e52b3b19dda2ee3a49125727c8bdf307892a7bcc1d7bd666f377e"} Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.610446 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6g2fs" event={"ID":"168d1b1d-27b6-4e4e-82b4-546836063edd","Type":"ContainerStarted","Data":"ad35058a820173e48def235639187fb5ca806514a1486288fe8fb89b767a61ab"} Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.613632 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84mb2" event={"ID":"5863402f-d384-4df7-96b5-a3ae67599f4c","Type":"ContainerStarted","Data":"4fa6fd6c719b3eb7ad8a75bd19c82e9663d87b9e572836da25a52ec9563291cd"} Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.617785 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qnm5s" event={"ID":"ea2cbe06-9c98-4418-9122-a98dbae2460d","Type":"ContainerStarted","Data":"58c4fdb21bfb6759bc7a23aa5569f36eb0652346cafe13110a403b37d823f354"} Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.622357 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-pm6wc" event={"ID":"ea9ddc77-8d24-4929-96c7-238e58e40bbe","Type":"ContainerStarted","Data":"7d73dc946c0d75ae4093e715ef7eef971d99bca2f097fd14d4e1d9ff42c2f28e"} Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.624283 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-s464s" 
event={"ID":"f6c5ab25-b40a-4e91-b4e6-811ec8093a2a","Type":"ContainerStarted","Data":"0bab284bd0052207ae0a9ac6737d37a174758b370130d9ed3b1ce29baa1c6490"} Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.625594 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qnm5s" Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.631370 4874 generic.go:334] "Generic (PLEG): container finished" podID="6700af5a-0927-417d-a623-e5bf764df51b" containerID="ec50fc5503a979da7ea9a0095545afe11dfae6d42fd4d52991e1e54470137d04" exitCode=0 Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.631434 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vxvz6" event={"ID":"6700af5a-0927-417d-a623-e5bf764df51b","Type":"ContainerDied","Data":"ec50fc5503a979da7ea9a0095545afe11dfae6d42fd4d52991e1e54470137d04"} Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.631453 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vxvz6" event={"ID":"6700af5a-0927-417d-a623-e5bf764df51b","Type":"ContainerStarted","Data":"e88ed85b44cf9274deb9dc0618887393d1e58afca2e02f1d0c8b646105d5b8b7"} Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.631928 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vxvz6" Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.635654 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jt8g9" event={"ID":"600c5b21-a46e-4644-8f1d-55fa0b4d06dd","Type":"ContainerStarted","Data":"e73b05e82e62505a25d4006ab30bf4581edc333f06e58704df923c6e7f11ac50"} Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.635709 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jt8g9" 
event={"ID":"600c5b21-a46e-4644-8f1d-55fa0b4d06dd","Type":"ContainerStarted","Data":"327406a13114e748d4037289cba5e483699a8a8a00d1197628b36d7abbf4a796"} Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.636278 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-jt8g9" Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.644567 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-b59b9" podStartSLOduration=126.644539549 podStartE2EDuration="2m6.644539549s" podCreationTimestamp="2026-02-17 16:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:36.634032774 +0000 UTC m=+146.928421355" watchObservedRunningTime="2026-02-17 16:05:36.644539549 +0000 UTC m=+146.938928120" Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.646268 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.646304 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2w9mt" event={"ID":"6c21c3a4-9603-4cd0-a5e3-263aa51d678d","Type":"ContainerStarted","Data":"422caac02feae3768bfeba3e03c8bc3434c85fbc6702d1411cf7bab933e432c7"} Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.647245 4874 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-2w9mt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": 
dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.647281 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-2w9mt" podUID="6c21c3a4-9603-4cd0-a5e3-263aa51d678d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" Feb 17 16:05:36 crc kubenswrapper[4874]: E0217 16:05:36.648070 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:37.148054055 +0000 UTC m=+147.442442616 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l5nms" (UID: "bbe005ea-f697-473a-8578-91453c7a8331") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.675192 4874 patch_prober.go:28] interesting pod/router-default-5444994796-pmtgc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 16:05:36 crc kubenswrapper[4874]: [-]has-synced failed: reason withheld Feb 17 16:05:36 crc kubenswrapper[4874]: [+]process-running ok Feb 17 16:05:36 crc kubenswrapper[4874]: healthz check failed Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.675250 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pmtgc" 
podUID="ac5f5138-7075-4b42-b2f7-7eb4b7c18fea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.681879 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-ggrcz" podStartSLOduration=125.681843516 podStartE2EDuration="2m5.681843516s" podCreationTimestamp="2026-02-17 16:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:36.675743188 +0000 UTC m=+146.970131749" watchObservedRunningTime="2026-02-17 16:05:36.681843516 +0000 UTC m=+146.976232077" Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.689596 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hpbgn" event={"ID":"d985a553-61af-46e7-a559-16dd4629929c","Type":"ContainerStarted","Data":"d8e3086062b723d6fa14841d6935449a09bb53ce35706af69451abd6f196c4d9"} Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.721978 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-64tmb" event={"ID":"dbdeec10-9456-46f3-a08b-6fe084f5865e","Type":"ContainerStarted","Data":"da92eca58a684c98d45141565e54c8840a9f13ae613b2ea3e4e23d0572ab24ef"} Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.740034 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pptdb" podStartSLOduration=125.740018809 podStartE2EDuration="2m5.740018809s" podCreationTimestamp="2026-02-17 16:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:36.710326208 +0000 UTC m=+147.004714769" watchObservedRunningTime="2026-02-17 
16:05:36.740018809 +0000 UTC m=+147.034407370" Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.741469 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-rxw56" event={"ID":"70fceb62-f510-491f-a04c-0a2efd5439f7","Type":"ContainerStarted","Data":"ac5e704eb35ed70ddee1056db9e75029ae72e7c54728fbaa751cdbe5f6555bd2"} Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.759303 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:36 crc kubenswrapper[4874]: E0217 16:05:36.760529 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:37.260514528 +0000 UTC m=+147.554903089 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.767835 4874 patch_prober.go:28] interesting pod/downloads-7954f5f757-fchf8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body= Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.767881 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fchf8" podUID="43b2c0f0-9f47-4e75-ac3c-ee4d1f2e1c81" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.769100 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-s464s" Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.769170 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jjhdq" event={"ID":"435efe4c-197a-43a2-9033-8dc57e98c006","Type":"ContainerStarted","Data":"e82e8ae2ecdd019f114cf5a61206364eb1ee999294ab354a17f2d633f9da7235"} Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.769910 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-s464s" Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.792170 4874 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-ingress-canary/ingress-canary-pm6wc" podStartSLOduration=7.792149256 podStartE2EDuration="7.792149256s" podCreationTimestamp="2026-02-17 16:05:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:36.780941364 +0000 UTC m=+147.075329925" watchObservedRunningTime="2026-02-17 16:05:36.792149256 +0000 UTC m=+147.086537827" Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.792661 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6g2fs" podStartSLOduration=125.792652628 podStartE2EDuration="2m5.792652628s" podCreationTimestamp="2026-02-17 16:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:36.740677786 +0000 UTC m=+147.035066347" watchObservedRunningTime="2026-02-17 16:05:36.792652628 +0000 UTC m=+147.087041189" Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.792199 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-579zx" Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.794751 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-579zx" Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.796410 4874 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-579zx container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.9:8443/livez\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.796486 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-579zx" podUID="9d6e7ed7-868d-4e75-9d22-7f38d441aadf" 
containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.9:8443/livez\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.810791 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hk8nz" Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.811440 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vxvz6" podStartSLOduration=126.811424835 podStartE2EDuration="2m6.811424835s" podCreationTimestamp="2026-02-17 16:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:36.811045375 +0000 UTC m=+147.105433936" watchObservedRunningTime="2026-02-17 16:05:36.811424835 +0000 UTC m=+147.105813396" Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.863351 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:36 crc kubenswrapper[4874]: E0217 16:05:36.869832 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:37.369811593 +0000 UTC m=+147.664200154 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l5nms" (UID: "bbe005ea-f697-473a-8578-91453c7a8331") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.959725 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-s464s" podStartSLOduration=125.959694748 podStartE2EDuration="2m5.959694748s" podCreationTimestamp="2026-02-17 16:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:36.959418291 +0000 UTC m=+147.253806852" watchObservedRunningTime="2026-02-17 16:05:36.959694748 +0000 UTC m=+147.254083309" Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.978536 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.980421 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d7k8j" podStartSLOduration=125.980411301 podStartE2EDuration="2m5.980411301s" podCreationTimestamp="2026-02-17 16:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:36.893057598 +0000 UTC m=+147.187446159" watchObservedRunningTime="2026-02-17 16:05:36.980411301 +0000 UTC m=+147.274799862" Feb 17 16:05:36 crc kubenswrapper[4874]: I0217 16:05:36.982138 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:36 crc kubenswrapper[4874]: E0217 16:05:36.996193 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:37.496151873 +0000 UTC m=+147.790540434 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:37 crc kubenswrapper[4874]: I0217 16:05:37.083851 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:37 crc kubenswrapper[4874]: E0217 16:05:37.094543 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:37.584146992 +0000 UTC m=+147.878535563 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l5nms" (UID: "bbe005ea-f697-473a-8578-91453c7a8331") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:37 crc kubenswrapper[4874]: I0217 16:05:37.129181 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-jt8g9" podStartSLOduration=8.129166096 podStartE2EDuration="8.129166096s" podCreationTimestamp="2026-02-17 16:05:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:37.055146047 +0000 UTC m=+147.349534608" watchObservedRunningTime="2026-02-17 16:05:37.129166096 +0000 UTC m=+147.423554657" Feb 17 16:05:37 crc kubenswrapper[4874]: I0217 16:05:37.157301 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-579zx" podStartSLOduration=126.157286008 podStartE2EDuration="2m6.157286008s" podCreationTimestamp="2026-02-17 16:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:37.131195285 +0000 UTC m=+147.425583836" watchObservedRunningTime="2026-02-17 16:05:37.157286008 +0000 UTC m=+147.451674569" Feb 17 16:05:37 crc kubenswrapper[4874]: I0217 16:05:37.186512 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " 
Feb 17 16:05:37 crc kubenswrapper[4874]: E0217 16:05:37.186631 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:37.68660737 +0000 UTC m=+147.980995931 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:37 crc kubenswrapper[4874]: I0217 16:05:37.186763 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:37 crc kubenswrapper[4874]: E0217 16:05:37.187129 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:37.687114243 +0000 UTC m=+147.981502804 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l5nms" (UID: "bbe005ea-f697-473a-8578-91453c7a8331") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:37 crc kubenswrapper[4874]: I0217 16:05:37.207768 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-rxw56" podStartSLOduration=126.207744164 podStartE2EDuration="2m6.207744164s" podCreationTimestamp="2026-02-17 16:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:37.206978545 +0000 UTC m=+147.501367106" watchObservedRunningTime="2026-02-17 16:05:37.207744164 +0000 UTC m=+147.502132725" Feb 17 16:05:37 crc kubenswrapper[4874]: I0217 16:05:37.287715 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:37 crc kubenswrapper[4874]: E0217 16:05:37.288045 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:37.788021925 +0000 UTC m=+148.082410486 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:37 crc kubenswrapper[4874]: I0217 16:05:37.336993 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-64tmb" podStartSLOduration=126.33697634399999 podStartE2EDuration="2m6.336976344s" podCreationTimestamp="2026-02-17 16:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:37.271315939 +0000 UTC m=+147.565704500" watchObservedRunningTime="2026-02-17 16:05:37.336976344 +0000 UTC m=+147.631364905" Feb 17 16:05:37 crc kubenswrapper[4874]: I0217 16:05:37.338643 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hpbgn" podStartSLOduration=126.338637915 podStartE2EDuration="2m6.338637915s" podCreationTimestamp="2026-02-17 16:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:37.335836677 +0000 UTC m=+147.630225238" watchObservedRunningTime="2026-02-17 16:05:37.338637915 +0000 UTC m=+147.633026486" Feb 17 16:05:37 crc kubenswrapper[4874]: I0217 16:05:37.388888 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: 
\"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:37 crc kubenswrapper[4874]: E0217 16:05:37.389300 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:37.889285005 +0000 UTC m=+148.183673566 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l5nms" (UID: "bbe005ea-f697-473a-8578-91453c7a8331") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:37 crc kubenswrapper[4874]: I0217 16:05:37.420021 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jjhdq" podStartSLOduration=126.420003302 podStartE2EDuration="2m6.420003302s" podCreationTimestamp="2026-02-17 16:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:37.41992471 +0000 UTC m=+147.714313271" watchObservedRunningTime="2026-02-17 16:05:37.420003302 +0000 UTC m=+147.714391863" Feb 17 16:05:37 crc kubenswrapper[4874]: I0217 16:05:37.489867 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:37 crc kubenswrapper[4874]: E0217 16:05:37.490028 4874 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:37.990001643 +0000 UTC m=+148.284390204 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:37 crc kubenswrapper[4874]: I0217 16:05:37.490306 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:37 crc kubenswrapper[4874]: E0217 16:05:37.490630 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:37.990623588 +0000 UTC m=+148.285012149 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l5nms" (UID: "bbe005ea-f697-473a-8578-91453c7a8331") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:37 crc kubenswrapper[4874]: I0217 16:05:37.555262 4874 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-bcj8r container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 16:05:37 crc kubenswrapper[4874]: I0217 16:05:37.556187 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bcj8r" podUID="24203bde-9d97-4574-a15e-56bd86395bf4" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 16:05:37 crc kubenswrapper[4874]: I0217 16:05:37.593042 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:37 crc kubenswrapper[4874]: E0217 16:05:37.593205 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 16:05:38.09318007 +0000 UTC m=+148.387568631 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:37 crc kubenswrapper[4874]: I0217 16:05:37.593398 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:37 crc kubenswrapper[4874]: E0217 16:05:37.593702 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:38.093690712 +0000 UTC m=+148.388079273 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l5nms" (UID: "bbe005ea-f697-473a-8578-91453c7a8331") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:37 crc kubenswrapper[4874]: I0217 16:05:37.671529 4874 patch_prober.go:28] interesting pod/router-default-5444994796-pmtgc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 16:05:37 crc kubenswrapper[4874]: [-]has-synced failed: reason withheld Feb 17 16:05:37 crc kubenswrapper[4874]: [+]process-running ok Feb 17 16:05:37 crc kubenswrapper[4874]: healthz check failed Feb 17 16:05:37 crc kubenswrapper[4874]: I0217 16:05:37.671590 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pmtgc" podUID="ac5f5138-7075-4b42-b2f7-7eb4b7c18fea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 16:05:37 crc kubenswrapper[4874]: I0217 16:05:37.694498 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:37 crc kubenswrapper[4874]: E0217 16:05:37.694851 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 16:05:38.19483856 +0000 UTC m=+148.489227121 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:37 crc kubenswrapper[4874]: I0217 16:05:37.773279 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-n8bpc" event={"ID":"f43ef484-ca5b-4c21-8959-d79471c4b21d","Type":"ContainerStarted","Data":"b4abb3b441fe6ce4202fb627fd8d007fc298f380710eee308e2e1da9c1683f0b"} Feb 17 16:05:37 crc kubenswrapper[4874]: I0217 16:05:37.774516 4874 generic.go:334] "Generic (PLEG): container finished" podID="3b2a3365-4901-45b8-b528-0961dad4cf66" containerID="7dbb1cdd0c6aed40daa7f6d829bcfa1c8c3e7d91e4b800c7ec7cad4b2e12ece2" exitCode=0 Feb 17 16:05:37 crc kubenswrapper[4874]: I0217 16:05:37.774645 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-b59b9" event={"ID":"3b2a3365-4901-45b8-b528-0961dad4cf66","Type":"ContainerDied","Data":"7dbb1cdd0c6aed40daa7f6d829bcfa1c8c3e7d91e4b800c7ec7cad4b2e12ece2"} Feb 17 16:05:37 crc kubenswrapper[4874]: I0217 16:05:37.775744 4874 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-2w9mt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Feb 17 16:05:37 crc kubenswrapper[4874]: I0217 16:05:37.775794 4874 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-marketplace/marketplace-operator-79b997595-2w9mt" podUID="6c21c3a4-9603-4cd0-a5e3-263aa51d678d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" Feb 17 16:05:37 crc kubenswrapper[4874]: I0217 16:05:37.795590 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:37 crc kubenswrapper[4874]: E0217 16:05:37.795930 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:38.295918076 +0000 UTC m=+148.590306637 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l5nms" (UID: "bbe005ea-f697-473a-8578-91453c7a8331") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:37 crc kubenswrapper[4874]: I0217 16:05:37.892852 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kdr2g"] Feb 17 16:05:37 crc kubenswrapper[4874]: I0217 16:05:37.893726 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kdr2g" Feb 17 16:05:37 crc kubenswrapper[4874]: I0217 16:05:37.895837 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 17 16:05:37 crc kubenswrapper[4874]: I0217 16:05:37.896384 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:37 crc kubenswrapper[4874]: E0217 16:05:37.896730 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:38.396714726 +0000 UTC m=+148.691103287 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:37 crc kubenswrapper[4874]: I0217 16:05:37.896912 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:37 crc kubenswrapper[4874]: E0217 16:05:37.897247 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:38.397239158 +0000 UTC m=+148.691627709 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l5nms" (UID: "bbe005ea-f697-473a-8578-91453c7a8331") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:37 crc kubenswrapper[4874]: I0217 16:05:37.926767 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kdr2g"] Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.002926 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.003121 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52637c6d-7cd3-4761-b70d-4e07e68a6c5d-catalog-content\") pod \"community-operators-kdr2g\" (UID: \"52637c6d-7cd3-4761-b70d-4e07e68a6c5d\") " pod="openshift-marketplace/community-operators-kdr2g" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.003192 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52637c6d-7cd3-4761-b70d-4e07e68a6c5d-utilities\") pod \"community-operators-kdr2g\" (UID: \"52637c6d-7cd3-4761-b70d-4e07e68a6c5d\") " pod="openshift-marketplace/community-operators-kdr2g" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.003226 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-fxjbz\" (UniqueName: \"kubernetes.io/projected/52637c6d-7cd3-4761-b70d-4e07e68a6c5d-kube-api-access-fxjbz\") pod \"community-operators-kdr2g\" (UID: \"52637c6d-7cd3-4761-b70d-4e07e68a6c5d\") " pod="openshift-marketplace/community-operators-kdr2g" Feb 17 16:05:38 crc kubenswrapper[4874]: E0217 16:05:38.003323 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:38.503306606 +0000 UTC m=+148.797695167 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.080625 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xtlqz"] Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.081774 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xtlqz" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.085899 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.101155 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xtlqz"] Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.104146 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxjbz\" (UniqueName: \"kubernetes.io/projected/52637c6d-7cd3-4761-b70d-4e07e68a6c5d-kube-api-access-fxjbz\") pod \"community-operators-kdr2g\" (UID: \"52637c6d-7cd3-4761-b70d-4e07e68a6c5d\") " pod="openshift-marketplace/community-operators-kdr2g" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.104216 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52637c6d-7cd3-4761-b70d-4e07e68a6c5d-catalog-content\") pod \"community-operators-kdr2g\" (UID: \"52637c6d-7cd3-4761-b70d-4e07e68a6c5d\") " pod="openshift-marketplace/community-operators-kdr2g" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.104644 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52637c6d-7cd3-4761-b70d-4e07e68a6c5d-catalog-content\") pod \"community-operators-kdr2g\" (UID: \"52637c6d-7cd3-4761-b70d-4e07e68a6c5d\") " pod="openshift-marketplace/community-operators-kdr2g" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.104284 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: 
\"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.104724 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52637c6d-7cd3-4761-b70d-4e07e68a6c5d-utilities\") pod \"community-operators-kdr2g\" (UID: \"52637c6d-7cd3-4761-b70d-4e07e68a6c5d\") " pod="openshift-marketplace/community-operators-kdr2g" Feb 17 16:05:38 crc kubenswrapper[4874]: E0217 16:05:38.104743 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:38.60472748 +0000 UTC m=+148.899116041 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l5nms" (UID: "bbe005ea-f697-473a-8578-91453c7a8331") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.104980 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52637c6d-7cd3-4761-b70d-4e07e68a6c5d-utilities\") pod \"community-operators-kdr2g\" (UID: \"52637c6d-7cd3-4761-b70d-4e07e68a6c5d\") " pod="openshift-marketplace/community-operators-kdr2g" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.152910 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxjbz\" (UniqueName: \"kubernetes.io/projected/52637c6d-7cd3-4761-b70d-4e07e68a6c5d-kube-api-access-fxjbz\") pod \"community-operators-kdr2g\" (UID: 
\"52637c6d-7cd3-4761-b70d-4e07e68a6c5d\") " pod="openshift-marketplace/community-operators-kdr2g" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.195504 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bcj8r" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.206268 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:38 crc kubenswrapper[4874]: E0217 16:05:38.206436 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:38.706411001 +0000 UTC m=+149.000799552 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.206482 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28be448a-a2cb-4731-85fa-ec01026d5763-catalog-content\") pod \"certified-operators-xtlqz\" (UID: \"28be448a-a2cb-4731-85fa-ec01026d5763\") " pod="openshift-marketplace/certified-operators-xtlqz" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.206706 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.206739 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28be448a-a2cb-4731-85fa-ec01026d5763-utilities\") pod \"certified-operators-xtlqz\" (UID: \"28be448a-a2cb-4731-85fa-ec01026d5763\") " pod="openshift-marketplace/certified-operators-xtlqz" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.206790 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wknqt\" (UniqueName: 
\"kubernetes.io/projected/28be448a-a2cb-4731-85fa-ec01026d5763-kube-api-access-wknqt\") pod \"certified-operators-xtlqz\" (UID: \"28be448a-a2cb-4731-85fa-ec01026d5763\") " pod="openshift-marketplace/certified-operators-xtlqz" Feb 17 16:05:38 crc kubenswrapper[4874]: E0217 16:05:38.207056 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:38.707029376 +0000 UTC m=+149.001417927 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l5nms" (UID: "bbe005ea-f697-473a-8578-91453c7a8331") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.214093 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kdr2g" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.308225 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.308522 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28be448a-a2cb-4731-85fa-ec01026d5763-utilities\") pod \"certified-operators-xtlqz\" (UID: \"28be448a-a2cb-4731-85fa-ec01026d5763\") " pod="openshift-marketplace/certified-operators-xtlqz" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.308572 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wknqt\" (UniqueName: \"kubernetes.io/projected/28be448a-a2cb-4731-85fa-ec01026d5763-kube-api-access-wknqt\") pod \"certified-operators-xtlqz\" (UID: \"28be448a-a2cb-4731-85fa-ec01026d5763\") " pod="openshift-marketplace/certified-operators-xtlqz" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.308610 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28be448a-a2cb-4731-85fa-ec01026d5763-catalog-content\") pod \"certified-operators-xtlqz\" (UID: \"28be448a-a2cb-4731-85fa-ec01026d5763\") " pod="openshift-marketplace/certified-operators-xtlqz" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.309058 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28be448a-a2cb-4731-85fa-ec01026d5763-catalog-content\") pod \"certified-operators-xtlqz\" (UID: \"28be448a-a2cb-4731-85fa-ec01026d5763\") 
" pod="openshift-marketplace/certified-operators-xtlqz" Feb 17 16:05:38 crc kubenswrapper[4874]: E0217 16:05:38.309151 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:38.809134817 +0000 UTC m=+149.103523368 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.309219 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28be448a-a2cb-4731-85fa-ec01026d5763-utilities\") pod \"certified-operators-xtlqz\" (UID: \"28be448a-a2cb-4731-85fa-ec01026d5763\") " pod="openshift-marketplace/certified-operators-xtlqz" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.357275 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jvjxj"] Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.358382 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jvjxj" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.360955 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wknqt\" (UniqueName: \"kubernetes.io/projected/28be448a-a2cb-4731-85fa-ec01026d5763-kube-api-access-wknqt\") pod \"certified-operators-xtlqz\" (UID: \"28be448a-a2cb-4731-85fa-ec01026d5763\") " pod="openshift-marketplace/certified-operators-xtlqz" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.373155 4874 patch_prober.go:28] interesting pod/apiserver-76f77b778f-s464s container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 17 16:05:38 crc kubenswrapper[4874]: [+]log ok Feb 17 16:05:38 crc kubenswrapper[4874]: [+]etcd ok Feb 17 16:05:38 crc kubenswrapper[4874]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 17 16:05:38 crc kubenswrapper[4874]: [+]poststarthook/generic-apiserver-start-informers ok Feb 17 16:05:38 crc kubenswrapper[4874]: [+]poststarthook/max-in-flight-filter ok Feb 17 16:05:38 crc kubenswrapper[4874]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 17 16:05:38 crc kubenswrapper[4874]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 17 16:05:38 crc kubenswrapper[4874]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 17 16:05:38 crc kubenswrapper[4874]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 17 16:05:38 crc kubenswrapper[4874]: [+]poststarthook/project.openshift.io-projectcache ok Feb 17 16:05:38 crc kubenswrapper[4874]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 17 16:05:38 crc kubenswrapper[4874]: [+]poststarthook/openshift.io-startinformers ok Feb 17 16:05:38 crc kubenswrapper[4874]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 17 
16:05:38 crc kubenswrapper[4874]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 17 16:05:38 crc kubenswrapper[4874]: livez check failed Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.373222 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-s464s" podUID="f6c5ab25-b40a-4e91-b4e6-811ec8093a2a" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.397384 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xtlqz" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.413712 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.413757 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.413806 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.413838 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/777c2139-1b69-4526-b1a0-537c84c3fc02-catalog-content\") pod \"community-operators-jvjxj\" (UID: \"777c2139-1b69-4526-b1a0-537c84c3fc02\") " pod="openshift-marketplace/community-operators-jvjxj" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.413857 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.413878 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/777c2139-1b69-4526-b1a0-537c84c3fc02-utilities\") pod \"community-operators-jvjxj\" (UID: \"777c2139-1b69-4526-b1a0-537c84c3fc02\") " pod="openshift-marketplace/community-operators-jvjxj" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.413911 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp4z7\" (UniqueName: \"kubernetes.io/projected/777c2139-1b69-4526-b1a0-537c84c3fc02-kube-api-access-jp4z7\") pod \"community-operators-jvjxj\" (UID: \"777c2139-1b69-4526-b1a0-537c84c3fc02\") " pod="openshift-marketplace/community-operators-jvjxj" Feb 17 16:05:38 crc kubenswrapper[4874]: E0217 16:05:38.414211 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-02-17 16:05:38.91420009 +0000 UTC m=+149.208588651 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l5nms" (UID: "bbe005ea-f697-473a-8578-91453c7a8331") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.415593 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.419713 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.424810 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.431563 4874 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-marketplace/community-operators-jvjxj"] Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.511437 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7h6dq"] Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.512315 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7h6dq" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.517328 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.517633 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp4z7\" (UniqueName: \"kubernetes.io/projected/777c2139-1b69-4526-b1a0-537c84c3fc02-kube-api-access-jp4z7\") pod \"community-operators-jvjxj\" (UID: \"777c2139-1b69-4526-b1a0-537c84c3fc02\") " pod="openshift-marketplace/community-operators-jvjxj" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.517736 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.517760 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/777c2139-1b69-4526-b1a0-537c84c3fc02-catalog-content\") pod \"community-operators-jvjxj\" (UID: \"777c2139-1b69-4526-b1a0-537c84c3fc02\") " 
pod="openshift-marketplace/community-operators-jvjxj" Feb 17 16:05:38 crc kubenswrapper[4874]: E0217 16:05:38.517818 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:39.017783137 +0000 UTC m=+149.312171698 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.517844 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/777c2139-1b69-4526-b1a0-537c84c3fc02-utilities\") pod \"community-operators-jvjxj\" (UID: \"777c2139-1b69-4526-b1a0-537c84c3fc02\") " pod="openshift-marketplace/community-operators-jvjxj" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.518246 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/777c2139-1b69-4526-b1a0-537c84c3fc02-catalog-content\") pod \"community-operators-jvjxj\" (UID: \"777c2139-1b69-4526-b1a0-537c84c3fc02\") " pod="openshift-marketplace/community-operators-jvjxj" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.518275 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/777c2139-1b69-4526-b1a0-537c84c3fc02-utilities\") pod \"community-operators-jvjxj\" (UID: \"777c2139-1b69-4526-b1a0-537c84c3fc02\") " 
pod="openshift-marketplace/community-operators-jvjxj" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.541851 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.545019 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7h6dq"] Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.547228 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp4z7\" (UniqueName: \"kubernetes.io/projected/777c2139-1b69-4526-b1a0-537c84c3fc02-kube-api-access-jp4z7\") pod \"community-operators-jvjxj\" (UID: \"777c2139-1b69-4526-b1a0-537c84c3fc02\") " pod="openshift-marketplace/community-operators-jvjxj" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.595848 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.602626 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.618859 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.618917 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73d464fc-2d1d-4a29-ae06-5d29503f6545-catalog-content\") pod \"certified-operators-7h6dq\" (UID: \"73d464fc-2d1d-4a29-ae06-5d29503f6545\") " pod="openshift-marketplace/certified-operators-7h6dq" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.618950 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv4xd\" (UniqueName: \"kubernetes.io/projected/73d464fc-2d1d-4a29-ae06-5d29503f6545-kube-api-access-jv4xd\") pod \"certified-operators-7h6dq\" (UID: \"73d464fc-2d1d-4a29-ae06-5d29503f6545\") " pod="openshift-marketplace/certified-operators-7h6dq" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.618995 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73d464fc-2d1d-4a29-ae06-5d29503f6545-utilities\") pod \"certified-operators-7h6dq\" (UID: \"73d464fc-2d1d-4a29-ae06-5d29503f6545\") " pod="openshift-marketplace/certified-operators-7h6dq" Feb 17 16:05:38 crc kubenswrapper[4874]: E0217 16:05:38.619274 4874 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:39.119262153 +0000 UTC m=+149.413650714 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l5nms" (UID: "bbe005ea-f697-473a-8578-91453c7a8331") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.621370 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.622359 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vxvz6" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.692890 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jvjxj" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.706405 4874 patch_prober.go:28] interesting pod/router-default-5444994796-pmtgc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 16:05:38 crc kubenswrapper[4874]: [-]has-synced failed: reason withheld Feb 17 16:05:38 crc kubenswrapper[4874]: [+]process-running ok Feb 17 16:05:38 crc kubenswrapper[4874]: healthz check failed Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.706462 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pmtgc" podUID="ac5f5138-7075-4b42-b2f7-7eb4b7c18fea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.721216 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.721446 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73d464fc-2d1d-4a29-ae06-5d29503f6545-catalog-content\") pod \"certified-operators-7h6dq\" (UID: \"73d464fc-2d1d-4a29-ae06-5d29503f6545\") " pod="openshift-marketplace/certified-operators-7h6dq" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.721485 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv4xd\" (UniqueName: \"kubernetes.io/projected/73d464fc-2d1d-4a29-ae06-5d29503f6545-kube-api-access-jv4xd\") pod 
\"certified-operators-7h6dq\" (UID: \"73d464fc-2d1d-4a29-ae06-5d29503f6545\") " pod="openshift-marketplace/certified-operators-7h6dq" Feb 17 16:05:38 crc kubenswrapper[4874]: E0217 16:05:38.721540 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:39.221503567 +0000 UTC m=+149.515892128 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.721642 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73d464fc-2d1d-4a29-ae06-5d29503f6545-utilities\") pod \"certified-operators-7h6dq\" (UID: \"73d464fc-2d1d-4a29-ae06-5d29503f6545\") " pod="openshift-marketplace/certified-operators-7h6dq" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.721719 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.722140 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/73d464fc-2d1d-4a29-ae06-5d29503f6545-catalog-content\") pod \"certified-operators-7h6dq\" (UID: \"73d464fc-2d1d-4a29-ae06-5d29503f6545\") " pod="openshift-marketplace/certified-operators-7h6dq" Feb 17 16:05:38 crc kubenswrapper[4874]: E0217 16:05:38.722151 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:39.222128873 +0000 UTC m=+149.516517434 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l5nms" (UID: "bbe005ea-f697-473a-8578-91453c7a8331") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.722513 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73d464fc-2d1d-4a29-ae06-5d29503f6545-utilities\") pod \"certified-operators-7h6dq\" (UID: \"73d464fc-2d1d-4a29-ae06-5d29503f6545\") " pod="openshift-marketplace/certified-operators-7h6dq" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.744602 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv4xd\" (UniqueName: \"kubernetes.io/projected/73d464fc-2d1d-4a29-ae06-5d29503f6545-kube-api-access-jv4xd\") pod \"certified-operators-7h6dq\" (UID: \"73d464fc-2d1d-4a29-ae06-5d29503f6545\") " pod="openshift-marketplace/certified-operators-7h6dq" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.781850 4874 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" 
path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.804022 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-n8bpc" event={"ID":"f43ef484-ca5b-4c21-8959-d79471c4b21d","Type":"ContainerStarted","Data":"361bf988dd93e0f6494a8dcd8015e26ef6f9fc07c84375f9b3f085ff7994ebe7"} Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.805017 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-n8bpc" event={"ID":"f43ef484-ca5b-4c21-8959-d79471c4b21d","Type":"ContainerStarted","Data":"c57336ab0fe4997442971e53cf005f0834e73bdc580af8ee9b5ab60c9dd9e116"} Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.822262 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kdr2g"] Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.822692 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.822852 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-n8bpc" podStartSLOduration=9.82282484 podStartE2EDuration="9.82282484s" podCreationTimestamp="2026-02-17 16:05:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:38.822226555 +0000 UTC m=+149.116615126" watchObservedRunningTime="2026-02-17 16:05:38.82282484 +0000 UTC m=+149.117213401" Feb 17 16:05:38 crc kubenswrapper[4874]: E0217 16:05:38.823315 4874 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:39.323293081 +0000 UTC m=+149.617681642 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.904111 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7h6dq" Feb 17 16:05:38 crc kubenswrapper[4874]: I0217 16:05:38.931278 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:38 crc kubenswrapper[4874]: E0217 16:05:38.931548 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:39.431535321 +0000 UTC m=+149.725923882 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l5nms" (UID: "bbe005ea-f697-473a-8578-91453c7a8331") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.032057 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:39 crc kubenswrapper[4874]: E0217 16:05:39.032415 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:39.532385632 +0000 UTC m=+149.826774193 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.032569 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:39 crc kubenswrapper[4874]: E0217 16:05:39.032858 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:39.532851233 +0000 UTC m=+149.827239794 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l5nms" (UID: "bbe005ea-f697-473a-8578-91453c7a8331") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.119411 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xtlqz"] Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.134442 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:39 crc kubenswrapper[4874]: E0217 16:05:39.134725 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:39.634710568 +0000 UTC m=+149.929099129 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:39 crc kubenswrapper[4874]: W0217 16:05:39.148463 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28be448a_a2cb_4731_85fa_ec01026d5763.slice/crio-2eacc1d7c9e3d11a678218e7067e1cda64da8dc57155b32c97e9c4b2992d6451 WatchSource:0}: Error finding container 2eacc1d7c9e3d11a678218e7067e1cda64da8dc57155b32c97e9c4b2992d6451: Status 404 returned error can't find the container with id 2eacc1d7c9e3d11a678218e7067e1cda64da8dc57155b32c97e9c4b2992d6451 Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.171201 4874 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-17T16:05:38.781884145Z","Handler":null,"Name":""} Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.176277 4874 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.176301 4874 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.184508 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-b59b9" Feb 17 16:05:39 crc kubenswrapper[4874]: W0217 16:05:39.185864 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-20cac9fa1ca5bf4e2ec81863a16516a4f2617ad4f1d115fadb61e2aab403ae67 WatchSource:0}: Error finding container 20cac9fa1ca5bf4e2ec81863a16516a4f2617ad4f1d115fadb61e2aab403ae67: Status 404 returned error can't find the container with id 20cac9fa1ca5bf4e2ec81863a16516a4f2617ad4f1d115fadb61e2aab403ae67 Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.235533 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfnp6\" (UniqueName: \"kubernetes.io/projected/3b2a3365-4901-45b8-b528-0961dad4cf66-kube-api-access-qfnp6\") pod \"3b2a3365-4901-45b8-b528-0961dad4cf66\" (UID: \"3b2a3365-4901-45b8-b528-0961dad4cf66\") " Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.243287 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b2a3365-4901-45b8-b528-0961dad4cf66-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3b2a3365-4901-45b8-b528-0961dad4cf66" (UID: "3b2a3365-4901-45b8-b528-0961dad4cf66"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.243564 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b2a3365-4901-45b8-b528-0961dad4cf66-kube-api-access-qfnp6" (OuterVolumeSpecName: "kube-api-access-qfnp6") pod "3b2a3365-4901-45b8-b528-0961dad4cf66" (UID: "3b2a3365-4901-45b8-b528-0961dad4cf66"). InnerVolumeSpecName "kube-api-access-qfnp6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.244526 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b2a3365-4901-45b8-b528-0961dad4cf66-secret-volume\") pod \"3b2a3365-4901-45b8-b528-0961dad4cf66\" (UID: \"3b2a3365-4901-45b8-b528-0961dad4cf66\") " Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.244593 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b2a3365-4901-45b8-b528-0961dad4cf66-config-volume\") pod \"3b2a3365-4901-45b8-b528-0961dad4cf66\" (UID: \"3b2a3365-4901-45b8-b528-0961dad4cf66\") " Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.244841 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.244982 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfnp6\" (UniqueName: \"kubernetes.io/projected/3b2a3365-4901-45b8-b528-0961dad4cf66-kube-api-access-qfnp6\") on node \"crc\" DevicePath \"\"" Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.244998 4874 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b2a3365-4901-45b8-b528-0961dad4cf66-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.245438 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b2a3365-4901-45b8-b528-0961dad4cf66-config-volume" (OuterVolumeSpecName: 
"config-volume") pod "3b2a3365-4901-45b8-b528-0961dad4cf66" (UID: "3b2a3365-4901-45b8-b528-0961dad4cf66"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.248856 4874 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.248897 4874 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.285055 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l5nms\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.324552 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jvjxj"] Feb 17 16:05:39 crc kubenswrapper[4874]: W0217 16:05:39.344830 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod777c2139_1b69_4526_b1a0_537c84c3fc02.slice/crio-7136024a37ab3f958f3e406d18dc728b073d29887f5822493d381c5673e2efa0 WatchSource:0}: Error finding container 
7136024a37ab3f958f3e406d18dc728b073d29887f5822493d381c5673e2efa0: Status 404 returned error can't find the container with id 7136024a37ab3f958f3e406d18dc728b073d29887f5822493d381c5673e2efa0 Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.345551 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.345868 4874 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b2a3365-4901-45b8-b528-0961dad4cf66-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.361246 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.444769 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7h6dq"] Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.525182 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.664957 4874 patch_prober.go:28] interesting pod/router-default-5444994796-pmtgc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 16:05:39 crc kubenswrapper[4874]: [-]has-synced failed: reason withheld Feb 17 16:05:39 crc kubenswrapper[4874]: [+]process-running ok Feb 17 16:05:39 crc kubenswrapper[4874]: healthz check failed Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.665260 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pmtgc" podUID="ac5f5138-7075-4b42-b2f7-7eb4b7c18fea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.722691 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-l5nms"] Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.807704 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-b59b9" event={"ID":"3b2a3365-4901-45b8-b528-0961dad4cf66","Type":"ContainerDied","Data":"1963d2f9926ce218055f4f121929fc8ae9315ee8ff98913d437bb3916ba80307"} Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.807741 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1963d2f9926ce218055f4f121929fc8ae9315ee8ff98913d437bb3916ba80307" Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.807797 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-b59b9" Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.810827 4874 generic.go:334] "Generic (PLEG): container finished" podID="73d464fc-2d1d-4a29-ae06-5d29503f6545" containerID="0c46307ba7cf9aefd9d65492a91d1e366c9f59e14d2661159a9b06d419ade38e" exitCode=0 Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.810903 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7h6dq" event={"ID":"73d464fc-2d1d-4a29-ae06-5d29503f6545","Type":"ContainerDied","Data":"0c46307ba7cf9aefd9d65492a91d1e366c9f59e14d2661159a9b06d419ade38e"} Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.810932 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7h6dq" event={"ID":"73d464fc-2d1d-4a29-ae06-5d29503f6545","Type":"ContainerStarted","Data":"988d02b1a42063f8924ca64cc1a84e55eae3b3301f6a29d4a5611521cc37ea09"} Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.812899 4874 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.814825 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" event={"ID":"bbe005ea-f697-473a-8578-91453c7a8331","Type":"ContainerStarted","Data":"16e8c070d81d29de4a961b9aecf071ac8819bbb9b53739a682a65f15828295b4"} Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.819592 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"6ee3fd4866a9ad7c767545bd114ba327aa0dab0e7901027943af9ec1c81277a7"} Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.819637 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"20cac9fa1ca5bf4e2ec81863a16516a4f2617ad4f1d115fadb61e2aab403ae67"} Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.819952 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.822020 4874 generic.go:334] "Generic (PLEG): container finished" podID="28be448a-a2cb-4731-85fa-ec01026d5763" containerID="c9603ba3e4a2c6990d1af368653c0ee89e0587c608b7e93821b53856fd4cf9e3" exitCode=0 Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.822064 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xtlqz" event={"ID":"28be448a-a2cb-4731-85fa-ec01026d5763","Type":"ContainerDied","Data":"c9603ba3e4a2c6990d1af368653c0ee89e0587c608b7e93821b53856fd4cf9e3"} Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.822099 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xtlqz" event={"ID":"28be448a-a2cb-4731-85fa-ec01026d5763","Type":"ContainerStarted","Data":"2eacc1d7c9e3d11a678218e7067e1cda64da8dc57155b32c97e9c4b2992d6451"} Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.824009 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"7bd28b829054a5a10fcda7afa3fb42aa360967f85293c73af476f5c53bb8e1dc"} Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.824031 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"23f080351088d24d9bd3c09faad090bae39021375a9f55d797569fafa97cca2d"} Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.826437 4874 generic.go:334] "Generic (PLEG): container finished" podID="777c2139-1b69-4526-b1a0-537c84c3fc02" containerID="95eddb84372b96491cc79fa7ed4a4c5b76cb2d583e0f0f833d75af2b42731959" exitCode=0 Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.826521 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jvjxj" event={"ID":"777c2139-1b69-4526-b1a0-537c84c3fc02","Type":"ContainerDied","Data":"95eddb84372b96491cc79fa7ed4a4c5b76cb2d583e0f0f833d75af2b42731959"} Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.826562 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jvjxj" event={"ID":"777c2139-1b69-4526-b1a0-537c84c3fc02","Type":"ContainerStarted","Data":"7136024a37ab3f958f3e406d18dc728b073d29887f5822493d381c5673e2efa0"} Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.834593 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f46f50837ed5abe32d2653d7f6120813dfe310f93d2aa5cc16e54639ba43b5f9"} Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.834631 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"b157d576a75c8e4df583852b305c61bd3864ed24a7bc039ac2046c00c02f804b"} Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.836726 4874 generic.go:334] "Generic (PLEG): container finished" podID="52637c6d-7cd3-4761-b70d-4e07e68a6c5d" containerID="f64930d4beeef1982ac81c1558c1cde325beeaf272a4b69bdc2e49072b553bbf" exitCode=0 
Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.837525 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kdr2g" event={"ID":"52637c6d-7cd3-4761-b70d-4e07e68a6c5d","Type":"ContainerDied","Data":"f64930d4beeef1982ac81c1558c1cde325beeaf272a4b69bdc2e49072b553bbf"} Feb 17 16:05:39 crc kubenswrapper[4874]: I0217 16:05:39.837549 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kdr2g" event={"ID":"52637c6d-7cd3-4761-b70d-4e07e68a6c5d","Type":"ContainerStarted","Data":"2c23c35d40e4242af8cea880ea1b0e3a6af8722d659934702e9a0deb157c233d"} Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.076507 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v8fn8"] Feb 17 16:05:40 crc kubenswrapper[4874]: E0217 16:05:40.077000 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b2a3365-4901-45b8-b528-0961dad4cf66" containerName="collect-profiles" Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.077013 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b2a3365-4901-45b8-b528-0961dad4cf66" containerName="collect-profiles" Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.077110 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b2a3365-4901-45b8-b528-0961dad4cf66" containerName="collect-profiles" Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.077773 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v8fn8" Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.079806 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.094339 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v8fn8"] Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.158316 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgkq2\" (UniqueName: \"kubernetes.io/projected/cfc01af4-cec4-4d66-b673-ac10e1797059-kube-api-access-rgkq2\") pod \"redhat-marketplace-v8fn8\" (UID: \"cfc01af4-cec4-4d66-b673-ac10e1797059\") " pod="openshift-marketplace/redhat-marketplace-v8fn8" Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.158662 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfc01af4-cec4-4d66-b673-ac10e1797059-catalog-content\") pod \"redhat-marketplace-v8fn8\" (UID: \"cfc01af4-cec4-4d66-b673-ac10e1797059\") " pod="openshift-marketplace/redhat-marketplace-v8fn8" Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.158833 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfc01af4-cec4-4d66-b673-ac10e1797059-utilities\") pod \"redhat-marketplace-v8fn8\" (UID: \"cfc01af4-cec4-4d66-b673-ac10e1797059\") " pod="openshift-marketplace/redhat-marketplace-v8fn8" Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.260157 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfc01af4-cec4-4d66-b673-ac10e1797059-utilities\") pod \"redhat-marketplace-v8fn8\" (UID: 
\"cfc01af4-cec4-4d66-b673-ac10e1797059\") " pod="openshift-marketplace/redhat-marketplace-v8fn8" Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.260448 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgkq2\" (UniqueName: \"kubernetes.io/projected/cfc01af4-cec4-4d66-b673-ac10e1797059-kube-api-access-rgkq2\") pod \"redhat-marketplace-v8fn8\" (UID: \"cfc01af4-cec4-4d66-b673-ac10e1797059\") " pod="openshift-marketplace/redhat-marketplace-v8fn8" Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.260957 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfc01af4-cec4-4d66-b673-ac10e1797059-catalog-content\") pod \"redhat-marketplace-v8fn8\" (UID: \"cfc01af4-cec4-4d66-b673-ac10e1797059\") " pod="openshift-marketplace/redhat-marketplace-v8fn8" Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.260723 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfc01af4-cec4-4d66-b673-ac10e1797059-utilities\") pod \"redhat-marketplace-v8fn8\" (UID: \"cfc01af4-cec4-4d66-b673-ac10e1797059\") " pod="openshift-marketplace/redhat-marketplace-v8fn8" Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.261470 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfc01af4-cec4-4d66-b673-ac10e1797059-catalog-content\") pod \"redhat-marketplace-v8fn8\" (UID: \"cfc01af4-cec4-4d66-b673-ac10e1797059\") " pod="openshift-marketplace/redhat-marketplace-v8fn8" Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.286186 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgkq2\" (UniqueName: \"kubernetes.io/projected/cfc01af4-cec4-4d66-b673-ac10e1797059-kube-api-access-rgkq2\") pod \"redhat-marketplace-v8fn8\" (UID: 
\"cfc01af4-cec4-4d66-b673-ac10e1797059\") " pod="openshift-marketplace/redhat-marketplace-v8fn8" Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.388858 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v8fn8" Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.392745 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.393456 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.395269 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.401444 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.405575 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.463652 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.463696 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a90813fb-4837-4807-83a2-c59e59532597-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a90813fb-4837-4807-83a2-c59e59532597\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.463766 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a90813fb-4837-4807-83a2-c59e59532597-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a90813fb-4837-4807-83a2-c59e59532597\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.480418 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sth6c"] Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.482792 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sth6c" Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.492055 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sth6c"] Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.566090 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52ea909f-1a30-4a49-9b48-d6a6135a4598-utilities\") pod \"redhat-marketplace-sth6c\" (UID: \"52ea909f-1a30-4a49-9b48-d6a6135a4598\") " pod="openshift-marketplace/redhat-marketplace-sth6c" Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.566368 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a90813fb-4837-4807-83a2-c59e59532597-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a90813fb-4837-4807-83a2-c59e59532597\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.566439 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52ea909f-1a30-4a49-9b48-d6a6135a4598-catalog-content\") pod \"redhat-marketplace-sth6c\" (UID: 
\"52ea909f-1a30-4a49-9b48-d6a6135a4598\") " pod="openshift-marketplace/redhat-marketplace-sth6c" Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.566639 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znn86\" (UniqueName: \"kubernetes.io/projected/52ea909f-1a30-4a49-9b48-d6a6135a4598-kube-api-access-znn86\") pod \"redhat-marketplace-sth6c\" (UID: \"52ea909f-1a30-4a49-9b48-d6a6135a4598\") " pod="openshift-marketplace/redhat-marketplace-sth6c" Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.566657 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a90813fb-4837-4807-83a2-c59e59532597-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a90813fb-4837-4807-83a2-c59e59532597\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.566716 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a90813fb-4837-4807-83a2-c59e59532597-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a90813fb-4837-4807-83a2-c59e59532597\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.583757 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a90813fb-4837-4807-83a2-c59e59532597-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a90813fb-4837-4807-83a2-c59e59532597\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.665431 4874 patch_prober.go:28] interesting pod/router-default-5444994796-pmtgc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Feb 17 16:05:40 crc kubenswrapper[4874]: [-]has-synced failed: reason withheld Feb 17 16:05:40 crc kubenswrapper[4874]: [+]process-running ok Feb 17 16:05:40 crc kubenswrapper[4874]: healthz check failed Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.665525 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pmtgc" podUID="ac5f5138-7075-4b42-b2f7-7eb4b7c18fea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.669619 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52ea909f-1a30-4a49-9b48-d6a6135a4598-utilities\") pod \"redhat-marketplace-sth6c\" (UID: \"52ea909f-1a30-4a49-9b48-d6a6135a4598\") " pod="openshift-marketplace/redhat-marketplace-sth6c" Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.669783 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52ea909f-1a30-4a49-9b48-d6a6135a4598-catalog-content\") pod \"redhat-marketplace-sth6c\" (UID: \"52ea909f-1a30-4a49-9b48-d6a6135a4598\") " pod="openshift-marketplace/redhat-marketplace-sth6c" Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.669807 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znn86\" (UniqueName: \"kubernetes.io/projected/52ea909f-1a30-4a49-9b48-d6a6135a4598-kube-api-access-znn86\") pod \"redhat-marketplace-sth6c\" (UID: \"52ea909f-1a30-4a49-9b48-d6a6135a4598\") " pod="openshift-marketplace/redhat-marketplace-sth6c" Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.671163 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52ea909f-1a30-4a49-9b48-d6a6135a4598-utilities\") pod \"redhat-marketplace-sth6c\" (UID: 
\"52ea909f-1a30-4a49-9b48-d6a6135a4598\") " pod="openshift-marketplace/redhat-marketplace-sth6c" Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.673449 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52ea909f-1a30-4a49-9b48-d6a6135a4598-catalog-content\") pod \"redhat-marketplace-sth6c\" (UID: \"52ea909f-1a30-4a49-9b48-d6a6135a4598\") " pod="openshift-marketplace/redhat-marketplace-sth6c" Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.693831 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znn86\" (UniqueName: \"kubernetes.io/projected/52ea909f-1a30-4a49-9b48-d6a6135a4598-kube-api-access-znn86\") pod \"redhat-marketplace-sth6c\" (UID: \"52ea909f-1a30-4a49-9b48-d6a6135a4598\") " pod="openshift-marketplace/redhat-marketplace-sth6c" Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.743893 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.831028 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sth6c" Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.865858 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" event={"ID":"bbe005ea-f697-473a-8578-91453c7a8331","Type":"ContainerStarted","Data":"a3732f7d252e894677a57aa3aa829c337e1801d023eec01db21f17c45cae52b1"} Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.866025 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.898908 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" podStartSLOduration=130.898890626 podStartE2EDuration="2m10.898890626s" podCreationTimestamp="2026-02-17 16:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:40.893874254 +0000 UTC m=+151.188262815" watchObservedRunningTime="2026-02-17 16:05:40.898890626 +0000 UTC m=+151.193279187" Feb 17 16:05:40 crc kubenswrapper[4874]: I0217 16:05:40.899997 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v8fn8"] Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.098168 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jqfd6"] Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.099639 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jqfd6" Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.101821 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.119058 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jqfd6"] Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.130395 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.203148 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19397da4-8b1f-4ec8-969c-2856e64112fc-utilities\") pod \"redhat-operators-jqfd6\" (UID: \"19397da4-8b1f-4ec8-969c-2856e64112fc\") " pod="openshift-marketplace/redhat-operators-jqfd6" Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.203211 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19397da4-8b1f-4ec8-969c-2856e64112fc-catalog-content\") pod \"redhat-operators-jqfd6\" (UID: \"19397da4-8b1f-4ec8-969c-2856e64112fc\") " pod="openshift-marketplace/redhat-operators-jqfd6" Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.203372 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8glj\" (UniqueName: \"kubernetes.io/projected/19397da4-8b1f-4ec8-969c-2856e64112fc-kube-api-access-z8glj\") pod \"redhat-operators-jqfd6\" (UID: \"19397da4-8b1f-4ec8-969c-2856e64112fc\") " pod="openshift-marketplace/redhat-operators-jqfd6" Feb 17 16:05:41 crc kubenswrapper[4874]: W0217 16:05:41.227966 4874 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-poda90813fb_4837_4807_83a2_c59e59532597.slice/crio-f7b821eef39a1284f53c7d63b939cffe9032da2826a2a1f1d1b0ff43263e5da8 WatchSource:0}: Error finding container f7b821eef39a1284f53c7d63b939cffe9032da2826a2a1f1d1b0ff43263e5da8: Status 404 returned error can't find the container with id f7b821eef39a1284f53c7d63b939cffe9032da2826a2a1f1d1b0ff43263e5da8 Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.305437 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8glj\" (UniqueName: \"kubernetes.io/projected/19397da4-8b1f-4ec8-969c-2856e64112fc-kube-api-access-z8glj\") pod \"redhat-operators-jqfd6\" (UID: \"19397da4-8b1f-4ec8-969c-2856e64112fc\") " pod="openshift-marketplace/redhat-operators-jqfd6" Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.305839 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19397da4-8b1f-4ec8-969c-2856e64112fc-utilities\") pod \"redhat-operators-jqfd6\" (UID: \"19397da4-8b1f-4ec8-969c-2856e64112fc\") " pod="openshift-marketplace/redhat-operators-jqfd6" Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.305897 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19397da4-8b1f-4ec8-969c-2856e64112fc-catalog-content\") pod \"redhat-operators-jqfd6\" (UID: \"19397da4-8b1f-4ec8-969c-2856e64112fc\") " pod="openshift-marketplace/redhat-operators-jqfd6" Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.306341 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19397da4-8b1f-4ec8-969c-2856e64112fc-utilities\") pod \"redhat-operators-jqfd6\" (UID: \"19397da4-8b1f-4ec8-969c-2856e64112fc\") " pod="openshift-marketplace/redhat-operators-jqfd6" Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.306361 4874 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19397da4-8b1f-4ec8-969c-2856e64112fc-catalog-content\") pod \"redhat-operators-jqfd6\" (UID: \"19397da4-8b1f-4ec8-969c-2856e64112fc\") " pod="openshift-marketplace/redhat-operators-jqfd6" Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.324762 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8glj\" (UniqueName: \"kubernetes.io/projected/19397da4-8b1f-4ec8-969c-2856e64112fc-kube-api-access-z8glj\") pod \"redhat-operators-jqfd6\" (UID: \"19397da4-8b1f-4ec8-969c-2856e64112fc\") " pod="openshift-marketplace/redhat-operators-jqfd6" Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.375470 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sth6c"] Feb 17 16:05:41 crc kubenswrapper[4874]: W0217 16:05:41.387458 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52ea909f_1a30_4a49_9b48_d6a6135a4598.slice/crio-3948dd621b20f6b5f483650b7f990cf99a360e0be4182fe92fd892d4e8fbfa21 WatchSource:0}: Error finding container 3948dd621b20f6b5f483650b7f990cf99a360e0be4182fe92fd892d4e8fbfa21: Status 404 returned error can't find the container with id 3948dd621b20f6b5f483650b7f990cf99a360e0be4182fe92fd892d4e8fbfa21 Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.450136 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jqfd6" Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.478737 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bgnxq"] Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.479825 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bgnxq" Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.486570 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bgnxq"] Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.615469 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/065c00cb-7ec7-428e-a10a-aaf6335d63e1-catalog-content\") pod \"redhat-operators-bgnxq\" (UID: \"065c00cb-7ec7-428e-a10a-aaf6335d63e1\") " pod="openshift-marketplace/redhat-operators-bgnxq" Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.615744 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/065c00cb-7ec7-428e-a10a-aaf6335d63e1-utilities\") pod \"redhat-operators-bgnxq\" (UID: \"065c00cb-7ec7-428e-a10a-aaf6335d63e1\") " pod="openshift-marketplace/redhat-operators-bgnxq" Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.615760 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zgdq\" (UniqueName: \"kubernetes.io/projected/065c00cb-7ec7-428e-a10a-aaf6335d63e1-kube-api-access-7zgdq\") pod \"redhat-operators-bgnxq\" (UID: \"065c00cb-7ec7-428e-a10a-aaf6335d63e1\") " pod="openshift-marketplace/redhat-operators-bgnxq" Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.653191 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jqfd6"] Feb 17 16:05:41 crc kubenswrapper[4874]: W0217 16:05:41.660426 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19397da4_8b1f_4ec8_969c_2856e64112fc.slice/crio-a7a9883ca17a3726d68d3d6125b805d52ff0d5a5c046f96000ec05186d3e0d94 WatchSource:0}: Error finding container 
a7a9883ca17a3726d68d3d6125b805d52ff0d5a5c046f96000ec05186d3e0d94: Status 404 returned error can't find the container with id a7a9883ca17a3726d68d3d6125b805d52ff0d5a5c046f96000ec05186d3e0d94 Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.664586 4874 patch_prober.go:28] interesting pod/router-default-5444994796-pmtgc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 16:05:41 crc kubenswrapper[4874]: [-]has-synced failed: reason withheld Feb 17 16:05:41 crc kubenswrapper[4874]: [+]process-running ok Feb 17 16:05:41 crc kubenswrapper[4874]: healthz check failed Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.664622 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pmtgc" podUID="ac5f5138-7075-4b42-b2f7-7eb4b7c18fea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.669379 4874 patch_prober.go:28] interesting pod/downloads-7954f5f757-fchf8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body= Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.669427 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fchf8" podUID="43b2c0f0-9f47-4e75-ac3c-ee4d1f2e1c81" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.669670 4874 patch_prober.go:28] interesting pod/downloads-7954f5f757-fchf8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: 
connect: connection refused" start-of-body= Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.669693 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fchf8" podUID="43b2c0f0-9f47-4e75-ac3c-ee4d1f2e1c81" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.717666 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/065c00cb-7ec7-428e-a10a-aaf6335d63e1-catalog-content\") pod \"redhat-operators-bgnxq\" (UID: \"065c00cb-7ec7-428e-a10a-aaf6335d63e1\") " pod="openshift-marketplace/redhat-operators-bgnxq" Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.717715 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/065c00cb-7ec7-428e-a10a-aaf6335d63e1-utilities\") pod \"redhat-operators-bgnxq\" (UID: \"065c00cb-7ec7-428e-a10a-aaf6335d63e1\") " pod="openshift-marketplace/redhat-operators-bgnxq" Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.717732 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zgdq\" (UniqueName: \"kubernetes.io/projected/065c00cb-7ec7-428e-a10a-aaf6335d63e1-kube-api-access-7zgdq\") pod \"redhat-operators-bgnxq\" (UID: \"065c00cb-7ec7-428e-a10a-aaf6335d63e1\") " pod="openshift-marketplace/redhat-operators-bgnxq" Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.718515 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/065c00cb-7ec7-428e-a10a-aaf6335d63e1-catalog-content\") pod \"redhat-operators-bgnxq\" (UID: \"065c00cb-7ec7-428e-a10a-aaf6335d63e1\") " pod="openshift-marketplace/redhat-operators-bgnxq" Feb 17 16:05:41 crc 
kubenswrapper[4874]: I0217 16:05:41.719913 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/065c00cb-7ec7-428e-a10a-aaf6335d63e1-utilities\") pod \"redhat-operators-bgnxq\" (UID: \"065c00cb-7ec7-428e-a10a-aaf6335d63e1\") " pod="openshift-marketplace/redhat-operators-bgnxq" Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.737667 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zgdq\" (UniqueName: \"kubernetes.io/projected/065c00cb-7ec7-428e-a10a-aaf6335d63e1-kube-api-access-7zgdq\") pod \"redhat-operators-bgnxq\" (UID: \"065c00cb-7ec7-428e-a10a-aaf6335d63e1\") " pod="openshift-marketplace/redhat-operators-bgnxq" Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.764892 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-s464s" Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.769518 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-s464s" Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.784447 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-579zx" Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.791858 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-579zx" Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.822434 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bgnxq" Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.906144 4874 generic.go:334] "Generic (PLEG): container finished" podID="cfc01af4-cec4-4d66-b673-ac10e1797059" containerID="9e3e7317a8c0865a88b2978f6d72be1278e0604a1330bb6d994c41fd881517de" exitCode=0 Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.906379 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8fn8" event={"ID":"cfc01af4-cec4-4d66-b673-ac10e1797059","Type":"ContainerDied","Data":"9e3e7317a8c0865a88b2978f6d72be1278e0604a1330bb6d994c41fd881517de"} Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.906404 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8fn8" event={"ID":"cfc01af4-cec4-4d66-b673-ac10e1797059","Type":"ContainerStarted","Data":"906651edcdef0a0a3de0ee8d2b27872818537fecbe7e864e8e0e95ae201a8a23"} Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.923300 4874 generic.go:334] "Generic (PLEG): container finished" podID="52ea909f-1a30-4a49-9b48-d6a6135a4598" containerID="98cadca8fafd9d2af88a577e6fd81c1f24421c7c291648718c16967389ec9325" exitCode=0 Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.923362 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sth6c" event={"ID":"52ea909f-1a30-4a49-9b48-d6a6135a4598","Type":"ContainerDied","Data":"98cadca8fafd9d2af88a577e6fd81c1f24421c7c291648718c16967389ec9325"} Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.923386 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sth6c" event={"ID":"52ea909f-1a30-4a49-9b48-d6a6135a4598","Type":"ContainerStarted","Data":"3948dd621b20f6b5f483650b7f990cf99a360e0be4182fe92fd892d4e8fbfa21"} Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.928385 4874 generic.go:334] "Generic (PLEG): container finished" 
podID="19397da4-8b1f-4ec8-969c-2856e64112fc" containerID="611bb3726f9ac8d4d0996006783e1c03244e9b6c3e42e579bbc8646e3bf29f27" exitCode=0 Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.928437 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jqfd6" event={"ID":"19397da4-8b1f-4ec8-969c-2856e64112fc","Type":"ContainerDied","Data":"611bb3726f9ac8d4d0996006783e1c03244e9b6c3e42e579bbc8646e3bf29f27"} Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.928463 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jqfd6" event={"ID":"19397da4-8b1f-4ec8-969c-2856e64112fc","Type":"ContainerStarted","Data":"a7a9883ca17a3726d68d3d6125b805d52ff0d5a5c046f96000ec05186d3e0d94"} Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.942791 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a90813fb-4837-4807-83a2-c59e59532597","Type":"ContainerStarted","Data":"42408ac7507cbc1807a4541f42c782d788d3778f96983c490d0fce3f4d14d87c"} Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.942848 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a90813fb-4837-4807-83a2-c59e59532597","Type":"ContainerStarted","Data":"f7b821eef39a1284f53c7d63b939cffe9032da2826a2a1f1d1b0ff43263e5da8"} Feb 17 16:05:41 crc kubenswrapper[4874]: I0217 16:05:41.996489 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=1.9964744859999999 podStartE2EDuration="1.996474486s" podCreationTimestamp="2026-02-17 16:05:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:41.995100703 +0000 UTC m=+152.289489264" watchObservedRunningTime="2026-02-17 
16:05:41.996474486 +0000 UTC m=+152.290863047" Feb 17 16:05:42 crc kubenswrapper[4874]: I0217 16:05:42.105403 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-6wpw5" Feb 17 16:05:42 crc kubenswrapper[4874]: I0217 16:05:42.105460 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-6wpw5" Feb 17 16:05:42 crc kubenswrapper[4874]: I0217 16:05:42.115786 4874 patch_prober.go:28] interesting pod/console-f9d7485db-6wpw5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Feb 17 16:05:42 crc kubenswrapper[4874]: I0217 16:05:42.115860 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-6wpw5" podUID="cfccd2a3-037d-4b17-a269-952847ad533a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" Feb 17 16:05:42 crc kubenswrapper[4874]: I0217 16:05:42.128158 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bgnxq"] Feb 17 16:05:42 crc kubenswrapper[4874]: W0217 16:05:42.159554 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod065c00cb_7ec7_428e_a10a_aaf6335d63e1.slice/crio-64c03702b0918678f163a9c1ca9d6c6cbc0c0aac33498c83a92689f777cd7e9b WatchSource:0}: Error finding container 64c03702b0918678f163a9c1ca9d6c6cbc0c0aac33498c83a92689f777cd7e9b: Status 404 returned error can't find the container with id 64c03702b0918678f163a9c1ca9d6c6cbc0c0aac33498c83a92689f777cd7e9b Feb 17 16:05:42 crc kubenswrapper[4874]: I0217 16:05:42.385977 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/marketplace-operator-79b997595-2w9mt" Feb 17 16:05:42 crc kubenswrapper[4874]: I0217 16:05:42.606914 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 17 16:05:42 crc kubenswrapper[4874]: I0217 16:05:42.607673 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 16:05:42 crc kubenswrapper[4874]: I0217 16:05:42.609884 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 17 16:05:42 crc kubenswrapper[4874]: I0217 16:05:42.610741 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 17 16:05:42 crc kubenswrapper[4874]: I0217 16:05:42.612340 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 17 16:05:42 crc kubenswrapper[4874]: I0217 16:05:42.662724 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-pmtgc" Feb 17 16:05:42 crc kubenswrapper[4874]: I0217 16:05:42.666135 4874 patch_prober.go:28] interesting pod/router-default-5444994796-pmtgc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 16:05:42 crc kubenswrapper[4874]: [-]has-synced failed: reason withheld Feb 17 16:05:42 crc kubenswrapper[4874]: [+]process-running ok Feb 17 16:05:42 crc kubenswrapper[4874]: healthz check failed Feb 17 16:05:42 crc kubenswrapper[4874]: I0217 16:05:42.666197 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pmtgc" podUID="ac5f5138-7075-4b42-b2f7-7eb4b7c18fea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 16:05:42 
crc kubenswrapper[4874]: I0217 16:05:42.742737 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cbc255b7-7772-434d-897d-c21948fc01c4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"cbc255b7-7772-434d-897d-c21948fc01c4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 16:05:42 crc kubenswrapper[4874]: I0217 16:05:42.742804 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbc255b7-7772-434d-897d-c21948fc01c4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"cbc255b7-7772-434d-897d-c21948fc01c4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 16:05:42 crc kubenswrapper[4874]: I0217 16:05:42.843673 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cbc255b7-7772-434d-897d-c21948fc01c4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"cbc255b7-7772-434d-897d-c21948fc01c4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 16:05:42 crc kubenswrapper[4874]: I0217 16:05:42.843741 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbc255b7-7772-434d-897d-c21948fc01c4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"cbc255b7-7772-434d-897d-c21948fc01c4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 16:05:42 crc kubenswrapper[4874]: I0217 16:05:42.843847 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cbc255b7-7772-434d-897d-c21948fc01c4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"cbc255b7-7772-434d-897d-c21948fc01c4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 16:05:42 crc kubenswrapper[4874]: I0217 
16:05:42.860620 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbc255b7-7772-434d-897d-c21948fc01c4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"cbc255b7-7772-434d-897d-c21948fc01c4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 16:05:42 crc kubenswrapper[4874]: I0217 16:05:42.932899 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 16:05:42 crc kubenswrapper[4874]: I0217 16:05:42.956776 4874 generic.go:334] "Generic (PLEG): container finished" podID="a90813fb-4837-4807-83a2-c59e59532597" containerID="42408ac7507cbc1807a4541f42c782d788d3778f96983c490d0fce3f4d14d87c" exitCode=0 Feb 17 16:05:42 crc kubenswrapper[4874]: I0217 16:05:42.956824 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a90813fb-4837-4807-83a2-c59e59532597","Type":"ContainerDied","Data":"42408ac7507cbc1807a4541f42c782d788d3778f96983c490d0fce3f4d14d87c"} Feb 17 16:05:42 crc kubenswrapper[4874]: I0217 16:05:42.961182 4874 generic.go:334] "Generic (PLEG): container finished" podID="065c00cb-7ec7-428e-a10a-aaf6335d63e1" containerID="686cfb44a33b1fc42e6e76fcd7515a48754a3fb03b9340da0efe9a8d1afbfd16" exitCode=0 Feb 17 16:05:42 crc kubenswrapper[4874]: I0217 16:05:42.961215 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bgnxq" event={"ID":"065c00cb-7ec7-428e-a10a-aaf6335d63e1","Type":"ContainerDied","Data":"686cfb44a33b1fc42e6e76fcd7515a48754a3fb03b9340da0efe9a8d1afbfd16"} Feb 17 16:05:42 crc kubenswrapper[4874]: I0217 16:05:42.961234 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bgnxq" 
event={"ID":"065c00cb-7ec7-428e-a10a-aaf6335d63e1","Type":"ContainerStarted","Data":"64c03702b0918678f163a9c1ca9d6c6cbc0c0aac33498c83a92689f777cd7e9b"} Feb 17 16:05:43 crc kubenswrapper[4874]: I0217 16:05:43.153948 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 17 16:05:43 crc kubenswrapper[4874]: W0217 16:05:43.170324 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podcbc255b7_7772_434d_897d_c21948fc01c4.slice/crio-c03a839abc098619785a1fd536b3699ff746c5d945fa6ba4677a3f38106735ee WatchSource:0}: Error finding container c03a839abc098619785a1fd536b3699ff746c5d945fa6ba4677a3f38106735ee: Status 404 returned error can't find the container with id c03a839abc098619785a1fd536b3699ff746c5d945fa6ba4677a3f38106735ee Feb 17 16:05:43 crc kubenswrapper[4874]: I0217 16:05:43.665798 4874 patch_prober.go:28] interesting pod/router-default-5444994796-pmtgc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 16:05:43 crc kubenswrapper[4874]: [-]has-synced failed: reason withheld Feb 17 16:05:43 crc kubenswrapper[4874]: [+]process-running ok Feb 17 16:05:43 crc kubenswrapper[4874]: healthz check failed Feb 17 16:05:43 crc kubenswrapper[4874]: I0217 16:05:43.666120 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pmtgc" podUID="ac5f5138-7075-4b42-b2f7-7eb4b7c18fea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 16:05:43 crc kubenswrapper[4874]: I0217 16:05:43.971701 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cbc255b7-7772-434d-897d-c21948fc01c4","Type":"ContainerStarted","Data":"c03a839abc098619785a1fd536b3699ff746c5d945fa6ba4677a3f38106735ee"} Feb 17 16:05:44 
crc kubenswrapper[4874]: I0217 16:05:44.312008 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 16:05:44 crc kubenswrapper[4874]: I0217 16:05:44.373419 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a90813fb-4837-4807-83a2-c59e59532597-kube-api-access\") pod \"a90813fb-4837-4807-83a2-c59e59532597\" (UID: \"a90813fb-4837-4807-83a2-c59e59532597\") " Feb 17 16:05:44 crc kubenswrapper[4874]: I0217 16:05:44.373463 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a90813fb-4837-4807-83a2-c59e59532597-kubelet-dir\") pod \"a90813fb-4837-4807-83a2-c59e59532597\" (UID: \"a90813fb-4837-4807-83a2-c59e59532597\") " Feb 17 16:05:44 crc kubenswrapper[4874]: I0217 16:05:44.373752 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a90813fb-4837-4807-83a2-c59e59532597-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a90813fb-4837-4807-83a2-c59e59532597" (UID: "a90813fb-4837-4807-83a2-c59e59532597"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:05:44 crc kubenswrapper[4874]: I0217 16:05:44.379249 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a90813fb-4837-4807-83a2-c59e59532597-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a90813fb-4837-4807-83a2-c59e59532597" (UID: "a90813fb-4837-4807-83a2-c59e59532597"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:05:44 crc kubenswrapper[4874]: I0217 16:05:44.474760 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a90813fb-4837-4807-83a2-c59e59532597-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 16:05:44 crc kubenswrapper[4874]: I0217 16:05:44.474786 4874 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a90813fb-4837-4807-83a2-c59e59532597-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 16:05:44 crc kubenswrapper[4874]: I0217 16:05:44.665521 4874 patch_prober.go:28] interesting pod/router-default-5444994796-pmtgc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 16:05:44 crc kubenswrapper[4874]: [+]has-synced ok Feb 17 16:05:44 crc kubenswrapper[4874]: [+]process-running ok Feb 17 16:05:44 crc kubenswrapper[4874]: healthz check failed Feb 17 16:05:44 crc kubenswrapper[4874]: I0217 16:05:44.665580 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pmtgc" podUID="ac5f5138-7075-4b42-b2f7-7eb4b7c18fea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 16:05:44 crc kubenswrapper[4874]: I0217 16:05:44.982707 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 16:05:44 crc kubenswrapper[4874]: I0217 16:05:44.983436 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a90813fb-4837-4807-83a2-c59e59532597","Type":"ContainerDied","Data":"f7b821eef39a1284f53c7d63b939cffe9032da2826a2a1f1d1b0ff43263e5da8"} Feb 17 16:05:44 crc kubenswrapper[4874]: I0217 16:05:44.983488 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7b821eef39a1284f53c7d63b939cffe9032da2826a2a1f1d1b0ff43263e5da8" Feb 17 16:05:44 crc kubenswrapper[4874]: I0217 16:05:44.991820 4874 generic.go:334] "Generic (PLEG): container finished" podID="cbc255b7-7772-434d-897d-c21948fc01c4" containerID="b13441a0c45576dd5657f7ded7f3ab3c83314b01f2c49ad6790220d62303bf5b" exitCode=0 Feb 17 16:05:44 crc kubenswrapper[4874]: I0217 16:05:44.991861 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cbc255b7-7772-434d-897d-c21948fc01c4","Type":"ContainerDied","Data":"b13441a0c45576dd5657f7ded7f3ab3c83314b01f2c49ad6790220d62303bf5b"} Feb 17 16:05:45 crc kubenswrapper[4874]: I0217 16:05:45.667265 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-pmtgc" Feb 17 16:05:45 crc kubenswrapper[4874]: I0217 16:05:45.679127 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-pmtgc" Feb 17 16:05:47 crc kubenswrapper[4874]: I0217 16:05:47.449672 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-jt8g9" Feb 17 16:05:51 crc kubenswrapper[4874]: I0217 16:05:51.672976 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-fchf8" Feb 17 16:05:52 crc kubenswrapper[4874]: 
I0217 16:05:52.109423 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-6wpw5" Feb 17 16:05:52 crc kubenswrapper[4874]: I0217 16:05:52.114196 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-6wpw5" Feb 17 16:05:52 crc kubenswrapper[4874]: I0217 16:05:52.853769 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/672da34f-1e37-4e2c-b467-b5ee40c4a31b-metrics-certs\") pod \"network-metrics-daemon-pm48m\" (UID: \"672da34f-1e37-4e2c-b467-b5ee40c4a31b\") " pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:05:52 crc kubenswrapper[4874]: I0217 16:05:52.864306 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/672da34f-1e37-4e2c-b467-b5ee40c4a31b-metrics-certs\") pod \"network-metrics-daemon-pm48m\" (UID: \"672da34f-1e37-4e2c-b467-b5ee40c4a31b\") " pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:05:53 crc kubenswrapper[4874]: I0217 16:05:53.083702 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pm48m" Feb 17 16:05:53 crc kubenswrapper[4874]: I0217 16:05:53.766303 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 16:05:53 crc kubenswrapper[4874]: I0217 16:05:53.866275 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cbc255b7-7772-434d-897d-c21948fc01c4-kubelet-dir\") pod \"cbc255b7-7772-434d-897d-c21948fc01c4\" (UID: \"cbc255b7-7772-434d-897d-c21948fc01c4\") " Feb 17 16:05:53 crc kubenswrapper[4874]: I0217 16:05:53.866468 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbc255b7-7772-434d-897d-c21948fc01c4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cbc255b7-7772-434d-897d-c21948fc01c4" (UID: "cbc255b7-7772-434d-897d-c21948fc01c4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:05:53 crc kubenswrapper[4874]: I0217 16:05:53.866645 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbc255b7-7772-434d-897d-c21948fc01c4-kube-api-access\") pod \"cbc255b7-7772-434d-897d-c21948fc01c4\" (UID: \"cbc255b7-7772-434d-897d-c21948fc01c4\") " Feb 17 16:05:53 crc kubenswrapper[4874]: I0217 16:05:53.866953 4874 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cbc255b7-7772-434d-897d-c21948fc01c4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 16:05:53 crc kubenswrapper[4874]: I0217 16:05:53.872488 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbc255b7-7772-434d-897d-c21948fc01c4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cbc255b7-7772-434d-897d-c21948fc01c4" (UID: "cbc255b7-7772-434d-897d-c21948fc01c4"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:05:53 crc kubenswrapper[4874]: I0217 16:05:53.968678 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbc255b7-7772-434d-897d-c21948fc01c4-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 16:05:54 crc kubenswrapper[4874]: I0217 16:05:54.068097 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cbc255b7-7772-434d-897d-c21948fc01c4","Type":"ContainerDied","Data":"c03a839abc098619785a1fd536b3699ff746c5d945fa6ba4677a3f38106735ee"} Feb 17 16:05:54 crc kubenswrapper[4874]: I0217 16:05:54.068144 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c03a839abc098619785a1fd536b3699ff746c5d945fa6ba4677a3f38106735ee" Feb 17 16:05:54 crc kubenswrapper[4874]: I0217 16:05:54.068204 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 16:05:54 crc kubenswrapper[4874]: I0217 16:05:54.176067 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-pm48m"] Feb 17 16:05:57 crc kubenswrapper[4874]: I0217 16:05:57.725189 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:05:57 crc kubenswrapper[4874]: I0217 16:05:57.725604 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:05:59 crc 
kubenswrapper[4874]: I0217 16:05:59.535623 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:06:06 crc kubenswrapper[4874]: I0217 16:06:06.141695 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-pm48m" event={"ID":"672da34f-1e37-4e2c-b467-b5ee40c4a31b","Type":"ContainerStarted","Data":"f3afef2ec3c6126a317fffe91d9fdb63ea91e17f384e0da6055b86a77d486f8e"} Feb 17 16:06:11 crc kubenswrapper[4874]: E0217 16:06:11.398318 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 17 16:06:11 crc kubenswrapper[4874]: E0217 16:06:11.399398 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z8glj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-jqfd6_openshift-marketplace(19397da4-8b1f-4ec8-969c-2856e64112fc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 16:06:11 crc kubenswrapper[4874]: E0217 16:06:11.400647 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-jqfd6" podUID="19397da4-8b1f-4ec8-969c-2856e64112fc" Feb 17 16:06:11 crc 
kubenswrapper[4874]: E0217 16:06:11.493485 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 17 16:06:11 crc kubenswrapper[4874]: E0217 16:06:11.494298 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jp4z7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-jvjxj_openshift-marketplace(777c2139-1b69-4526-b1a0-537c84c3fc02): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 16:06:11 crc kubenswrapper[4874]: E0217 16:06:11.495587 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-jvjxj" podUID="777c2139-1b69-4526-b1a0-537c84c3fc02" Feb 17 16:06:12 crc kubenswrapper[4874]: I0217 16:06:12.377218 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d7k8j" Feb 17 16:06:13 crc kubenswrapper[4874]: E0217 16:06:13.182805 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-jvjxj" podUID="777c2139-1b69-4526-b1a0-537c84c3fc02" Feb 17 16:06:13 crc kubenswrapper[4874]: E0217 16:06:13.185345 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-jqfd6" podUID="19397da4-8b1f-4ec8-969c-2856e64112fc" Feb 17 16:06:13 crc kubenswrapper[4874]: E0217 16:06:13.278344 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 17 16:06:13 crc kubenswrapper[4874]: E0217 
16:06:13.278523 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wknqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-xtlqz_openshift-marketplace(28be448a-a2cb-4731-85fa-ec01026d5763): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 16:06:13 crc kubenswrapper[4874]: E0217 16:06:13.279712 4874 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-xtlqz" podUID="28be448a-a2cb-4731-85fa-ec01026d5763" Feb 17 16:06:14 crc kubenswrapper[4874]: E0217 16:06:14.336596 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-xtlqz" podUID="28be448a-a2cb-4731-85fa-ec01026d5763" Feb 17 16:06:14 crc kubenswrapper[4874]: E0217 16:06:14.397870 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 17 16:06:14 crc kubenswrapper[4874]: E0217 16:06:14.398030 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7zgdq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-bgnxq_openshift-marketplace(065c00cb-7ec7-428e-a10a-aaf6335d63e1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 16:06:14 crc kubenswrapper[4874]: E0217 16:06:14.399240 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-bgnxq" podUID="065c00cb-7ec7-428e-a10a-aaf6335d63e1" Feb 17 16:06:14 crc 
kubenswrapper[4874]: E0217 16:06:14.420916 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 17 16:06:14 crc kubenswrapper[4874]: E0217 16:06:14.421288 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-znn86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-sth6c_openshift-marketplace(52ea909f-1a30-4a49-9b48-d6a6135a4598): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 16:06:14 crc kubenswrapper[4874]: E0217 16:06:14.422457 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-sth6c" podUID="52ea909f-1a30-4a49-9b48-d6a6135a4598" Feb 17 16:06:14 crc kubenswrapper[4874]: E0217 16:06:14.432700 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 17 16:06:14 crc kubenswrapper[4874]: E0217 16:06:14.432878 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jv4xd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-7h6dq_openshift-marketplace(73d464fc-2d1d-4a29-ae06-5d29503f6545): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 16:06:14 crc kubenswrapper[4874]: E0217 16:06:14.434136 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-7h6dq" podUID="73d464fc-2d1d-4a29-ae06-5d29503f6545" Feb 17 16:06:14 crc 
kubenswrapper[4874]: E0217 16:06:14.465307 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 17 16:06:14 crc kubenswrapper[4874]: E0217 16:06:14.465450 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rgkq2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-v8fn8_openshift-marketplace(cfc01af4-cec4-4d66-b673-ac10e1797059): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 16:06:14 crc kubenswrapper[4874]: E0217 16:06:14.466607 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-v8fn8" podUID="cfc01af4-cec4-4d66-b673-ac10e1797059" Feb 17 16:06:15 crc kubenswrapper[4874]: I0217 16:06:15.197788 4874 generic.go:334] "Generic (PLEG): container finished" podID="52637c6d-7cd3-4761-b70d-4e07e68a6c5d" containerID="72885fee2c84856c40c7d3fba597566c6aff0abdae36ea092e648364e7243850" exitCode=0 Feb 17 16:06:15 crc kubenswrapper[4874]: I0217 16:06:15.198428 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kdr2g" event={"ID":"52637c6d-7cd3-4761-b70d-4e07e68a6c5d","Type":"ContainerDied","Data":"72885fee2c84856c40c7d3fba597566c6aff0abdae36ea092e648364e7243850"} Feb 17 16:06:15 crc kubenswrapper[4874]: I0217 16:06:15.205467 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-pm48m" event={"ID":"672da34f-1e37-4e2c-b467-b5ee40c4a31b","Type":"ContainerStarted","Data":"67aee02f93b3e864bf3c370571352348b441c4906783e7c56311a8ecfd326d91"} Feb 17 16:06:15 crc kubenswrapper[4874]: I0217 16:06:15.205526 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-pm48m" event={"ID":"672da34f-1e37-4e2c-b467-b5ee40c4a31b","Type":"ContainerStarted","Data":"b78e93ac568d4650ba9ec2beadc5be61d263344ef652512b293e67bf3f88b056"} Feb 17 16:06:15 crc kubenswrapper[4874]: E0217 16:06:15.206917 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-7h6dq" podUID="73d464fc-2d1d-4a29-ae06-5d29503f6545" Feb 17 16:06:15 crc kubenswrapper[4874]: E0217 16:06:15.207965 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-v8fn8" podUID="cfc01af4-cec4-4d66-b673-ac10e1797059" Feb 17 16:06:15 crc kubenswrapper[4874]: E0217 16:06:15.208959 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sth6c" podUID="52ea909f-1a30-4a49-9b48-d6a6135a4598" Feb 17 16:06:15 crc kubenswrapper[4874]: E0217 16:06:15.209107 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bgnxq" podUID="065c00cb-7ec7-428e-a10a-aaf6335d63e1" Feb 17 16:06:15 crc kubenswrapper[4874]: I0217 16:06:15.291204 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-pm48m" podStartSLOduration=165.291185467 podStartE2EDuration="2m45.291185467s" podCreationTimestamp="2026-02-17 16:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:06:15.289299151 +0000 UTC m=+185.583687722" watchObservedRunningTime="2026-02-17 16:06:15.291185467 +0000 UTC 
m=+185.585574048" Feb 17 16:06:16 crc kubenswrapper[4874]: I0217 16:06:16.214694 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kdr2g" event={"ID":"52637c6d-7cd3-4761-b70d-4e07e68a6c5d","Type":"ContainerStarted","Data":"b6d43dd1e78a034d756068b939b3582ab72eb7cfa2d37eb0e3597c8674a10c71"} Feb 17 16:06:18 crc kubenswrapper[4874]: I0217 16:06:18.214357 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kdr2g" Feb 17 16:06:18 crc kubenswrapper[4874]: I0217 16:06:18.215126 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kdr2g" Feb 17 16:06:18 crc kubenswrapper[4874]: I0217 16:06:18.611332 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:06:18 crc kubenswrapper[4874]: I0217 16:06:18.633125 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kdr2g" podStartSLOduration=5.819041214 podStartE2EDuration="41.633073132s" podCreationTimestamp="2026-02-17 16:05:37 +0000 UTC" firstStartedPulling="2026-02-17 16:05:39.838802057 +0000 UTC m=+150.133190618" lastFinishedPulling="2026-02-17 16:06:15.652833975 +0000 UTC m=+185.947222536" observedRunningTime="2026-02-17 16:06:16.252904616 +0000 UTC m=+186.547293187" watchObservedRunningTime="2026-02-17 16:06:18.633073132 +0000 UTC m=+188.927461733" Feb 17 16:06:19 crc kubenswrapper[4874]: I0217 16:06:19.502749 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-kdr2g" podUID="52637c6d-7cd3-4761-b70d-4e07e68a6c5d" containerName="registry-server" probeResult="failure" output=< Feb 17 16:06:19 crc kubenswrapper[4874]: timeout: failed to connect service ":50051" within 1s Feb 17 16:06:19 crc kubenswrapper[4874]: > Feb 17 16:06:20 crc 
kubenswrapper[4874]: I0217 16:06:20.643776 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2kf8w"] Feb 17 16:06:22 crc kubenswrapper[4874]: I0217 16:06:22.216147 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 17 16:06:22 crc kubenswrapper[4874]: E0217 16:06:22.216725 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a90813fb-4837-4807-83a2-c59e59532597" containerName="pruner" Feb 17 16:06:22 crc kubenswrapper[4874]: I0217 16:06:22.216738 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="a90813fb-4837-4807-83a2-c59e59532597" containerName="pruner" Feb 17 16:06:22 crc kubenswrapper[4874]: E0217 16:06:22.216768 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbc255b7-7772-434d-897d-c21948fc01c4" containerName="pruner" Feb 17 16:06:22 crc kubenswrapper[4874]: I0217 16:06:22.216773 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbc255b7-7772-434d-897d-c21948fc01c4" containerName="pruner" Feb 17 16:06:22 crc kubenswrapper[4874]: I0217 16:06:22.216897 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="a90813fb-4837-4807-83a2-c59e59532597" containerName="pruner" Feb 17 16:06:22 crc kubenswrapper[4874]: I0217 16:06:22.216907 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbc255b7-7772-434d-897d-c21948fc01c4" containerName="pruner" Feb 17 16:06:22 crc kubenswrapper[4874]: I0217 16:06:22.217374 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 16:06:22 crc kubenswrapper[4874]: I0217 16:06:22.217663 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 17 16:06:22 crc kubenswrapper[4874]: I0217 16:06:22.220496 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 17 16:06:22 crc kubenswrapper[4874]: I0217 16:06:22.220709 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 17 16:06:22 crc kubenswrapper[4874]: I0217 16:06:22.230506 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c072e4a0-8290-4fef-9bb0-2eaccb1d4b4b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c072e4a0-8290-4fef-9bb0-2eaccb1d4b4b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 16:06:22 crc kubenswrapper[4874]: I0217 16:06:22.230594 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c072e4a0-8290-4fef-9bb0-2eaccb1d4b4b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c072e4a0-8290-4fef-9bb0-2eaccb1d4b4b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 16:06:22 crc kubenswrapper[4874]: I0217 16:06:22.332429 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c072e4a0-8290-4fef-9bb0-2eaccb1d4b4b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c072e4a0-8290-4fef-9bb0-2eaccb1d4b4b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 16:06:22 crc kubenswrapper[4874]: I0217 16:06:22.332501 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/c072e4a0-8290-4fef-9bb0-2eaccb1d4b4b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c072e4a0-8290-4fef-9bb0-2eaccb1d4b4b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 16:06:22 crc kubenswrapper[4874]: I0217 16:06:22.332595 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c072e4a0-8290-4fef-9bb0-2eaccb1d4b4b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c072e4a0-8290-4fef-9bb0-2eaccb1d4b4b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 16:06:22 crc kubenswrapper[4874]: I0217 16:06:22.363158 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c072e4a0-8290-4fef-9bb0-2eaccb1d4b4b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c072e4a0-8290-4fef-9bb0-2eaccb1d4b4b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 16:06:22 crc kubenswrapper[4874]: I0217 16:06:22.540169 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 16:06:22 crc kubenswrapper[4874]: I0217 16:06:22.947993 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 17 16:06:23 crc kubenswrapper[4874]: I0217 16:06:23.255243 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c072e4a0-8290-4fef-9bb0-2eaccb1d4b4b","Type":"ContainerStarted","Data":"7a0dfb095d9f7bfabcedbb62cefecd081ef1c3adff2b6b64eb5cc3c5cf6e0ba2"} Feb 17 16:06:24 crc kubenswrapper[4874]: I0217 16:06:24.261500 4874 generic.go:334] "Generic (PLEG): container finished" podID="c072e4a0-8290-4fef-9bb0-2eaccb1d4b4b" containerID="51de4d926eba9ccc866a387bc64435b62b5cf2949478247b41ee8fb2d975ea96" exitCode=0 Feb 17 16:06:24 crc kubenswrapper[4874]: I0217 16:06:24.261541 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c072e4a0-8290-4fef-9bb0-2eaccb1d4b4b","Type":"ContainerDied","Data":"51de4d926eba9ccc866a387bc64435b62b5cf2949478247b41ee8fb2d975ea96"} Feb 17 16:06:25 crc kubenswrapper[4874]: I0217 16:06:25.541461 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 16:06:25 crc kubenswrapper[4874]: I0217 16:06:25.672336 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c072e4a0-8290-4fef-9bb0-2eaccb1d4b4b-kube-api-access\") pod \"c072e4a0-8290-4fef-9bb0-2eaccb1d4b4b\" (UID: \"c072e4a0-8290-4fef-9bb0-2eaccb1d4b4b\") " Feb 17 16:06:25 crc kubenswrapper[4874]: I0217 16:06:25.672466 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c072e4a0-8290-4fef-9bb0-2eaccb1d4b4b-kubelet-dir\") pod \"c072e4a0-8290-4fef-9bb0-2eaccb1d4b4b\" (UID: \"c072e4a0-8290-4fef-9bb0-2eaccb1d4b4b\") " Feb 17 16:06:25 crc kubenswrapper[4874]: I0217 16:06:25.672651 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c072e4a0-8290-4fef-9bb0-2eaccb1d4b4b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c072e4a0-8290-4fef-9bb0-2eaccb1d4b4b" (UID: "c072e4a0-8290-4fef-9bb0-2eaccb1d4b4b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:06:25 crc kubenswrapper[4874]: I0217 16:06:25.682925 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c072e4a0-8290-4fef-9bb0-2eaccb1d4b4b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c072e4a0-8290-4fef-9bb0-2eaccb1d4b4b" (UID: "c072e4a0-8290-4fef-9bb0-2eaccb1d4b4b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:06:25 crc kubenswrapper[4874]: I0217 16:06:25.774420 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c072e4a0-8290-4fef-9bb0-2eaccb1d4b4b-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:25 crc kubenswrapper[4874]: I0217 16:06:25.774477 4874 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c072e4a0-8290-4fef-9bb0-2eaccb1d4b4b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:26 crc kubenswrapper[4874]: I0217 16:06:26.271451 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c072e4a0-8290-4fef-9bb0-2eaccb1d4b4b","Type":"ContainerDied","Data":"7a0dfb095d9f7bfabcedbb62cefecd081ef1c3adff2b6b64eb5cc3c5cf6e0ba2"} Feb 17 16:06:26 crc kubenswrapper[4874]: I0217 16:06:26.271681 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a0dfb095d9f7bfabcedbb62cefecd081ef1c3adff2b6b64eb5cc3c5cf6e0ba2" Feb 17 16:06:26 crc kubenswrapper[4874]: I0217 16:06:26.271543 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 16:06:27 crc kubenswrapper[4874]: I0217 16:06:27.283487 4874 generic.go:334] "Generic (PLEG): container finished" podID="777c2139-1b69-4526-b1a0-537c84c3fc02" containerID="73b3bc07e84e8165095743aa58f6369aac22a58d6c7342e0a45a1029338b23e7" exitCode=0 Feb 17 16:06:27 crc kubenswrapper[4874]: I0217 16:06:27.283540 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jvjxj" event={"ID":"777c2139-1b69-4526-b1a0-537c84c3fc02","Type":"ContainerDied","Data":"73b3bc07e84e8165095743aa58f6369aac22a58d6c7342e0a45a1029338b23e7"} Feb 17 16:06:27 crc kubenswrapper[4874]: I0217 16:06:27.724418 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:06:27 crc kubenswrapper[4874]: I0217 16:06:27.724691 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:06:28 crc kubenswrapper[4874]: I0217 16:06:28.265383 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kdr2g" Feb 17 16:06:28 crc kubenswrapper[4874]: I0217 16:06:28.292012 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jvjxj" event={"ID":"777c2139-1b69-4526-b1a0-537c84c3fc02","Type":"ContainerStarted","Data":"1500213b2be717cdba28e9dd9353d716924370793c4c0b31f917c62ca5ab00ea"} Feb 17 16:06:28 crc kubenswrapper[4874]: I0217 16:06:28.297892 4874 
generic.go:334] "Generic (PLEG): container finished" podID="19397da4-8b1f-4ec8-969c-2856e64112fc" containerID="5d50358589fe7734aa15e177b695a772b38a152699e2e276924f312680c9ce2f" exitCode=0 Feb 17 16:06:28 crc kubenswrapper[4874]: I0217 16:06:28.297974 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jqfd6" event={"ID":"19397da4-8b1f-4ec8-969c-2856e64112fc","Type":"ContainerDied","Data":"5d50358589fe7734aa15e177b695a772b38a152699e2e276924f312680c9ce2f"} Feb 17 16:06:28 crc kubenswrapper[4874]: I0217 16:06:28.300269 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xtlqz" event={"ID":"28be448a-a2cb-4731-85fa-ec01026d5763","Type":"ContainerStarted","Data":"9eca76966b8ffd2698c60064256c8742a2a3349e89271df95a4420d27c617862"} Feb 17 16:06:28 crc kubenswrapper[4874]: I0217 16:06:28.301491 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kdr2g" Feb 17 16:06:28 crc kubenswrapper[4874]: I0217 16:06:28.303028 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8fn8" event={"ID":"cfc01af4-cec4-4d66-b673-ac10e1797059","Type":"ContainerStarted","Data":"deead2b5fc206fb55b2908a4e55c1d803f9e3da51e804bb155e261a53188f981"} Feb 17 16:06:28 crc kubenswrapper[4874]: I0217 16:06:28.313477 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jvjxj" podStartSLOduration=2.441867993 podStartE2EDuration="50.313460286s" podCreationTimestamp="2026-02-17 16:05:38 +0000 UTC" firstStartedPulling="2026-02-17 16:05:39.82820535 +0000 UTC m=+150.122593911" lastFinishedPulling="2026-02-17 16:06:27.699797643 +0000 UTC m=+197.994186204" observedRunningTime="2026-02-17 16:06:28.309837118 +0000 UTC m=+198.604225689" watchObservedRunningTime="2026-02-17 16:06:28.313460286 +0000 UTC m=+198.607848857" Feb 17 16:06:28 crc 
kubenswrapper[4874]: I0217 16:06:28.694127 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jvjxj" Feb 17 16:06:28 crc kubenswrapper[4874]: I0217 16:06:28.694370 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jvjxj" Feb 17 16:06:29 crc kubenswrapper[4874]: I0217 16:06:29.311862 4874 generic.go:334] "Generic (PLEG): container finished" podID="cfc01af4-cec4-4d66-b673-ac10e1797059" containerID="deead2b5fc206fb55b2908a4e55c1d803f9e3da51e804bb155e261a53188f981" exitCode=0 Feb 17 16:06:29 crc kubenswrapper[4874]: I0217 16:06:29.312040 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8fn8" event={"ID":"cfc01af4-cec4-4d66-b673-ac10e1797059","Type":"ContainerDied","Data":"deead2b5fc206fb55b2908a4e55c1d803f9e3da51e804bb155e261a53188f981"} Feb 17 16:06:29 crc kubenswrapper[4874]: I0217 16:06:29.314245 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jqfd6" event={"ID":"19397da4-8b1f-4ec8-969c-2856e64112fc","Type":"ContainerStarted","Data":"c89756710e2ee5671bec2aa80f0b2ec84ae91844345cb4c7b752c53fb9ad0a8e"} Feb 17 16:06:29 crc kubenswrapper[4874]: I0217 16:06:29.324305 4874 generic.go:334] "Generic (PLEG): container finished" podID="28be448a-a2cb-4731-85fa-ec01026d5763" containerID="9eca76966b8ffd2698c60064256c8742a2a3349e89271df95a4420d27c617862" exitCode=0 Feb 17 16:06:29 crc kubenswrapper[4874]: I0217 16:06:29.324330 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xtlqz" event={"ID":"28be448a-a2cb-4731-85fa-ec01026d5763","Type":"ContainerDied","Data":"9eca76966b8ffd2698c60064256c8742a2a3349e89271df95a4420d27c617862"} Feb 17 16:06:29 crc kubenswrapper[4874]: I0217 16:06:29.385931 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-jqfd6" podStartSLOduration=1.470806707 podStartE2EDuration="48.385914348s" podCreationTimestamp="2026-02-17 16:05:41 +0000 UTC" firstStartedPulling="2026-02-17 16:05:41.937309449 +0000 UTC m=+152.231698010" lastFinishedPulling="2026-02-17 16:06:28.85241709 +0000 UTC m=+199.146805651" observedRunningTime="2026-02-17 16:06:29.382577969 +0000 UTC m=+199.676966530" watchObservedRunningTime="2026-02-17 16:06:29.385914348 +0000 UTC m=+199.680302929" Feb 17 16:06:29 crc kubenswrapper[4874]: I0217 16:06:29.598769 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 17 16:06:29 crc kubenswrapper[4874]: E0217 16:06:29.598981 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c072e4a0-8290-4fef-9bb0-2eaccb1d4b4b" containerName="pruner" Feb 17 16:06:29 crc kubenswrapper[4874]: I0217 16:06:29.598992 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="c072e4a0-8290-4fef-9bb0-2eaccb1d4b4b" containerName="pruner" Feb 17 16:06:29 crc kubenswrapper[4874]: I0217 16:06:29.599123 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="c072e4a0-8290-4fef-9bb0-2eaccb1d4b4b" containerName="pruner" Feb 17 16:06:29 crc kubenswrapper[4874]: I0217 16:06:29.599468 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 16:06:29 crc kubenswrapper[4874]: I0217 16:06:29.602398 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 17 16:06:29 crc kubenswrapper[4874]: I0217 16:06:29.603045 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 17 16:06:29 crc kubenswrapper[4874]: I0217 16:06:29.609199 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 17 16:06:29 crc kubenswrapper[4874]: I0217 16:06:29.628060 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3c11461e-921f-46b7-ba51-5299829f22f1-var-lock\") pod \"installer-9-crc\" (UID: \"3c11461e-921f-46b7-ba51-5299829f22f1\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 16:06:29 crc kubenswrapper[4874]: I0217 16:06:29.628133 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c11461e-921f-46b7-ba51-5299829f22f1-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3c11461e-921f-46b7-ba51-5299829f22f1\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 16:06:29 crc kubenswrapper[4874]: I0217 16:06:29.628193 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c11461e-921f-46b7-ba51-5299829f22f1-kube-api-access\") pod \"installer-9-crc\" (UID: \"3c11461e-921f-46b7-ba51-5299829f22f1\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 16:06:29 crc kubenswrapper[4874]: I0217 16:06:29.729755 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/3c11461e-921f-46b7-ba51-5299829f22f1-var-lock\") pod \"installer-9-crc\" (UID: \"3c11461e-921f-46b7-ba51-5299829f22f1\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 16:06:29 crc kubenswrapper[4874]: I0217 16:06:29.729792 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c11461e-921f-46b7-ba51-5299829f22f1-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3c11461e-921f-46b7-ba51-5299829f22f1\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 16:06:29 crc kubenswrapper[4874]: I0217 16:06:29.729857 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3c11461e-921f-46b7-ba51-5299829f22f1-var-lock\") pod \"installer-9-crc\" (UID: \"3c11461e-921f-46b7-ba51-5299829f22f1\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 16:06:29 crc kubenswrapper[4874]: I0217 16:06:29.729882 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c11461e-921f-46b7-ba51-5299829f22f1-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3c11461e-921f-46b7-ba51-5299829f22f1\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 16:06:29 crc kubenswrapper[4874]: I0217 16:06:29.729868 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c11461e-921f-46b7-ba51-5299829f22f1-kube-api-access\") pod \"installer-9-crc\" (UID: \"3c11461e-921f-46b7-ba51-5299829f22f1\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 16:06:29 crc kubenswrapper[4874]: I0217 16:06:29.747674 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c11461e-921f-46b7-ba51-5299829f22f1-kube-api-access\") pod \"installer-9-crc\" (UID: \"3c11461e-921f-46b7-ba51-5299829f22f1\") " 
pod="openshift-kube-apiserver/installer-9-crc" Feb 17 16:06:29 crc kubenswrapper[4874]: I0217 16:06:29.752665 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-jvjxj" podUID="777c2139-1b69-4526-b1a0-537c84c3fc02" containerName="registry-server" probeResult="failure" output=< Feb 17 16:06:29 crc kubenswrapper[4874]: timeout: failed to connect service ":50051" within 1s Feb 17 16:06:29 crc kubenswrapper[4874]: > Feb 17 16:06:29 crc kubenswrapper[4874]: I0217 16:06:29.911448 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 16:06:30 crc kubenswrapper[4874]: I0217 16:06:30.138501 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 17 16:06:30 crc kubenswrapper[4874]: I0217 16:06:30.331948 4874 generic.go:334] "Generic (PLEG): container finished" podID="52ea909f-1a30-4a49-9b48-d6a6135a4598" containerID="f1b68850562dcf5c39cfc4552c58731c104a96f8078d2fd86c356b0c0e23d584" exitCode=0 Feb 17 16:06:30 crc kubenswrapper[4874]: I0217 16:06:30.332208 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sth6c" event={"ID":"52ea909f-1a30-4a49-9b48-d6a6135a4598","Type":"ContainerDied","Data":"f1b68850562dcf5c39cfc4552c58731c104a96f8078d2fd86c356b0c0e23d584"} Feb 17 16:06:30 crc kubenswrapper[4874]: I0217 16:06:30.337172 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xtlqz" event={"ID":"28be448a-a2cb-4731-85fa-ec01026d5763","Type":"ContainerStarted","Data":"4d5af2bfd9bee7f08d882406c041affaaaf9ff1fdac2beceab3f67614c4ce19a"} Feb 17 16:06:30 crc kubenswrapper[4874]: I0217 16:06:30.339349 4874 generic.go:334] "Generic (PLEG): container finished" podID="065c00cb-7ec7-428e-a10a-aaf6335d63e1" containerID="25d8dc88cecf465a79af054bd0ba19a930952323fba1746779a0cec28fa1f783" exitCode=0 Feb 17 16:06:30 crc 
kubenswrapper[4874]: I0217 16:06:30.339400 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bgnxq" event={"ID":"065c00cb-7ec7-428e-a10a-aaf6335d63e1","Type":"ContainerDied","Data":"25d8dc88cecf465a79af054bd0ba19a930952323fba1746779a0cec28fa1f783"} Feb 17 16:06:30 crc kubenswrapper[4874]: I0217 16:06:30.344023 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3c11461e-921f-46b7-ba51-5299829f22f1","Type":"ContainerStarted","Data":"becc8ad735f165959f7ea9a191d9a0b0afcfebca46bd0523791b99e9617b5623"} Feb 17 16:06:30 crc kubenswrapper[4874]: I0217 16:06:30.356610 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8fn8" event={"ID":"cfc01af4-cec4-4d66-b673-ac10e1797059","Type":"ContainerStarted","Data":"ead35cbcc3b00d503e8db4945db4ec37c2ced571ce78f6dcce00c853db3f1e19"} Feb 17 16:06:30 crc kubenswrapper[4874]: I0217 16:06:30.363365 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7h6dq" event={"ID":"73d464fc-2d1d-4a29-ae06-5d29503f6545","Type":"ContainerStarted","Data":"7d11d6f76a838a34f10057d506b51cca40268a11fbe3eeb45e5dd0354231e7c4"} Feb 17 16:06:30 crc kubenswrapper[4874]: I0217 16:06:30.376221 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v8fn8" podStartSLOduration=2.582282856 podStartE2EDuration="50.376205503s" podCreationTimestamp="2026-02-17 16:05:40 +0000 UTC" firstStartedPulling="2026-02-17 16:05:41.916812401 +0000 UTC m=+152.211200962" lastFinishedPulling="2026-02-17 16:06:29.710735048 +0000 UTC m=+200.005123609" observedRunningTime="2026-02-17 16:06:30.373452409 +0000 UTC m=+200.667840970" watchObservedRunningTime="2026-02-17 16:06:30.376205503 +0000 UTC m=+200.670594064" Feb 17 16:06:30 crc kubenswrapper[4874]: I0217 16:06:30.389528 4874 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v8fn8" Feb 17 16:06:30 crc kubenswrapper[4874]: I0217 16:06:30.392269 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v8fn8" Feb 17 16:06:30 crc kubenswrapper[4874]: I0217 16:06:30.421719 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xtlqz" podStartSLOduration=2.441618778 podStartE2EDuration="52.421699685s" podCreationTimestamp="2026-02-17 16:05:38 +0000 UTC" firstStartedPulling="2026-02-17 16:05:39.823429964 +0000 UTC m=+150.117818525" lastFinishedPulling="2026-02-17 16:06:29.803510871 +0000 UTC m=+200.097899432" observedRunningTime="2026-02-17 16:06:30.419420234 +0000 UTC m=+200.713808795" watchObservedRunningTime="2026-02-17 16:06:30.421699685 +0000 UTC m=+200.716088246" Feb 17 16:06:31 crc kubenswrapper[4874]: I0217 16:06:31.369055 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sth6c" event={"ID":"52ea909f-1a30-4a49-9b48-d6a6135a4598","Type":"ContainerStarted","Data":"d9c56fefbaba94252283ddd88e0aa3a5b199a7e7452670b2a228b69979696c4c"} Feb 17 16:06:31 crc kubenswrapper[4874]: I0217 16:06:31.370673 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bgnxq" event={"ID":"065c00cb-7ec7-428e-a10a-aaf6335d63e1","Type":"ContainerStarted","Data":"6ff55795353a19c28fb5bbce7f8422b7aeac9d45f1c756a8bd68cc5522a26200"} Feb 17 16:06:31 crc kubenswrapper[4874]: I0217 16:06:31.373454 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3c11461e-921f-46b7-ba51-5299829f22f1","Type":"ContainerStarted","Data":"049d05057fcf817d859b3aa4a0081c86eb24a82a9fec7de7c5ce24e7ab464798"} Feb 17 16:06:31 crc kubenswrapper[4874]: I0217 16:06:31.374845 4874 generic.go:334] "Generic (PLEG): container finished" 
podID="73d464fc-2d1d-4a29-ae06-5d29503f6545" containerID="7d11d6f76a838a34f10057d506b51cca40268a11fbe3eeb45e5dd0354231e7c4" exitCode=0 Feb 17 16:06:31 crc kubenswrapper[4874]: I0217 16:06:31.374972 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7h6dq" event={"ID":"73d464fc-2d1d-4a29-ae06-5d29503f6545","Type":"ContainerDied","Data":"7d11d6f76a838a34f10057d506b51cca40268a11fbe3eeb45e5dd0354231e7c4"} Feb 17 16:06:31 crc kubenswrapper[4874]: I0217 16:06:31.384876 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sth6c" podStartSLOduration=2.483171989 podStartE2EDuration="51.38486594s" podCreationTimestamp="2026-02-17 16:05:40 +0000 UTC" firstStartedPulling="2026-02-17 16:05:41.925317147 +0000 UTC m=+152.219705708" lastFinishedPulling="2026-02-17 16:06:30.827011098 +0000 UTC m=+201.121399659" observedRunningTime="2026-02-17 16:06:31.384838519 +0000 UTC m=+201.679227080" watchObservedRunningTime="2026-02-17 16:06:31.38486594 +0000 UTC m=+201.679254501" Feb 17 16:06:31 crc kubenswrapper[4874]: I0217 16:06:31.420419 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bgnxq" podStartSLOduration=2.43670247 podStartE2EDuration="50.420404165s" podCreationTimestamp="2026-02-17 16:05:41 +0000 UTC" firstStartedPulling="2026-02-17 16:05:42.971917909 +0000 UTC m=+153.266306470" lastFinishedPulling="2026-02-17 16:06:30.955619594 +0000 UTC m=+201.250008165" observedRunningTime="2026-02-17 16:06:31.418903735 +0000 UTC m=+201.713292326" watchObservedRunningTime="2026-02-17 16:06:31.420404165 +0000 UTC m=+201.714792726" Feb 17 16:06:31 crc kubenswrapper[4874]: I0217 16:06:31.433150 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-v8fn8" podUID="cfc01af4-cec4-4d66-b673-ac10e1797059" containerName="registry-server" probeResult="failure" output=< Feb 17 
16:06:31 crc kubenswrapper[4874]: timeout: failed to connect service ":50051" within 1s Feb 17 16:06:31 crc kubenswrapper[4874]: > Feb 17 16:06:31 crc kubenswrapper[4874]: I0217 16:06:31.435931 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.435921592 podStartE2EDuration="2.435921592s" podCreationTimestamp="2026-02-17 16:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:06:31.433092236 +0000 UTC m=+201.727480797" watchObservedRunningTime="2026-02-17 16:06:31.435921592 +0000 UTC m=+201.730310153" Feb 17 16:06:31 crc kubenswrapper[4874]: I0217 16:06:31.450368 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jqfd6" Feb 17 16:06:31 crc kubenswrapper[4874]: I0217 16:06:31.450412 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jqfd6" Feb 17 16:06:31 crc kubenswrapper[4874]: I0217 16:06:31.822954 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bgnxq" Feb 17 16:06:31 crc kubenswrapper[4874]: I0217 16:06:31.823002 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bgnxq" Feb 17 16:06:32 crc kubenswrapper[4874]: I0217 16:06:32.381479 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7h6dq" event={"ID":"73d464fc-2d1d-4a29-ae06-5d29503f6545","Type":"ContainerStarted","Data":"530a6db0ed498ab5fa682bb404abf6e51c04f3156c0c6f39ee98f762453e6069"} Feb 17 16:06:32 crc kubenswrapper[4874]: I0217 16:06:32.404349 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7h6dq" podStartSLOduration=2.422569943 
podStartE2EDuration="54.404333898s" podCreationTimestamp="2026-02-17 16:05:38 +0000 UTC" firstStartedPulling="2026-02-17 16:05:39.812640921 +0000 UTC m=+150.107029482" lastFinishedPulling="2026-02-17 16:06:31.794404866 +0000 UTC m=+202.088793437" observedRunningTime="2026-02-17 16:06:32.402493739 +0000 UTC m=+202.696882320" watchObservedRunningTime="2026-02-17 16:06:32.404333898 +0000 UTC m=+202.698722469" Feb 17 16:06:32 crc kubenswrapper[4874]: I0217 16:06:32.490406 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jqfd6" podUID="19397da4-8b1f-4ec8-969c-2856e64112fc" containerName="registry-server" probeResult="failure" output=< Feb 17 16:06:32 crc kubenswrapper[4874]: timeout: failed to connect service ":50051" within 1s Feb 17 16:06:32 crc kubenswrapper[4874]: > Feb 17 16:06:32 crc kubenswrapper[4874]: I0217 16:06:32.867482 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bgnxq" podUID="065c00cb-7ec7-428e-a10a-aaf6335d63e1" containerName="registry-server" probeResult="failure" output=< Feb 17 16:06:32 crc kubenswrapper[4874]: timeout: failed to connect service ":50051" within 1s Feb 17 16:06:32 crc kubenswrapper[4874]: > Feb 17 16:06:38 crc kubenswrapper[4874]: I0217 16:06:38.398500 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xtlqz" Feb 17 16:06:38 crc kubenswrapper[4874]: I0217 16:06:38.399466 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xtlqz" Feb 17 16:06:38 crc kubenswrapper[4874]: I0217 16:06:38.474935 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xtlqz" Feb 17 16:06:38 crc kubenswrapper[4874]: I0217 16:06:38.575716 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-xtlqz" Feb 17 16:06:38 crc kubenswrapper[4874]: I0217 16:06:38.748715 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jvjxj" Feb 17 16:06:38 crc kubenswrapper[4874]: I0217 16:06:38.798045 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jvjxj" Feb 17 16:06:38 crc kubenswrapper[4874]: I0217 16:06:38.906154 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7h6dq" Feb 17 16:06:38 crc kubenswrapper[4874]: I0217 16:06:38.906312 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7h6dq" Feb 17 16:06:38 crc kubenswrapper[4874]: I0217 16:06:38.976554 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7h6dq" Feb 17 16:06:39 crc kubenswrapper[4874]: I0217 16:06:39.520231 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7h6dq" Feb 17 16:06:40 crc kubenswrapper[4874]: I0217 16:06:40.470707 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v8fn8" Feb 17 16:06:40 crc kubenswrapper[4874]: I0217 16:06:40.494824 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7h6dq"] Feb 17 16:06:40 crc kubenswrapper[4874]: I0217 16:06:40.546050 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v8fn8" Feb 17 16:06:40 crc kubenswrapper[4874]: I0217 16:06:40.831777 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sth6c" Feb 17 16:06:40 crc kubenswrapper[4874]: I0217 
16:06:40.831859 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sth6c" Feb 17 16:06:40 crc kubenswrapper[4874]: I0217 16:06:40.896526 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sth6c" Feb 17 16:06:41 crc kubenswrapper[4874]: I0217 16:06:41.093146 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jvjxj"] Feb 17 16:06:41 crc kubenswrapper[4874]: I0217 16:06:41.093460 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jvjxj" podUID="777c2139-1b69-4526-b1a0-537c84c3fc02" containerName="registry-server" containerID="cri-o://1500213b2be717cdba28e9dd9353d716924370793c4c0b31f917c62ca5ab00ea" gracePeriod=2 Feb 17 16:06:41 crc kubenswrapper[4874]: I0217 16:06:41.463296 4874 generic.go:334] "Generic (PLEG): container finished" podID="777c2139-1b69-4526-b1a0-537c84c3fc02" containerID="1500213b2be717cdba28e9dd9353d716924370793c4c0b31f917c62ca5ab00ea" exitCode=0 Feb 17 16:06:41 crc kubenswrapper[4874]: I0217 16:06:41.463363 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jvjxj" event={"ID":"777c2139-1b69-4526-b1a0-537c84c3fc02","Type":"ContainerDied","Data":"1500213b2be717cdba28e9dd9353d716924370793c4c0b31f917c62ca5ab00ea"} Feb 17 16:06:41 crc kubenswrapper[4874]: I0217 16:06:41.464162 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7h6dq" podUID="73d464fc-2d1d-4a29-ae06-5d29503f6545" containerName="registry-server" containerID="cri-o://530a6db0ed498ab5fa682bb404abf6e51c04f3156c0c6f39ee98f762453e6069" gracePeriod=2 Feb 17 16:06:41 crc kubenswrapper[4874]: I0217 16:06:41.508961 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-sth6c" Feb 17 16:06:41 crc kubenswrapper[4874]: I0217 16:06:41.509313 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jqfd6" Feb 17 16:06:41 crc kubenswrapper[4874]: I0217 16:06:41.549009 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jqfd6" Feb 17 16:06:41 crc kubenswrapper[4874]: I0217 16:06:41.871971 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bgnxq" Feb 17 16:06:41 crc kubenswrapper[4874]: I0217 16:06:41.905763 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7h6dq" Feb 17 16:06:41 crc kubenswrapper[4874]: I0217 16:06:41.941139 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bgnxq" Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.044567 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73d464fc-2d1d-4a29-ae06-5d29503f6545-utilities\") pod \"73d464fc-2d1d-4a29-ae06-5d29503f6545\" (UID: \"73d464fc-2d1d-4a29-ae06-5d29503f6545\") " Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.044721 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73d464fc-2d1d-4a29-ae06-5d29503f6545-catalog-content\") pod \"73d464fc-2d1d-4a29-ae06-5d29503f6545\" (UID: \"73d464fc-2d1d-4a29-ae06-5d29503f6545\") " Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.044807 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jv4xd\" (UniqueName: \"kubernetes.io/projected/73d464fc-2d1d-4a29-ae06-5d29503f6545-kube-api-access-jv4xd\") pod 
\"73d464fc-2d1d-4a29-ae06-5d29503f6545\" (UID: \"73d464fc-2d1d-4a29-ae06-5d29503f6545\") " Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.045593 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73d464fc-2d1d-4a29-ae06-5d29503f6545-utilities" (OuterVolumeSpecName: "utilities") pod "73d464fc-2d1d-4a29-ae06-5d29503f6545" (UID: "73d464fc-2d1d-4a29-ae06-5d29503f6545"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.049419 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73d464fc-2d1d-4a29-ae06-5d29503f6545-kube-api-access-jv4xd" (OuterVolumeSpecName: "kube-api-access-jv4xd") pod "73d464fc-2d1d-4a29-ae06-5d29503f6545" (UID: "73d464fc-2d1d-4a29-ae06-5d29503f6545"). InnerVolumeSpecName "kube-api-access-jv4xd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.059615 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jvjxj" Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.145677 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/777c2139-1b69-4526-b1a0-537c84c3fc02-catalog-content\") pod \"777c2139-1b69-4526-b1a0-537c84c3fc02\" (UID: \"777c2139-1b69-4526-b1a0-537c84c3fc02\") " Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.145813 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jp4z7\" (UniqueName: \"kubernetes.io/projected/777c2139-1b69-4526-b1a0-537c84c3fc02-kube-api-access-jp4z7\") pod \"777c2139-1b69-4526-b1a0-537c84c3fc02\" (UID: \"777c2139-1b69-4526-b1a0-537c84c3fc02\") " Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.145911 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/777c2139-1b69-4526-b1a0-537c84c3fc02-utilities\") pod \"777c2139-1b69-4526-b1a0-537c84c3fc02\" (UID: \"777c2139-1b69-4526-b1a0-537c84c3fc02\") " Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.146320 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jv4xd\" (UniqueName: \"kubernetes.io/projected/73d464fc-2d1d-4a29-ae06-5d29503f6545-kube-api-access-jv4xd\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.146346 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73d464fc-2d1d-4a29-ae06-5d29503f6545-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.147099 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/777c2139-1b69-4526-b1a0-537c84c3fc02-utilities" (OuterVolumeSpecName: "utilities") pod "777c2139-1b69-4526-b1a0-537c84c3fc02" (UID: 
"777c2139-1b69-4526-b1a0-537c84c3fc02"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.149117 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/777c2139-1b69-4526-b1a0-537c84c3fc02-kube-api-access-jp4z7" (OuterVolumeSpecName: "kube-api-access-jp4z7") pod "777c2139-1b69-4526-b1a0-537c84c3fc02" (UID: "777c2139-1b69-4526-b1a0-537c84c3fc02"). InnerVolumeSpecName "kube-api-access-jp4z7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.193003 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/777c2139-1b69-4526-b1a0-537c84c3fc02-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "777c2139-1b69-4526-b1a0-537c84c3fc02" (UID: "777c2139-1b69-4526-b1a0-537c84c3fc02"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.247300 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jp4z7\" (UniqueName: \"kubernetes.io/projected/777c2139-1b69-4526-b1a0-537c84c3fc02-kube-api-access-jp4z7\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.247350 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/777c2139-1b69-4526-b1a0-537c84c3fc02-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.247370 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/777c2139-1b69-4526-b1a0-537c84c3fc02-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.386752 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/73d464fc-2d1d-4a29-ae06-5d29503f6545-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "73d464fc-2d1d-4a29-ae06-5d29503f6545" (UID: "73d464fc-2d1d-4a29-ae06-5d29503f6545"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.450862 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73d464fc-2d1d-4a29-ae06-5d29503f6545-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.478675 4874 generic.go:334] "Generic (PLEG): container finished" podID="73d464fc-2d1d-4a29-ae06-5d29503f6545" containerID="530a6db0ed498ab5fa682bb404abf6e51c04f3156c0c6f39ee98f762453e6069" exitCode=0 Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.478727 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7h6dq" Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.478809 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7h6dq" event={"ID":"73d464fc-2d1d-4a29-ae06-5d29503f6545","Type":"ContainerDied","Data":"530a6db0ed498ab5fa682bb404abf6e51c04f3156c0c6f39ee98f762453e6069"} Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.478894 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7h6dq" event={"ID":"73d464fc-2d1d-4a29-ae06-5d29503f6545","Type":"ContainerDied","Data":"988d02b1a42063f8924ca64cc1a84e55eae3b3301f6a29d4a5611521cc37ea09"} Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.478938 4874 scope.go:117] "RemoveContainer" containerID="530a6db0ed498ab5fa682bb404abf6e51c04f3156c0c6f39ee98f762453e6069" Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.483371 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-jvjxj" event={"ID":"777c2139-1b69-4526-b1a0-537c84c3fc02","Type":"ContainerDied","Data":"7136024a37ab3f958f3e406d18dc728b073d29887f5822493d381c5673e2efa0"} Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.484115 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jvjxj" Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.508260 4874 scope.go:117] "RemoveContainer" containerID="7d11d6f76a838a34f10057d506b51cca40268a11fbe3eeb45e5dd0354231e7c4" Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.540847 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7h6dq"] Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.543483 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7h6dq"] Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.559293 4874 scope.go:117] "RemoveContainer" containerID="0c46307ba7cf9aefd9d65492a91d1e366c9f59e14d2661159a9b06d419ade38e" Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.575111 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jvjxj"] Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.578024 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jvjxj"] Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.582287 4874 scope.go:117] "RemoveContainer" containerID="530a6db0ed498ab5fa682bb404abf6e51c04f3156c0c6f39ee98f762453e6069" Feb 17 16:06:42 crc kubenswrapper[4874]: E0217 16:06:42.582784 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"530a6db0ed498ab5fa682bb404abf6e51c04f3156c0c6f39ee98f762453e6069\": container with ID starting with 530a6db0ed498ab5fa682bb404abf6e51c04f3156c0c6f39ee98f762453e6069 not found: 
ID does not exist" containerID="530a6db0ed498ab5fa682bb404abf6e51c04f3156c0c6f39ee98f762453e6069" Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.582811 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"530a6db0ed498ab5fa682bb404abf6e51c04f3156c0c6f39ee98f762453e6069"} err="failed to get container status \"530a6db0ed498ab5fa682bb404abf6e51c04f3156c0c6f39ee98f762453e6069\": rpc error: code = NotFound desc = could not find container \"530a6db0ed498ab5fa682bb404abf6e51c04f3156c0c6f39ee98f762453e6069\": container with ID starting with 530a6db0ed498ab5fa682bb404abf6e51c04f3156c0c6f39ee98f762453e6069 not found: ID does not exist" Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.582846 4874 scope.go:117] "RemoveContainer" containerID="7d11d6f76a838a34f10057d506b51cca40268a11fbe3eeb45e5dd0354231e7c4" Feb 17 16:06:42 crc kubenswrapper[4874]: E0217 16:06:42.583151 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d11d6f76a838a34f10057d506b51cca40268a11fbe3eeb45e5dd0354231e7c4\": container with ID starting with 7d11d6f76a838a34f10057d506b51cca40268a11fbe3eeb45e5dd0354231e7c4 not found: ID does not exist" containerID="7d11d6f76a838a34f10057d506b51cca40268a11fbe3eeb45e5dd0354231e7c4" Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.583175 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d11d6f76a838a34f10057d506b51cca40268a11fbe3eeb45e5dd0354231e7c4"} err="failed to get container status \"7d11d6f76a838a34f10057d506b51cca40268a11fbe3eeb45e5dd0354231e7c4\": rpc error: code = NotFound desc = could not find container \"7d11d6f76a838a34f10057d506b51cca40268a11fbe3eeb45e5dd0354231e7c4\": container with ID starting with 7d11d6f76a838a34f10057d506b51cca40268a11fbe3eeb45e5dd0354231e7c4 not found: ID does not exist" Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.583187 4874 
scope.go:117] "RemoveContainer" containerID="0c46307ba7cf9aefd9d65492a91d1e366c9f59e14d2661159a9b06d419ade38e" Feb 17 16:06:42 crc kubenswrapper[4874]: E0217 16:06:42.583415 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c46307ba7cf9aefd9d65492a91d1e366c9f59e14d2661159a9b06d419ade38e\": container with ID starting with 0c46307ba7cf9aefd9d65492a91d1e366c9f59e14d2661159a9b06d419ade38e not found: ID does not exist" containerID="0c46307ba7cf9aefd9d65492a91d1e366c9f59e14d2661159a9b06d419ade38e" Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.583431 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c46307ba7cf9aefd9d65492a91d1e366c9f59e14d2661159a9b06d419ade38e"} err="failed to get container status \"0c46307ba7cf9aefd9d65492a91d1e366c9f59e14d2661159a9b06d419ade38e\": rpc error: code = NotFound desc = could not find container \"0c46307ba7cf9aefd9d65492a91d1e366c9f59e14d2661159a9b06d419ade38e\": container with ID starting with 0c46307ba7cf9aefd9d65492a91d1e366c9f59e14d2661159a9b06d419ade38e not found: ID does not exist" Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.583442 4874 scope.go:117] "RemoveContainer" containerID="1500213b2be717cdba28e9dd9353d716924370793c4c0b31f917c62ca5ab00ea" Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.599236 4874 scope.go:117] "RemoveContainer" containerID="73b3bc07e84e8165095743aa58f6369aac22a58d6c7342e0a45a1029338b23e7" Feb 17 16:06:42 crc kubenswrapper[4874]: I0217 16:06:42.615548 4874 scope.go:117] "RemoveContainer" containerID="95eddb84372b96491cc79fa7ed4a4c5b76cb2d583e0f0f833d75af2b42731959" Feb 17 16:06:43 crc kubenswrapper[4874]: I0217 16:06:43.492274 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sth6c"] Feb 17 16:06:43 crc kubenswrapper[4874]: I0217 16:06:43.497532 4874 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="openshift-marketplace/redhat-marketplace-sth6c" podUID="52ea909f-1a30-4a49-9b48-d6a6135a4598" containerName="registry-server" containerID="cri-o://d9c56fefbaba94252283ddd88e0aa3a5b199a7e7452670b2a228b69979696c4c" gracePeriod=2 Feb 17 16:06:43 crc kubenswrapper[4874]: I0217 16:06:43.862086 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sth6c" Feb 17 16:06:43 crc kubenswrapper[4874]: I0217 16:06:43.973983 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52ea909f-1a30-4a49-9b48-d6a6135a4598-utilities\") pod \"52ea909f-1a30-4a49-9b48-d6a6135a4598\" (UID: \"52ea909f-1a30-4a49-9b48-d6a6135a4598\") " Feb 17 16:06:43 crc kubenswrapper[4874]: I0217 16:06:43.974120 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znn86\" (UniqueName: \"kubernetes.io/projected/52ea909f-1a30-4a49-9b48-d6a6135a4598-kube-api-access-znn86\") pod \"52ea909f-1a30-4a49-9b48-d6a6135a4598\" (UID: \"52ea909f-1a30-4a49-9b48-d6a6135a4598\") " Feb 17 16:06:43 crc kubenswrapper[4874]: I0217 16:06:43.974197 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52ea909f-1a30-4a49-9b48-d6a6135a4598-catalog-content\") pod \"52ea909f-1a30-4a49-9b48-d6a6135a4598\" (UID: \"52ea909f-1a30-4a49-9b48-d6a6135a4598\") " Feb 17 16:06:43 crc kubenswrapper[4874]: I0217 16:06:43.974762 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52ea909f-1a30-4a49-9b48-d6a6135a4598-utilities" (OuterVolumeSpecName: "utilities") pod "52ea909f-1a30-4a49-9b48-d6a6135a4598" (UID: "52ea909f-1a30-4a49-9b48-d6a6135a4598"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:06:43 crc kubenswrapper[4874]: I0217 16:06:43.983069 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52ea909f-1a30-4a49-9b48-d6a6135a4598-kube-api-access-znn86" (OuterVolumeSpecName: "kube-api-access-znn86") pod "52ea909f-1a30-4a49-9b48-d6a6135a4598" (UID: "52ea909f-1a30-4a49-9b48-d6a6135a4598"). InnerVolumeSpecName "kube-api-access-znn86". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:06:44 crc kubenswrapper[4874]: I0217 16:06:44.024754 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52ea909f-1a30-4a49-9b48-d6a6135a4598-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "52ea909f-1a30-4a49-9b48-d6a6135a4598" (UID: "52ea909f-1a30-4a49-9b48-d6a6135a4598"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:06:44 crc kubenswrapper[4874]: I0217 16:06:44.075333 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52ea909f-1a30-4a49-9b48-d6a6135a4598-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:44 crc kubenswrapper[4874]: I0217 16:06:44.075375 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52ea909f-1a30-4a49-9b48-d6a6135a4598-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:44 crc kubenswrapper[4874]: I0217 16:06:44.075388 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znn86\" (UniqueName: \"kubernetes.io/projected/52ea909f-1a30-4a49-9b48-d6a6135a4598-kube-api-access-znn86\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:44 crc kubenswrapper[4874]: I0217 16:06:44.466675 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73d464fc-2d1d-4a29-ae06-5d29503f6545" 
path="/var/lib/kubelet/pods/73d464fc-2d1d-4a29-ae06-5d29503f6545/volumes" Feb 17 16:06:44 crc kubenswrapper[4874]: I0217 16:06:44.467633 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="777c2139-1b69-4526-b1a0-537c84c3fc02" path="/var/lib/kubelet/pods/777c2139-1b69-4526-b1a0-537c84c3fc02/volumes" Feb 17 16:06:44 crc kubenswrapper[4874]: I0217 16:06:44.506023 4874 generic.go:334] "Generic (PLEG): container finished" podID="52ea909f-1a30-4a49-9b48-d6a6135a4598" containerID="d9c56fefbaba94252283ddd88e0aa3a5b199a7e7452670b2a228b69979696c4c" exitCode=0 Feb 17 16:06:44 crc kubenswrapper[4874]: I0217 16:06:44.506094 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sth6c" event={"ID":"52ea909f-1a30-4a49-9b48-d6a6135a4598","Type":"ContainerDied","Data":"d9c56fefbaba94252283ddd88e0aa3a5b199a7e7452670b2a228b69979696c4c"} Feb 17 16:06:44 crc kubenswrapper[4874]: I0217 16:06:44.506136 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sth6c" event={"ID":"52ea909f-1a30-4a49-9b48-d6a6135a4598","Type":"ContainerDied","Data":"3948dd621b20f6b5f483650b7f990cf99a360e0be4182fe92fd892d4e8fbfa21"} Feb 17 16:06:44 crc kubenswrapper[4874]: I0217 16:06:44.506158 4874 scope.go:117] "RemoveContainer" containerID="d9c56fefbaba94252283ddd88e0aa3a5b199a7e7452670b2a228b69979696c4c" Feb 17 16:06:44 crc kubenswrapper[4874]: I0217 16:06:44.506171 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sth6c" Feb 17 16:06:44 crc kubenswrapper[4874]: I0217 16:06:44.521236 4874 scope.go:117] "RemoveContainer" containerID="f1b68850562dcf5c39cfc4552c58731c104a96f8078d2fd86c356b0c0e23d584" Feb 17 16:06:44 crc kubenswrapper[4874]: I0217 16:06:44.533657 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sth6c"] Feb 17 16:06:44 crc kubenswrapper[4874]: I0217 16:06:44.537426 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sth6c"] Feb 17 16:06:44 crc kubenswrapper[4874]: I0217 16:06:44.544360 4874 scope.go:117] "RemoveContainer" containerID="98cadca8fafd9d2af88a577e6fd81c1f24421c7c291648718c16967389ec9325" Feb 17 16:06:44 crc kubenswrapper[4874]: I0217 16:06:44.560681 4874 scope.go:117] "RemoveContainer" containerID="d9c56fefbaba94252283ddd88e0aa3a5b199a7e7452670b2a228b69979696c4c" Feb 17 16:06:44 crc kubenswrapper[4874]: E0217 16:06:44.561041 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9c56fefbaba94252283ddd88e0aa3a5b199a7e7452670b2a228b69979696c4c\": container with ID starting with d9c56fefbaba94252283ddd88e0aa3a5b199a7e7452670b2a228b69979696c4c not found: ID does not exist" containerID="d9c56fefbaba94252283ddd88e0aa3a5b199a7e7452670b2a228b69979696c4c" Feb 17 16:06:44 crc kubenswrapper[4874]: I0217 16:06:44.561069 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9c56fefbaba94252283ddd88e0aa3a5b199a7e7452670b2a228b69979696c4c"} err="failed to get container status \"d9c56fefbaba94252283ddd88e0aa3a5b199a7e7452670b2a228b69979696c4c\": rpc error: code = NotFound desc = could not find container \"d9c56fefbaba94252283ddd88e0aa3a5b199a7e7452670b2a228b69979696c4c\": container with ID starting with d9c56fefbaba94252283ddd88e0aa3a5b199a7e7452670b2a228b69979696c4c not found: 
ID does not exist" Feb 17 16:06:44 crc kubenswrapper[4874]: I0217 16:06:44.561120 4874 scope.go:117] "RemoveContainer" containerID="f1b68850562dcf5c39cfc4552c58731c104a96f8078d2fd86c356b0c0e23d584" Feb 17 16:06:44 crc kubenswrapper[4874]: E0217 16:06:44.561416 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1b68850562dcf5c39cfc4552c58731c104a96f8078d2fd86c356b0c0e23d584\": container with ID starting with f1b68850562dcf5c39cfc4552c58731c104a96f8078d2fd86c356b0c0e23d584 not found: ID does not exist" containerID="f1b68850562dcf5c39cfc4552c58731c104a96f8078d2fd86c356b0c0e23d584" Feb 17 16:06:44 crc kubenswrapper[4874]: I0217 16:06:44.561458 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1b68850562dcf5c39cfc4552c58731c104a96f8078d2fd86c356b0c0e23d584"} err="failed to get container status \"f1b68850562dcf5c39cfc4552c58731c104a96f8078d2fd86c356b0c0e23d584\": rpc error: code = NotFound desc = could not find container \"f1b68850562dcf5c39cfc4552c58731c104a96f8078d2fd86c356b0c0e23d584\": container with ID starting with f1b68850562dcf5c39cfc4552c58731c104a96f8078d2fd86c356b0c0e23d584 not found: ID does not exist" Feb 17 16:06:44 crc kubenswrapper[4874]: I0217 16:06:44.561488 4874 scope.go:117] "RemoveContainer" containerID="98cadca8fafd9d2af88a577e6fd81c1f24421c7c291648718c16967389ec9325" Feb 17 16:06:44 crc kubenswrapper[4874]: E0217 16:06:44.562002 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98cadca8fafd9d2af88a577e6fd81c1f24421c7c291648718c16967389ec9325\": container with ID starting with 98cadca8fafd9d2af88a577e6fd81c1f24421c7c291648718c16967389ec9325 not found: ID does not exist" containerID="98cadca8fafd9d2af88a577e6fd81c1f24421c7c291648718c16967389ec9325" Feb 17 16:06:44 crc kubenswrapper[4874]: I0217 16:06:44.562028 4874 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98cadca8fafd9d2af88a577e6fd81c1f24421c7c291648718c16967389ec9325"} err="failed to get container status \"98cadca8fafd9d2af88a577e6fd81c1f24421c7c291648718c16967389ec9325\": rpc error: code = NotFound desc = could not find container \"98cadca8fafd9d2af88a577e6fd81c1f24421c7c291648718c16967389ec9325\": container with ID starting with 98cadca8fafd9d2af88a577e6fd81c1f24421c7c291648718c16967389ec9325 not found: ID does not exist" Feb 17 16:06:45 crc kubenswrapper[4874]: I0217 16:06:45.683142 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" podUID="45e5b8ae-4eef-4449-b844-574c3b737ad4" containerName="oauth-openshift" containerID="cri-o://f60c01603b08a4686820438f3bf1c32d6eadad5e3f94f11be66c131e08ba29c1" gracePeriod=15 Feb 17 16:06:45 crc kubenswrapper[4874]: I0217 16:06:45.890980 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bgnxq"] Feb 17 16:06:45 crc kubenswrapper[4874]: I0217 16:06:45.892190 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bgnxq" podUID="065c00cb-7ec7-428e-a10a-aaf6335d63e1" containerName="registry-server" containerID="cri-o://6ff55795353a19c28fb5bbce7f8422b7aeac9d45f1c756a8bd68cc5522a26200" gracePeriod=2 Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.166566 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.306601 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-router-certs\") pod \"45e5b8ae-4eef-4449-b844-574c3b737ad4\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.306684 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-user-idp-0-file-data\") pod \"45e5b8ae-4eef-4449-b844-574c3b737ad4\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.306733 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-user-template-error\") pod \"45e5b8ae-4eef-4449-b844-574c3b737ad4\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.306791 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-service-ca\") pod \"45e5b8ae-4eef-4449-b844-574c3b737ad4\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.306850 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/45e5b8ae-4eef-4449-b844-574c3b737ad4-audit-policies\") pod \"45e5b8ae-4eef-4449-b844-574c3b737ad4\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " Feb 17 16:06:46 
crc kubenswrapper[4874]: I0217 16:06:46.307518 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "45e5b8ae-4eef-4449-b844-574c3b737ad4" (UID: "45e5b8ae-4eef-4449-b844-574c3b737ad4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.307610 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-ocp-branding-template\") pod \"45e5b8ae-4eef-4449-b844-574c3b737ad4\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.307640 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-user-template-login\") pod \"45e5b8ae-4eef-4449-b844-574c3b737ad4\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.308166 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45e5b8ae-4eef-4449-b844-574c3b737ad4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "45e5b8ae-4eef-4449-b844-574c3b737ad4" (UID: "45e5b8ae-4eef-4449-b844-574c3b737ad4"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.308318 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-serving-cert\") pod \"45e5b8ae-4eef-4449-b844-574c3b737ad4\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.308383 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-cliconfig\") pod \"45e5b8ae-4eef-4449-b844-574c3b737ad4\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.308420 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-trusted-ca-bundle\") pod \"45e5b8ae-4eef-4449-b844-574c3b737ad4\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.308463 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfpkg\" (UniqueName: \"kubernetes.io/projected/45e5b8ae-4eef-4449-b844-574c3b737ad4-kube-api-access-mfpkg\") pod \"45e5b8ae-4eef-4449-b844-574c3b737ad4\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.308559 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-user-template-provider-selection\") pod \"45e5b8ae-4eef-4449-b844-574c3b737ad4\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " 
Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.308606 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-session\") pod \"45e5b8ae-4eef-4449-b844-574c3b737ad4\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.308643 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/45e5b8ae-4eef-4449-b844-574c3b737ad4-audit-dir\") pod \"45e5b8ae-4eef-4449-b844-574c3b737ad4\" (UID: \"45e5b8ae-4eef-4449-b844-574c3b737ad4\") " Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.309136 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "45e5b8ae-4eef-4449-b844-574c3b737ad4" (UID: "45e5b8ae-4eef-4449-b844-574c3b737ad4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.309269 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45e5b8ae-4eef-4449-b844-574c3b737ad4-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "45e5b8ae-4eef-4449-b844-574c3b737ad4" (UID: "45e5b8ae-4eef-4449-b844-574c3b737ad4"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.309262 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "45e5b8ae-4eef-4449-b844-574c3b737ad4" (UID: "45e5b8ae-4eef-4449-b844-574c3b737ad4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.309528 4874 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/45e5b8ae-4eef-4449-b844-574c3b737ad4-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.309626 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.309655 4874 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/45e5b8ae-4eef-4449-b844-574c3b737ad4-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.309674 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.309693 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:46 crc 
kubenswrapper[4874]: I0217 16:06:46.310387 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "45e5b8ae-4eef-4449-b844-574c3b737ad4" (UID: "45e5b8ae-4eef-4449-b844-574c3b737ad4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.310993 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "45e5b8ae-4eef-4449-b844-574c3b737ad4" (UID: "45e5b8ae-4eef-4449-b844-574c3b737ad4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.312037 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "45e5b8ae-4eef-4449-b844-574c3b737ad4" (UID: "45e5b8ae-4eef-4449-b844-574c3b737ad4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.312301 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "45e5b8ae-4eef-4449-b844-574c3b737ad4" (UID: "45e5b8ae-4eef-4449-b844-574c3b737ad4"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.314344 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "45e5b8ae-4eef-4449-b844-574c3b737ad4" (UID: "45e5b8ae-4eef-4449-b844-574c3b737ad4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.314498 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45e5b8ae-4eef-4449-b844-574c3b737ad4-kube-api-access-mfpkg" (OuterVolumeSpecName: "kube-api-access-mfpkg") pod "45e5b8ae-4eef-4449-b844-574c3b737ad4" (UID: "45e5b8ae-4eef-4449-b844-574c3b737ad4"). InnerVolumeSpecName "kube-api-access-mfpkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.314835 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "45e5b8ae-4eef-4449-b844-574c3b737ad4" (UID: "45e5b8ae-4eef-4449-b844-574c3b737ad4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.315142 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "45e5b8ae-4eef-4449-b844-574c3b737ad4" (UID: "45e5b8ae-4eef-4449-b844-574c3b737ad4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.319651 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "45e5b8ae-4eef-4449-b844-574c3b737ad4" (UID: "45e5b8ae-4eef-4449-b844-574c3b737ad4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.342849 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bgnxq" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.410725 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/065c00cb-7ec7-428e-a10a-aaf6335d63e1-utilities\") pod \"065c00cb-7ec7-428e-a10a-aaf6335d63e1\" (UID: \"065c00cb-7ec7-428e-a10a-aaf6335d63e1\") " Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.410911 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zgdq\" (UniqueName: \"kubernetes.io/projected/065c00cb-7ec7-428e-a10a-aaf6335d63e1-kube-api-access-7zgdq\") pod \"065c00cb-7ec7-428e-a10a-aaf6335d63e1\" (UID: \"065c00cb-7ec7-428e-a10a-aaf6335d63e1\") " Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.410990 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/065c00cb-7ec7-428e-a10a-aaf6335d63e1-catalog-content\") pod \"065c00cb-7ec7-428e-a10a-aaf6335d63e1\" (UID: \"065c00cb-7ec7-428e-a10a-aaf6335d63e1\") " Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.411331 4874 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.411401 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.411424 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.411445 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfpkg\" (UniqueName: \"kubernetes.io/projected/45e5b8ae-4eef-4449-b844-574c3b737ad4-kube-api-access-mfpkg\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.411464 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.411483 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.411500 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 17 
16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.411518 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.411537 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/45e5b8ae-4eef-4449-b844-574c3b737ad4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.411683 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/065c00cb-7ec7-428e-a10a-aaf6335d63e1-utilities" (OuterVolumeSpecName: "utilities") pod "065c00cb-7ec7-428e-a10a-aaf6335d63e1" (UID: "065c00cb-7ec7-428e-a10a-aaf6335d63e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.413341 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/065c00cb-7ec7-428e-a10a-aaf6335d63e1-kube-api-access-7zgdq" (OuterVolumeSpecName: "kube-api-access-7zgdq") pod "065c00cb-7ec7-428e-a10a-aaf6335d63e1" (UID: "065c00cb-7ec7-428e-a10a-aaf6335d63e1"). InnerVolumeSpecName "kube-api-access-7zgdq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.464948 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52ea909f-1a30-4a49-9b48-d6a6135a4598" path="/var/lib/kubelet/pods/52ea909f-1a30-4a49-9b48-d6a6135a4598/volumes" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.513464 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zgdq\" (UniqueName: \"kubernetes.io/projected/065c00cb-7ec7-428e-a10a-aaf6335d63e1-kube-api-access-7zgdq\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.513513 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/065c00cb-7ec7-428e-a10a-aaf6335d63e1-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.549723 4874 generic.go:334] "Generic (PLEG): container finished" podID="45e5b8ae-4eef-4449-b844-574c3b737ad4" containerID="f60c01603b08a4686820438f3bf1c32d6eadad5e3f94f11be66c131e08ba29c1" exitCode=0 Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.549983 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.550232 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" event={"ID":"45e5b8ae-4eef-4449-b844-574c3b737ad4","Type":"ContainerDied","Data":"f60c01603b08a4686820438f3bf1c32d6eadad5e3f94f11be66c131e08ba29c1"} Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.550314 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2kf8w" event={"ID":"45e5b8ae-4eef-4449-b844-574c3b737ad4","Type":"ContainerDied","Data":"457765ac7c16758373e820dd05ddad24dce18c202e85bd000d3b668c45e9ec34"} Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.550348 4874 scope.go:117] "RemoveContainer" containerID="f60c01603b08a4686820438f3bf1c32d6eadad5e3f94f11be66c131e08ba29c1" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.556646 4874 generic.go:334] "Generic (PLEG): container finished" podID="065c00cb-7ec7-428e-a10a-aaf6335d63e1" containerID="6ff55795353a19c28fb5bbce7f8422b7aeac9d45f1c756a8bd68cc5522a26200" exitCode=0 Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.556710 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bgnxq" event={"ID":"065c00cb-7ec7-428e-a10a-aaf6335d63e1","Type":"ContainerDied","Data":"6ff55795353a19c28fb5bbce7f8422b7aeac9d45f1c756a8bd68cc5522a26200"} Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.556762 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bgnxq" event={"ID":"065c00cb-7ec7-428e-a10a-aaf6335d63e1","Type":"ContainerDied","Data":"64c03702b0918678f163a9c1ca9d6c6cbc0c0aac33498c83a92689f777cd7e9b"} Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.556894 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bgnxq" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.565935 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/065c00cb-7ec7-428e-a10a-aaf6335d63e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "065c00cb-7ec7-428e-a10a-aaf6335d63e1" (UID: "065c00cb-7ec7-428e-a10a-aaf6335d63e1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.577709 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2kf8w"] Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.580323 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2kf8w"] Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.580521 4874 scope.go:117] "RemoveContainer" containerID="f60c01603b08a4686820438f3bf1c32d6eadad5e3f94f11be66c131e08ba29c1" Feb 17 16:06:46 crc kubenswrapper[4874]: E0217 16:06:46.581166 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f60c01603b08a4686820438f3bf1c32d6eadad5e3f94f11be66c131e08ba29c1\": container with ID starting with f60c01603b08a4686820438f3bf1c32d6eadad5e3f94f11be66c131e08ba29c1 not found: ID does not exist" containerID="f60c01603b08a4686820438f3bf1c32d6eadad5e3f94f11be66c131e08ba29c1" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.581202 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f60c01603b08a4686820438f3bf1c32d6eadad5e3f94f11be66c131e08ba29c1"} err="failed to get container status \"f60c01603b08a4686820438f3bf1c32d6eadad5e3f94f11be66c131e08ba29c1\": rpc error: code = NotFound desc = could not find container \"f60c01603b08a4686820438f3bf1c32d6eadad5e3f94f11be66c131e08ba29c1\": 
container with ID starting with f60c01603b08a4686820438f3bf1c32d6eadad5e3f94f11be66c131e08ba29c1 not found: ID does not exist" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.581223 4874 scope.go:117] "RemoveContainer" containerID="6ff55795353a19c28fb5bbce7f8422b7aeac9d45f1c756a8bd68cc5522a26200" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.602128 4874 scope.go:117] "RemoveContainer" containerID="25d8dc88cecf465a79af054bd0ba19a930952323fba1746779a0cec28fa1f783" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.615188 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/065c00cb-7ec7-428e-a10a-aaf6335d63e1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.622511 4874 scope.go:117] "RemoveContainer" containerID="686cfb44a33b1fc42e6e76fcd7515a48754a3fb03b9340da0efe9a8d1afbfd16" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.647246 4874 scope.go:117] "RemoveContainer" containerID="6ff55795353a19c28fb5bbce7f8422b7aeac9d45f1c756a8bd68cc5522a26200" Feb 17 16:06:46 crc kubenswrapper[4874]: E0217 16:06:46.647962 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ff55795353a19c28fb5bbce7f8422b7aeac9d45f1c756a8bd68cc5522a26200\": container with ID starting with 6ff55795353a19c28fb5bbce7f8422b7aeac9d45f1c756a8bd68cc5522a26200 not found: ID does not exist" containerID="6ff55795353a19c28fb5bbce7f8422b7aeac9d45f1c756a8bd68cc5522a26200" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.648003 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ff55795353a19c28fb5bbce7f8422b7aeac9d45f1c756a8bd68cc5522a26200"} err="failed to get container status \"6ff55795353a19c28fb5bbce7f8422b7aeac9d45f1c756a8bd68cc5522a26200\": rpc error: code = NotFound desc = could not find container 
\"6ff55795353a19c28fb5bbce7f8422b7aeac9d45f1c756a8bd68cc5522a26200\": container with ID starting with 6ff55795353a19c28fb5bbce7f8422b7aeac9d45f1c756a8bd68cc5522a26200 not found: ID does not exist" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.648044 4874 scope.go:117] "RemoveContainer" containerID="25d8dc88cecf465a79af054bd0ba19a930952323fba1746779a0cec28fa1f783" Feb 17 16:06:46 crc kubenswrapper[4874]: E0217 16:06:46.648439 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25d8dc88cecf465a79af054bd0ba19a930952323fba1746779a0cec28fa1f783\": container with ID starting with 25d8dc88cecf465a79af054bd0ba19a930952323fba1746779a0cec28fa1f783 not found: ID does not exist" containerID="25d8dc88cecf465a79af054bd0ba19a930952323fba1746779a0cec28fa1f783" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.648502 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25d8dc88cecf465a79af054bd0ba19a930952323fba1746779a0cec28fa1f783"} err="failed to get container status \"25d8dc88cecf465a79af054bd0ba19a930952323fba1746779a0cec28fa1f783\": rpc error: code = NotFound desc = could not find container \"25d8dc88cecf465a79af054bd0ba19a930952323fba1746779a0cec28fa1f783\": container with ID starting with 25d8dc88cecf465a79af054bd0ba19a930952323fba1746779a0cec28fa1f783 not found: ID does not exist" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.648537 4874 scope.go:117] "RemoveContainer" containerID="686cfb44a33b1fc42e6e76fcd7515a48754a3fb03b9340da0efe9a8d1afbfd16" Feb 17 16:06:46 crc kubenswrapper[4874]: E0217 16:06:46.648848 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"686cfb44a33b1fc42e6e76fcd7515a48754a3fb03b9340da0efe9a8d1afbfd16\": container with ID starting with 686cfb44a33b1fc42e6e76fcd7515a48754a3fb03b9340da0efe9a8d1afbfd16 not found: ID does not exist" 
containerID="686cfb44a33b1fc42e6e76fcd7515a48754a3fb03b9340da0efe9a8d1afbfd16" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.648880 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"686cfb44a33b1fc42e6e76fcd7515a48754a3fb03b9340da0efe9a8d1afbfd16"} err="failed to get container status \"686cfb44a33b1fc42e6e76fcd7515a48754a3fb03b9340da0efe9a8d1afbfd16\": rpc error: code = NotFound desc = could not find container \"686cfb44a33b1fc42e6e76fcd7515a48754a3fb03b9340da0efe9a8d1afbfd16\": container with ID starting with 686cfb44a33b1fc42e6e76fcd7515a48754a3fb03b9340da0efe9a8d1afbfd16 not found: ID does not exist" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.727313 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w"] Feb 17 16:06:46 crc kubenswrapper[4874]: E0217 16:06:46.727522 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52ea909f-1a30-4a49-9b48-d6a6135a4598" containerName="extract-utilities" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.727533 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="52ea909f-1a30-4a49-9b48-d6a6135a4598" containerName="extract-utilities" Feb 17 16:06:46 crc kubenswrapper[4874]: E0217 16:06:46.727542 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="065c00cb-7ec7-428e-a10a-aaf6335d63e1" containerName="extract-utilities" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.727548 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="065c00cb-7ec7-428e-a10a-aaf6335d63e1" containerName="extract-utilities" Feb 17 16:06:46 crc kubenswrapper[4874]: E0217 16:06:46.727555 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73d464fc-2d1d-4a29-ae06-5d29503f6545" containerName="registry-server" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.727562 4874 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="73d464fc-2d1d-4a29-ae06-5d29503f6545" containerName="registry-server" Feb 17 16:06:46 crc kubenswrapper[4874]: E0217 16:06:46.727572 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="777c2139-1b69-4526-b1a0-537c84c3fc02" containerName="extract-content" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.727579 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="777c2139-1b69-4526-b1a0-537c84c3fc02" containerName="extract-content" Feb 17 16:06:46 crc kubenswrapper[4874]: E0217 16:06:46.727587 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45e5b8ae-4eef-4449-b844-574c3b737ad4" containerName="oauth-openshift" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.727593 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="45e5b8ae-4eef-4449-b844-574c3b737ad4" containerName="oauth-openshift" Feb 17 16:06:46 crc kubenswrapper[4874]: E0217 16:06:46.727606 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52ea909f-1a30-4a49-9b48-d6a6135a4598" containerName="registry-server" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.727613 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="52ea909f-1a30-4a49-9b48-d6a6135a4598" containerName="registry-server" Feb 17 16:06:46 crc kubenswrapper[4874]: E0217 16:06:46.727620 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="777c2139-1b69-4526-b1a0-537c84c3fc02" containerName="extract-utilities" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.727626 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="777c2139-1b69-4526-b1a0-537c84c3fc02" containerName="extract-utilities" Feb 17 16:06:46 crc kubenswrapper[4874]: E0217 16:06:46.727633 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73d464fc-2d1d-4a29-ae06-5d29503f6545" containerName="extract-utilities" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.727639 4874 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="73d464fc-2d1d-4a29-ae06-5d29503f6545" containerName="extract-utilities" Feb 17 16:06:46 crc kubenswrapper[4874]: E0217 16:06:46.727647 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="065c00cb-7ec7-428e-a10a-aaf6335d63e1" containerName="extract-content" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.727653 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="065c00cb-7ec7-428e-a10a-aaf6335d63e1" containerName="extract-content" Feb 17 16:06:46 crc kubenswrapper[4874]: E0217 16:06:46.727660 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73d464fc-2d1d-4a29-ae06-5d29503f6545" containerName="extract-content" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.727680 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="73d464fc-2d1d-4a29-ae06-5d29503f6545" containerName="extract-content" Feb 17 16:06:46 crc kubenswrapper[4874]: E0217 16:06:46.727685 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="065c00cb-7ec7-428e-a10a-aaf6335d63e1" containerName="registry-server" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.727691 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="065c00cb-7ec7-428e-a10a-aaf6335d63e1" containerName="registry-server" Feb 17 16:06:46 crc kubenswrapper[4874]: E0217 16:06:46.727701 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="777c2139-1b69-4526-b1a0-537c84c3fc02" containerName="registry-server" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.727706 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="777c2139-1b69-4526-b1a0-537c84c3fc02" containerName="registry-server" Feb 17 16:06:46 crc kubenswrapper[4874]: E0217 16:06:46.727715 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52ea909f-1a30-4a49-9b48-d6a6135a4598" containerName="extract-content" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.727720 4874 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="52ea909f-1a30-4a49-9b48-d6a6135a4598" containerName="extract-content" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.727799 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="065c00cb-7ec7-428e-a10a-aaf6335d63e1" containerName="registry-server" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.727810 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="73d464fc-2d1d-4a29-ae06-5d29503f6545" containerName="registry-server" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.727817 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="45e5b8ae-4eef-4449-b844-574c3b737ad4" containerName="oauth-openshift" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.727824 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="777c2139-1b69-4526-b1a0-537c84c3fc02" containerName="registry-server" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.727832 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="52ea909f-1a30-4a49-9b48-d6a6135a4598" containerName="registry-server" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.728193 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.734393 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.734393 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.735849 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.735885 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.736244 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.736478 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.736817 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.737024 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.737478 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.737625 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 17 16:06:46 crc 
kubenswrapper[4874]: I0217 16:06:46.738132 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.738171 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.747810 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.755953 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w"] Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.759637 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.765648 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.818397 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-v4-0-config-system-session\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.818467 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: 
\"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.818546 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-v4-0-config-system-service-ca\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.818593 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.818628 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-v4-0-config-user-template-error\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.818663 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-audit-policies\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 
16:06:46.818701 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-v4-0-config-user-template-login\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.818749 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-audit-dir\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.818779 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.818813 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.818848 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-v4-0-config-system-router-certs\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.818887 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbktb\" (UniqueName: \"kubernetes.io/projected/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-kube-api-access-vbktb\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.818921 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.818957 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.886649 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bgnxq"] Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.888865 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bgnxq"] Feb 17 16:06:46 crc kubenswrapper[4874]: 
I0217 16:06:46.919959 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbktb\" (UniqueName: \"kubernetes.io/projected/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-kube-api-access-vbktb\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.920016 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.920042 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.920102 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-v4-0-config-system-session\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.920121 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.920803 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-v4-0-config-system-service-ca\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.920831 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.920852 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-v4-0-config-user-template-error\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.920880 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-audit-policies\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " 
pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.920904 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-v4-0-config-user-template-login\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.920925 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-audit-dir\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.920942 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.920959 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-v4-0-config-system-router-certs\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.920973 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.921199 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.921295 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-audit-dir\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.921650 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.922235 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-audit-policies\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.923330 
4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-v4-0-config-system-service-ca\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.924854 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.927599 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.928677 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-v4-0-config-user-template-login\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.935983 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.936380 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-v4-0-config-system-router-certs\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.937616 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-v4-0-config-user-template-error\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.938258 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-v4-0-config-system-session\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.938551 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 
16:06:46 crc kubenswrapper[4874]: I0217 16:06:46.940606 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbktb\" (UniqueName: \"kubernetes.io/projected/eee6f3d2-0884-48b5-89d7-98cfc97cb92b-kube-api-access-vbktb\") pod \"oauth-openshift-75c5cdcdb8-qks8w\" (UID: \"eee6f3d2-0884-48b5-89d7-98cfc97cb92b\") " pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:47 crc kubenswrapper[4874]: I0217 16:06:47.059005 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:47 crc kubenswrapper[4874]: I0217 16:06:47.478119 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w"] Feb 17 16:06:47 crc kubenswrapper[4874]: W0217 16:06:47.482627 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeee6f3d2_0884_48b5_89d7_98cfc97cb92b.slice/crio-c58c6bdcc0a86a92f3dee0a06d0a9ad09d94c485b36045f7f9f005e79c76da75 WatchSource:0}: Error finding container c58c6bdcc0a86a92f3dee0a06d0a9ad09d94c485b36045f7f9f005e79c76da75: Status 404 returned error can't find the container with id c58c6bdcc0a86a92f3dee0a06d0a9ad09d94c485b36045f7f9f005e79c76da75 Feb 17 16:06:47 crc kubenswrapper[4874]: I0217 16:06:47.563254 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" event={"ID":"eee6f3d2-0884-48b5-89d7-98cfc97cb92b","Type":"ContainerStarted","Data":"c58c6bdcc0a86a92f3dee0a06d0a9ad09d94c485b36045f7f9f005e79c76da75"} Feb 17 16:06:48 crc kubenswrapper[4874]: I0217 16:06:48.472581 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="065c00cb-7ec7-428e-a10a-aaf6335d63e1" path="/var/lib/kubelet/pods/065c00cb-7ec7-428e-a10a-aaf6335d63e1/volumes" Feb 17 16:06:48 crc kubenswrapper[4874]: I0217 16:06:48.474599 4874 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45e5b8ae-4eef-4449-b844-574c3b737ad4" path="/var/lib/kubelet/pods/45e5b8ae-4eef-4449-b844-574c3b737ad4/volumes" Feb 17 16:06:48 crc kubenswrapper[4874]: I0217 16:06:48.571516 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" event={"ID":"eee6f3d2-0884-48b5-89d7-98cfc97cb92b","Type":"ContainerStarted","Data":"4d944310daf405cfd0f1643e4301989224cf37fa21c66d958c5303f6bd367a94"} Feb 17 16:06:48 crc kubenswrapper[4874]: I0217 16:06:48.571980 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:48 crc kubenswrapper[4874]: I0217 16:06:48.578104 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" Feb 17 16:06:48 crc kubenswrapper[4874]: I0217 16:06:48.594249 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-75c5cdcdb8-qks8w" podStartSLOduration=28.594231143000002 podStartE2EDuration="28.594231143s" podCreationTimestamp="2026-02-17 16:06:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:06:48.589994749 +0000 UTC m=+218.884383310" watchObservedRunningTime="2026-02-17 16:06:48.594231143 +0000 UTC m=+218.888619724" Feb 17 16:06:57 crc kubenswrapper[4874]: I0217 16:06:57.724187 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:06:57 crc kubenswrapper[4874]: I0217 16:06:57.725360 4874 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:06:57 crc kubenswrapper[4874]: I0217 16:06:57.725444 4874 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" Feb 17 16:06:57 crc kubenswrapper[4874]: I0217 16:06:57.726189 4874 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5a56172a01d8a118c7b8ed0bfcef586738d47583c8b6127b67e6f4aaedeba141"} pod="openshift-machine-config-operator/machine-config-daemon-cccdg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:06:57 crc kubenswrapper[4874]: I0217 16:06:57.726316 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" containerID="cri-o://5a56172a01d8a118c7b8ed0bfcef586738d47583c8b6127b67e6f4aaedeba141" gracePeriod=600 Feb 17 16:06:58 crc kubenswrapper[4874]: I0217 16:06:58.856689 4874 generic.go:334] "Generic (PLEG): container finished" podID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerID="5a56172a01d8a118c7b8ed0bfcef586738d47583c8b6127b67e6f4aaedeba141" exitCode=0 Feb 17 16:06:58 crc kubenswrapper[4874]: I0217 16:06:58.856805 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerDied","Data":"5a56172a01d8a118c7b8ed0bfcef586738d47583c8b6127b67e6f4aaedeba141"} Feb 17 16:06:58 crc kubenswrapper[4874]: I0217 16:06:58.857378 4874 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerStarted","Data":"bcc2c8959b8db77cff7d7da089aecb1677ca5a83553a4d055beb1eed3bde2fdd"} Feb 17 16:07:07 crc kubenswrapper[4874]: I0217 16:07:07.969510 4874 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 17 16:07:07 crc kubenswrapper[4874]: I0217 16:07:07.970771 4874 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 16:07:07 crc kubenswrapper[4874]: I0217 16:07:07.970901 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 16:07:07 crc kubenswrapper[4874]: I0217 16:07:07.971257 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd" gracePeriod=15 Feb 17 16:07:07 crc kubenswrapper[4874]: I0217 16:07:07.971298 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a" gracePeriod=15 Feb 17 16:07:07 crc kubenswrapper[4874]: I0217 16:07:07.971351 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732" gracePeriod=15 Feb 17 16:07:07 crc kubenswrapper[4874]: I0217 
16:07:07.971357 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a" gracePeriod=15 Feb 17 16:07:07 crc kubenswrapper[4874]: I0217 16:07:07.971198 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681" gracePeriod=15 Feb 17 16:07:07 crc kubenswrapper[4874]: I0217 16:07:07.971580 4874 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 16:07:07 crc kubenswrapper[4874]: E0217 16:07:07.971743 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 16:07:07 crc kubenswrapper[4874]: I0217 16:07:07.971761 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 16:07:07 crc kubenswrapper[4874]: E0217 16:07:07.971773 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 17 16:07:07 crc kubenswrapper[4874]: I0217 16:07:07.971781 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 17 16:07:07 crc kubenswrapper[4874]: E0217 16:07:07.971792 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 17 16:07:07 crc kubenswrapper[4874]: I0217 16:07:07.971801 4874 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 17 16:07:07 crc kubenswrapper[4874]: E0217 16:07:07.971810 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 17 16:07:07 crc kubenswrapper[4874]: I0217 16:07:07.971818 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 17 16:07:07 crc kubenswrapper[4874]: E0217 16:07:07.971828 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 17 16:07:07 crc kubenswrapper[4874]: I0217 16:07:07.971836 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 17 16:07:07 crc kubenswrapper[4874]: E0217 16:07:07.971849 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 16:07:07 crc kubenswrapper[4874]: I0217 16:07:07.971858 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 16:07:07 crc kubenswrapper[4874]: E0217 16:07:07.971870 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 16:07:07 crc kubenswrapper[4874]: I0217 16:07:07.971879 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 16:07:07 crc kubenswrapper[4874]: I0217 16:07:07.972001 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 16:07:07 crc kubenswrapper[4874]: I0217 
16:07:07.972015 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 16:07:07 crc kubenswrapper[4874]: I0217 16:07:07.972027 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 17 16:07:07 crc kubenswrapper[4874]: I0217 16:07:07.972037 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 17 16:07:07 crc kubenswrapper[4874]: I0217 16:07:07.972047 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 16:07:07 crc kubenswrapper[4874]: I0217 16:07:07.972060 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 17 16:07:08 crc kubenswrapper[4874]: I0217 16:07:08.081710 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 16:07:08 crc kubenswrapper[4874]: I0217 16:07:08.081935 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 16:07:08 crc kubenswrapper[4874]: I0217 16:07:08.081990 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 16:07:08 crc kubenswrapper[4874]: I0217 16:07:08.082116 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:07:08 crc kubenswrapper[4874]: I0217 16:07:08.082262 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:07:08 crc kubenswrapper[4874]: I0217 16:07:08.082319 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 16:07:08 crc kubenswrapper[4874]: I0217 16:07:08.082359 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 16:07:08 crc kubenswrapper[4874]: I0217 16:07:08.082406 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:07:08 crc kubenswrapper[4874]: I0217 16:07:08.183945 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:07:08 crc kubenswrapper[4874]: I0217 16:07:08.183997 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 16:07:08 crc kubenswrapper[4874]: I0217 16:07:08.184016 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 16:07:08 crc kubenswrapper[4874]: I0217 16:07:08.184034 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:07:08 crc kubenswrapper[4874]: I0217 16:07:08.184052 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" 
(UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 16:07:08 crc kubenswrapper[4874]: I0217 16:07:08.184106 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 16:07:08 crc kubenswrapper[4874]: I0217 16:07:08.184120 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 16:07:08 crc kubenswrapper[4874]: I0217 16:07:08.184145 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:07:08 crc kubenswrapper[4874]: I0217 16:07:08.184156 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 16:07:08 crc kubenswrapper[4874]: I0217 16:07:08.184191 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:07:08 crc kubenswrapper[4874]: I0217 16:07:08.184222 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 16:07:08 crc kubenswrapper[4874]: I0217 16:07:08.184226 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 16:07:08 crc kubenswrapper[4874]: I0217 16:07:08.184203 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:07:08 crc kubenswrapper[4874]: I0217 16:07:08.184239 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 16:07:08 crc kubenswrapper[4874]: I0217 16:07:08.184191 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:07:08 crc kubenswrapper[4874]: I0217 16:07:08.184206 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 16:07:08 crc kubenswrapper[4874]: I0217 16:07:08.910861 4874 generic.go:334] "Generic (PLEG): container finished" podID="3c11461e-921f-46b7-ba51-5299829f22f1" containerID="049d05057fcf817d859b3aa4a0081c86eb24a82a9fec7de7c5ce24e7ab464798" exitCode=0 Feb 17 16:07:08 crc kubenswrapper[4874]: I0217 16:07:08.911136 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3c11461e-921f-46b7-ba51-5299829f22f1","Type":"ContainerDied","Data":"049d05057fcf817d859b3aa4a0081c86eb24a82a9fec7de7c5ce24e7ab464798"} Feb 17 16:07:08 crc kubenswrapper[4874]: I0217 16:07:08.911816 4874 status_manager.go:851] "Failed to get status for pod" podUID="3c11461e-921f-46b7-ba51-5299829f22f1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 17 16:07:08 crc kubenswrapper[4874]: I0217 16:07:08.913712 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 17 16:07:08 crc kubenswrapper[4874]: I0217 16:07:08.915238 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 16:07:08 crc kubenswrapper[4874]: I0217 16:07:08.915996 4874 
generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732" exitCode=0 Feb 17 16:07:08 crc kubenswrapper[4874]: I0217 16:07:08.916024 4874 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd" exitCode=0 Feb 17 16:07:08 crc kubenswrapper[4874]: I0217 16:07:08.916035 4874 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a" exitCode=0 Feb 17 16:07:08 crc kubenswrapper[4874]: I0217 16:07:08.916044 4874 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a" exitCode=2 Feb 17 16:07:08 crc kubenswrapper[4874]: I0217 16:07:08.916086 4874 scope.go:117] "RemoveContainer" containerID="41bcb2ddda9f4b932cd7a1557090b0bf5612897da416857e36a55e0a9abc4cab" Feb 17 16:07:09 crc kubenswrapper[4874]: I0217 16:07:09.925903 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 16:07:10 crc kubenswrapper[4874]: I0217 16:07:10.281815 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 16:07:10 crc kubenswrapper[4874]: I0217 16:07:10.282860 4874 status_manager.go:851] "Failed to get status for pod" podUID="3c11461e-921f-46b7-ba51-5299829f22f1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 17 16:07:10 crc kubenswrapper[4874]: I0217 16:07:10.419489 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c11461e-921f-46b7-ba51-5299829f22f1-kube-api-access\") pod \"3c11461e-921f-46b7-ba51-5299829f22f1\" (UID: \"3c11461e-921f-46b7-ba51-5299829f22f1\") " Feb 17 16:07:10 crc kubenswrapper[4874]: I0217 16:07:10.419791 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c11461e-921f-46b7-ba51-5299829f22f1-kubelet-dir\") pod \"3c11461e-921f-46b7-ba51-5299829f22f1\" (UID: \"3c11461e-921f-46b7-ba51-5299829f22f1\") " Feb 17 16:07:10 crc kubenswrapper[4874]: I0217 16:07:10.419814 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3c11461e-921f-46b7-ba51-5299829f22f1-var-lock\") pod \"3c11461e-921f-46b7-ba51-5299829f22f1\" (UID: \"3c11461e-921f-46b7-ba51-5299829f22f1\") " Feb 17 16:07:10 crc kubenswrapper[4874]: I0217 16:07:10.420122 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c11461e-921f-46b7-ba51-5299829f22f1-var-lock" (OuterVolumeSpecName: "var-lock") pod "3c11461e-921f-46b7-ba51-5299829f22f1" (UID: "3c11461e-921f-46b7-ba51-5299829f22f1"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:07:10 crc kubenswrapper[4874]: I0217 16:07:10.420317 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c11461e-921f-46b7-ba51-5299829f22f1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3c11461e-921f-46b7-ba51-5299829f22f1" (UID: "3c11461e-921f-46b7-ba51-5299829f22f1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:07:10 crc kubenswrapper[4874]: I0217 16:07:10.436858 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c11461e-921f-46b7-ba51-5299829f22f1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3c11461e-921f-46b7-ba51-5299829f22f1" (UID: "3c11461e-921f-46b7-ba51-5299829f22f1"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:07:10 crc kubenswrapper[4874]: I0217 16:07:10.467176 4874 status_manager.go:851] "Failed to get status for pod" podUID="3c11461e-921f-46b7-ba51-5299829f22f1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 17 16:07:10 crc kubenswrapper[4874]: I0217 16:07:10.521050 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c11461e-921f-46b7-ba51-5299829f22f1-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:10 crc kubenswrapper[4874]: I0217 16:07:10.521096 4874 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c11461e-921f-46b7-ba51-5299829f22f1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:10 crc kubenswrapper[4874]: I0217 16:07:10.521107 4874 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/3c11461e-921f-46b7-ba51-5299829f22f1-var-lock\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:10 crc kubenswrapper[4874]: E0217 16:07:10.633876 4874 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 17 16:07:10 crc kubenswrapper[4874]: E0217 16:07:10.634232 4874 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 17 16:07:10 crc kubenswrapper[4874]: E0217 16:07:10.634567 4874 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 17 16:07:10 crc kubenswrapper[4874]: E0217 16:07:10.635105 4874 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 17 16:07:10 crc kubenswrapper[4874]: E0217 16:07:10.635625 4874 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 17 16:07:10 crc kubenswrapper[4874]: I0217 16:07:10.635697 4874 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 17 16:07:10 crc kubenswrapper[4874]: E0217 16:07:10.636123 4874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="200ms" Feb 17 16:07:10 crc kubenswrapper[4874]: E0217 16:07:10.837583 4874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="400ms" Feb 17 16:07:10 crc kubenswrapper[4874]: I0217 16:07:10.847360 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 16:07:10 crc kubenswrapper[4874]: I0217 16:07:10.848208 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:07:10 crc kubenswrapper[4874]: I0217 16:07:10.848865 4874 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 17 16:07:10 crc kubenswrapper[4874]: I0217 16:07:10.849244 4874 status_manager.go:851] "Failed to get status for pod" podUID="3c11461e-921f-46b7-ba51-5299829f22f1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 17 16:07:10 crc kubenswrapper[4874]: I0217 16:07:10.933343 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"3c11461e-921f-46b7-ba51-5299829f22f1","Type":"ContainerDied","Data":"becc8ad735f165959f7ea9a191d9a0b0afcfebca46bd0523791b99e9617b5623"} Feb 17 16:07:10 crc kubenswrapper[4874]: I0217 16:07:10.933373 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 16:07:10 crc kubenswrapper[4874]: I0217 16:07:10.933392 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="becc8ad735f165959f7ea9a191d9a0b0afcfebca46bd0523791b99e9617b5623" Feb 17 16:07:10 crc kubenswrapper[4874]: I0217 16:07:10.936256 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 16:07:10 crc kubenswrapper[4874]: I0217 16:07:10.937054 4874 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681" exitCode=0 Feb 17 16:07:10 crc kubenswrapper[4874]: I0217 16:07:10.937127 4874 scope.go:117] "RemoveContainer" containerID="beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732" Feb 17 16:07:10 crc kubenswrapper[4874]: I0217 16:07:10.937140 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:07:10 crc kubenswrapper[4874]: I0217 16:07:10.937146 4874 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 17 16:07:10 crc kubenswrapper[4874]: I0217 16:07:10.937487 4874 status_manager.go:851] "Failed to get status for pod" podUID="3c11461e-921f-46b7-ba51-5299829f22f1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 17 16:07:10 crc kubenswrapper[4874]: I0217 16:07:10.951943 4874 scope.go:117] "RemoveContainer" containerID="ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd" Feb 17 16:07:10 crc kubenswrapper[4874]: I0217 16:07:10.964858 4874 scope.go:117] "RemoveContainer" containerID="695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a" Feb 17 16:07:10 crc kubenswrapper[4874]: I0217 16:07:10.978579 4874 scope.go:117] "RemoveContainer" containerID="f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a" Feb 17 16:07:10 crc kubenswrapper[4874]: I0217 16:07:10.994964 4874 scope.go:117] "RemoveContainer" containerID="e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681" Feb 17 16:07:11 crc kubenswrapper[4874]: I0217 16:07:11.010694 4874 scope.go:117] "RemoveContainer" containerID="574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3" Feb 17 16:07:11 crc kubenswrapper[4874]: I0217 16:07:11.026667 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod 
\"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 17 16:07:11 crc kubenswrapper[4874]: I0217 16:07:11.026787 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:07:11 crc kubenswrapper[4874]: I0217 16:07:11.026823 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 17 16:07:11 crc kubenswrapper[4874]: I0217 16:07:11.026916 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 17 16:07:11 crc kubenswrapper[4874]: I0217 16:07:11.026931 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:07:11 crc kubenswrapper[4874]: I0217 16:07:11.027045 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:07:11 crc kubenswrapper[4874]: I0217 16:07:11.027470 4874 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:11 crc kubenswrapper[4874]: I0217 16:07:11.027508 4874 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:11 crc kubenswrapper[4874]: I0217 16:07:11.027522 4874 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:11 crc kubenswrapper[4874]: I0217 16:07:11.028715 4874 scope.go:117] "RemoveContainer" containerID="beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732" Feb 17 16:07:11 crc kubenswrapper[4874]: E0217 16:07:11.029129 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732\": container with ID starting with beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732 not found: ID does not exist" containerID="beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732" Feb 17 16:07:11 crc kubenswrapper[4874]: I0217 16:07:11.029173 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732"} err="failed to get container status \"beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732\": rpc error: code = NotFound desc = could not find container \"beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732\": container with ID starting with 
beb2046114c1d47a7c776a2880f3f2d6c648fef49cdd794502d42da30f1bb732 not found: ID does not exist" Feb 17 16:07:11 crc kubenswrapper[4874]: I0217 16:07:11.029205 4874 scope.go:117] "RemoveContainer" containerID="ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd" Feb 17 16:07:11 crc kubenswrapper[4874]: E0217 16:07:11.029532 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd\": container with ID starting with ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd not found: ID does not exist" containerID="ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd" Feb 17 16:07:11 crc kubenswrapper[4874]: I0217 16:07:11.029567 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd"} err="failed to get container status \"ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd\": rpc error: code = NotFound desc = could not find container \"ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd\": container with ID starting with ef276c9a361e06c2254517d556b1f82079a3c4fa28cf37b5c640544a086c39bd not found: ID does not exist" Feb 17 16:07:11 crc kubenswrapper[4874]: I0217 16:07:11.029594 4874 scope.go:117] "RemoveContainer" containerID="695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a" Feb 17 16:07:11 crc kubenswrapper[4874]: E0217 16:07:11.029867 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a\": container with ID starting with 695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a not found: ID does not exist" containerID="695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a" Feb 17 16:07:11 crc 
kubenswrapper[4874]: I0217 16:07:11.029903 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a"} err="failed to get container status \"695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a\": rpc error: code = NotFound desc = could not find container \"695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a\": container with ID starting with 695507b0b601634647741bde352604f585f4025fafffd2f4b2b7fccf4524c88a not found: ID does not exist" Feb 17 16:07:11 crc kubenswrapper[4874]: I0217 16:07:11.029922 4874 scope.go:117] "RemoveContainer" containerID="f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a" Feb 17 16:07:11 crc kubenswrapper[4874]: E0217 16:07:11.030430 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a\": container with ID starting with f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a not found: ID does not exist" containerID="f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a" Feb 17 16:07:11 crc kubenswrapper[4874]: I0217 16:07:11.030461 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a"} err="failed to get container status \"f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a\": rpc error: code = NotFound desc = could not find container \"f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a\": container with ID starting with f2f1c712673559e9dd3684670a2a86566cdd07bbfbb6ce07da3471530e6bf03a not found: ID does not exist" Feb 17 16:07:11 crc kubenswrapper[4874]: I0217 16:07:11.030480 4874 scope.go:117] "RemoveContainer" containerID="e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681" Feb 17 
16:07:11 crc kubenswrapper[4874]: E0217 16:07:11.030784 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681\": container with ID starting with e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681 not found: ID does not exist" containerID="e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681" Feb 17 16:07:11 crc kubenswrapper[4874]: I0217 16:07:11.031354 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681"} err="failed to get container status \"e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681\": rpc error: code = NotFound desc = could not find container \"e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681\": container with ID starting with e74a377088fb5830611061e87cf0ac8be716dc93e7b08ef9234de3f18bdea681 not found: ID does not exist" Feb 17 16:07:11 crc kubenswrapper[4874]: I0217 16:07:11.031405 4874 scope.go:117] "RemoveContainer" containerID="574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3" Feb 17 16:07:11 crc kubenswrapper[4874]: E0217 16:07:11.031734 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\": container with ID starting with 574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3 not found: ID does not exist" containerID="574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3" Feb 17 16:07:11 crc kubenswrapper[4874]: I0217 16:07:11.031768 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3"} err="failed to get container status 
\"574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\": rpc error: code = NotFound desc = could not find container \"574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3\": container with ID starting with 574141d712970bbfb3b6869a0611c90b7bd2bb8c7e658ff24c544c6fe10078f3 not found: ID does not exist" Feb 17 16:07:11 crc kubenswrapper[4874]: E0217 16:07:11.238556 4874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="800ms" Feb 17 16:07:11 crc kubenswrapper[4874]: I0217 16:07:11.262866 4874 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 17 16:07:11 crc kubenswrapper[4874]: I0217 16:07:11.263168 4874 status_manager.go:851] "Failed to get status for pod" podUID="3c11461e-921f-46b7-ba51-5299829f22f1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 17 16:07:12 crc kubenswrapper[4874]: E0217 16:07:12.039986 4874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="1.6s" Feb 17 16:07:12 crc kubenswrapper[4874]: I0217 16:07:12.467691 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 
17 16:07:12 crc kubenswrapper[4874]: E0217 16:07:12.678141 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod3c11461e_921f_46b7_ba51_5299829f22f1.slice\": RecentStats: unable to find data in memory cache]" Feb 17 16:07:13 crc kubenswrapper[4874]: E0217 16:07:13.022864 4874 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.73:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 16:07:13 crc kubenswrapper[4874]: I0217 16:07:13.023342 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 16:07:13 crc kubenswrapper[4874]: W0217 16:07:13.045935 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-aed1aa84c4bb065b3a362821e07ad546a1c8a3b3b64972bd934e058ce87ba059 WatchSource:0}: Error finding container aed1aa84c4bb065b3a362821e07ad546a1c8a3b3b64972bd934e058ce87ba059: Status 404 returned error can't find the container with id aed1aa84c4bb065b3a362821e07ad546a1c8a3b3b64972bd934e058ce87ba059 Feb 17 16:07:13 crc kubenswrapper[4874]: E0217 16:07:13.048590 4874 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.73:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189514607ab18d88 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 16:07:13.0480306 +0000 UTC m=+243.342419161,LastTimestamp:2026-02-17 16:07:13.0480306 +0000 UTC m=+243.342419161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 16:07:13 crc kubenswrapper[4874]: E0217 16:07:13.640925 4874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="3.2s" Feb 17 16:07:13 crc kubenswrapper[4874]: I0217 16:07:13.955326 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"46619813d8c0e364a264dee257930a02c8dc7afe50c326b8eb6b01e50427dc55"} Feb 17 16:07:13 crc kubenswrapper[4874]: I0217 16:07:13.955377 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"aed1aa84c4bb065b3a362821e07ad546a1c8a3b3b64972bd934e058ce87ba059"} Feb 17 16:07:13 crc kubenswrapper[4874]: E0217 16:07:13.955906 4874 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 
38.102.83.73:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 16:07:13 crc kubenswrapper[4874]: I0217 16:07:13.955995 4874 status_manager.go:851] "Failed to get status for pod" podUID="3c11461e-921f-46b7-ba51-5299829f22f1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 17 16:07:16 crc kubenswrapper[4874]: E0217 16:07:16.476935 4874 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.73:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189514607ab18d88 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 16:07:13.0480306 +0000 UTC m=+243.342419161,LastTimestamp:2026-02-17 16:07:13.0480306 +0000 UTC m=+243.342419161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 16:07:16 crc kubenswrapper[4874]: E0217 16:07:16.843259 4874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: 
connection refused" interval="6.4s" Feb 17 16:07:20 crc kubenswrapper[4874]: I0217 16:07:20.459177 4874 status_manager.go:851] "Failed to get status for pod" podUID="3c11461e-921f-46b7-ba51-5299829f22f1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 17 16:07:22 crc kubenswrapper[4874]: I0217 16:07:22.456999 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:07:22 crc kubenswrapper[4874]: I0217 16:07:22.457928 4874 status_manager.go:851] "Failed to get status for pod" podUID="3c11461e-921f-46b7-ba51-5299829f22f1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 17 16:07:22 crc kubenswrapper[4874]: I0217 16:07:22.479332 4874 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8b56fa93-1e5d-4786-a935-dd3c1c945e91" Feb 17 16:07:22 crc kubenswrapper[4874]: I0217 16:07:22.479369 4874 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8b56fa93-1e5d-4786-a935-dd3c1c945e91" Feb 17 16:07:22 crc kubenswrapper[4874]: E0217 16:07:22.480023 4874 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:07:22 crc kubenswrapper[4874]: I0217 16:07:22.480756 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:07:22 crc kubenswrapper[4874]: I0217 16:07:22.536328 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c5904a0af93a4f1458004ccc7b83b319250edf46504f926a913473fdcd7b60ba"} Feb 17 16:07:22 crc kubenswrapper[4874]: I0217 16:07:22.539700 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 17 16:07:22 crc kubenswrapper[4874]: I0217 16:07:22.539771 4874 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="03cdd03d2129e6d94c8faed3b95a896c4dfe25c6b6f7e61d1027ae973c341f34" exitCode=1 Feb 17 16:07:22 crc kubenswrapper[4874]: I0217 16:07:22.539813 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"03cdd03d2129e6d94c8faed3b95a896c4dfe25c6b6f7e61d1027ae973c341f34"} Feb 17 16:07:22 crc kubenswrapper[4874]: I0217 16:07:22.540520 4874 scope.go:117] "RemoveContainer" containerID="03cdd03d2129e6d94c8faed3b95a896c4dfe25c6b6f7e61d1027ae973c341f34" Feb 17 16:07:22 crc kubenswrapper[4874]: I0217 16:07:22.540707 4874 status_manager.go:851] "Failed to get status for pod" podUID="3c11461e-921f-46b7-ba51-5299829f22f1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 17 16:07:22 crc kubenswrapper[4874]: I0217 16:07:22.541314 4874 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 17 16:07:22 crc kubenswrapper[4874]: E0217 16:07:22.795544 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod3c11461e_921f_46b7_ba51_5299829f22f1.slice\": RecentStats: unable to find data in memory cache]" Feb 17 16:07:23 crc kubenswrapper[4874]: E0217 16:07:23.244647 4874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="7s" Feb 17 16:07:23 crc kubenswrapper[4874]: I0217 16:07:23.546883 4874 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="5b9c9ceab610999603c7c6b06439a737722f8374748b0e974e866acd41c4d0ed" exitCode=0 Feb 17 16:07:23 crc kubenswrapper[4874]: I0217 16:07:23.546939 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"5b9c9ceab610999603c7c6b06439a737722f8374748b0e974e866acd41c4d0ed"} Feb 17 16:07:23 crc kubenswrapper[4874]: I0217 16:07:23.547239 4874 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8b56fa93-1e5d-4786-a935-dd3c1c945e91" Feb 17 16:07:23 crc kubenswrapper[4874]: I0217 16:07:23.547263 4874 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8b56fa93-1e5d-4786-a935-dd3c1c945e91" Feb 17 16:07:23 crc kubenswrapper[4874]: E0217 16:07:23.547562 4874 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:07:23 crc kubenswrapper[4874]: I0217 16:07:23.547612 4874 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 17 16:07:23 crc kubenswrapper[4874]: I0217 16:07:23.548056 4874 status_manager.go:851] "Failed to get status for pod" podUID="3c11461e-921f-46b7-ba51-5299829f22f1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 17 16:07:23 crc kubenswrapper[4874]: I0217 16:07:23.559648 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 17 16:07:23 crc kubenswrapper[4874]: I0217 16:07:23.559706 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6223f167f7efacda6236f6d76d38957412c08b2429fcedf58f7837a23e226037"} Feb 17 16:07:23 crc kubenswrapper[4874]: I0217 16:07:23.560487 4874 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.73:6443: 
connect: connection refused" Feb 17 16:07:23 crc kubenswrapper[4874]: I0217 16:07:23.560990 4874 status_manager.go:851] "Failed to get status for pod" podUID="3c11461e-921f-46b7-ba51-5299829f22f1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 17 16:07:24 crc kubenswrapper[4874]: I0217 16:07:24.570489 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a50e68b3d461e1bda7767a8a03d467601407282a2c259cef3571115b68eb3031"} Feb 17 16:07:24 crc kubenswrapper[4874]: I0217 16:07:24.570857 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2b62ecb9891f0ac207b3a028e076929f7a87658bd1503ad9c5ba0741eab35e75"} Feb 17 16:07:24 crc kubenswrapper[4874]: I0217 16:07:24.570877 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5282bd3f365fae2f92060b77d92b80cd72587c0dfd23f8817f1e6098b5685dcf"} Feb 17 16:07:24 crc kubenswrapper[4874]: I0217 16:07:24.859151 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 16:07:25 crc kubenswrapper[4874]: I0217 16:07:25.579450 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"423ed0cd6c50ccbf1da441ef9d23b996aec81b1615e2b8fbebf12f1c07d27f73"} Feb 17 16:07:25 crc kubenswrapper[4874]: I0217 16:07:25.579505 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"320141f6539a9de5cb5baccb7fa8eb3e44bbd940f8cf55191f6a9385ef188caa"} Feb 17 16:07:25 crc kubenswrapper[4874]: I0217 16:07:25.579857 4874 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8b56fa93-1e5d-4786-a935-dd3c1c945e91" Feb 17 16:07:25 crc kubenswrapper[4874]: I0217 16:07:25.579886 4874 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8b56fa93-1e5d-4786-a935-dd3c1c945e91" Feb 17 16:07:25 crc kubenswrapper[4874]: I0217 16:07:25.580228 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:07:27 crc kubenswrapper[4874]: I0217 16:07:27.480935 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:07:27 crc kubenswrapper[4874]: I0217 16:07:27.481383 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:07:27 crc kubenswrapper[4874]: I0217 16:07:27.489706 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:07:30 crc kubenswrapper[4874]: I0217 16:07:30.593441 4874 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:07:30 crc kubenswrapper[4874]: I0217 16:07:30.628520 4874 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8b56fa93-1e5d-4786-a935-dd3c1c945e91" Feb 17 16:07:30 crc kubenswrapper[4874]: I0217 16:07:30.628559 4874 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8b56fa93-1e5d-4786-a935-dd3c1c945e91" Feb 17 16:07:30 crc kubenswrapper[4874]: I0217 
16:07:30.635716 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:07:30 crc kubenswrapper[4874]: I0217 16:07:30.676242 4874 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f801ffea-07d2-4a6d-8be6-f091f81d7cc9" Feb 17 16:07:31 crc kubenswrapper[4874]: I0217 16:07:31.634580 4874 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8b56fa93-1e5d-4786-a935-dd3c1c945e91" Feb 17 16:07:31 crc kubenswrapper[4874]: I0217 16:07:31.634638 4874 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8b56fa93-1e5d-4786-a935-dd3c1c945e91" Feb 17 16:07:31 crc kubenswrapper[4874]: I0217 16:07:31.637305 4874 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f801ffea-07d2-4a6d-8be6-f091f81d7cc9" Feb 17 16:07:32 crc kubenswrapper[4874]: I0217 16:07:32.026110 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 16:07:32 crc kubenswrapper[4874]: I0217 16:07:32.034473 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 16:07:32 crc kubenswrapper[4874]: I0217 16:07:32.672165 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 16:07:32 crc kubenswrapper[4874]: E0217 16:07:32.932197 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-pod3c11461e_921f_46b7_ba51_5299829f22f1.slice\": RecentStats: unable to find data in memory cache]" Feb 17 16:07:39 crc kubenswrapper[4874]: I0217 16:07:39.819140 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 17 16:07:39 crc kubenswrapper[4874]: I0217 16:07:39.839248 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 17 16:07:39 crc kubenswrapper[4874]: I0217 16:07:39.885030 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 17 16:07:40 crc kubenswrapper[4874]: I0217 16:07:40.066253 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 17 16:07:40 crc kubenswrapper[4874]: I0217 16:07:40.195665 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 17 16:07:40 crc kubenswrapper[4874]: I0217 16:07:40.657494 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 17 16:07:40 crc kubenswrapper[4874]: I0217 16:07:40.951230 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 17 16:07:41 crc kubenswrapper[4874]: I0217 16:07:41.446223 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 17 16:07:41 crc kubenswrapper[4874]: I0217 16:07:41.491344 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 17 16:07:41 crc kubenswrapper[4874]: I0217 16:07:41.617206 4874 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 17 16:07:41 crc kubenswrapper[4874]: I0217 16:07:41.954190 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 17 16:07:42 crc kubenswrapper[4874]: I0217 16:07:42.053053 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 17 16:07:42 crc kubenswrapper[4874]: I0217 16:07:42.292155 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 17 16:07:42 crc kubenswrapper[4874]: I0217 16:07:42.489288 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 17 16:07:42 crc kubenswrapper[4874]: I0217 16:07:42.592574 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 16:07:42 crc kubenswrapper[4874]: I0217 16:07:42.700626 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 17 16:07:42 crc kubenswrapper[4874]: I0217 16:07:42.764680 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 17 16:07:42 crc kubenswrapper[4874]: I0217 16:07:42.795920 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 17 16:07:42 crc kubenswrapper[4874]: I0217 16:07:42.839347 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 17 16:07:42 crc kubenswrapper[4874]: I0217 16:07:42.930162 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 17 16:07:43 crc kubenswrapper[4874]: I0217 
16:07:43.010774 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 17 16:07:43 crc kubenswrapper[4874]: E0217 16:07:43.069403 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod3c11461e_921f_46b7_ba51_5299829f22f1.slice\": RecentStats: unable to find data in memory cache]" Feb 17 16:07:43 crc kubenswrapper[4874]: I0217 16:07:43.108888 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 17 16:07:43 crc kubenswrapper[4874]: I0217 16:07:43.323197 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 17 16:07:43 crc kubenswrapper[4874]: I0217 16:07:43.336551 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 17 16:07:43 crc kubenswrapper[4874]: I0217 16:07:43.401563 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 17 16:07:43 crc kubenswrapper[4874]: I0217 16:07:43.497446 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 17 16:07:43 crc kubenswrapper[4874]: I0217 16:07:43.563783 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 17 16:07:43 crc kubenswrapper[4874]: I0217 16:07:43.618270 4874 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 17 16:07:43 crc kubenswrapper[4874]: I0217 16:07:43.619926 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 17 16:07:43 crc kubenswrapper[4874]: I0217 16:07:43.623255 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 16:07:43 crc kubenswrapper[4874]: I0217 16:07:43.623320 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 16:07:43 crc kubenswrapper[4874]: I0217 16:07:43.623345 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jqfd6","openshift-marketplace/certified-operators-xtlqz","openshift-marketplace/redhat-marketplace-v8fn8","openshift-marketplace/community-operators-kdr2g","openshift-marketplace/marketplace-operator-79b997595-2w9mt"] Feb 17 16:07:43 crc kubenswrapper[4874]: I0217 16:07:43.623747 4874 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8b56fa93-1e5d-4786-a935-dd3c1c945e91" Feb 17 16:07:43 crc kubenswrapper[4874]: I0217 16:07:43.623781 4874 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8b56fa93-1e5d-4786-a935-dd3c1c945e91" Feb 17 16:07:43 crc kubenswrapper[4874]: I0217 16:07:43.623862 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xtlqz" podUID="28be448a-a2cb-4731-85fa-ec01026d5763" containerName="registry-server" containerID="cri-o://4d5af2bfd9bee7f08d882406c041affaaaf9ff1fdac2beceab3f67614c4ce19a" gracePeriod=30 Feb 17 16:07:43 crc kubenswrapper[4874]: I0217 16:07:43.624168 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jqfd6" podUID="19397da4-8b1f-4ec8-969c-2856e64112fc" containerName="registry-server" containerID="cri-o://c89756710e2ee5671bec2aa80f0b2ec84ae91844345cb4c7b752c53fb9ad0a8e" gracePeriod=30 Feb 17 16:07:43 crc kubenswrapper[4874]: I0217 16:07:43.624335 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-v8fn8" podUID="cfc01af4-cec4-4d66-b673-ac10e1797059" 
containerName="registry-server" containerID="cri-o://ead35cbcc3b00d503e8db4945db4ec37c2ced571ce78f6dcce00c853db3f1e19" gracePeriod=30 Feb 17 16:07:43 crc kubenswrapper[4874]: I0217 16:07:43.624477 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-2w9mt" podUID="6c21c3a4-9603-4cd0-a5e3-263aa51d678d" containerName="marketplace-operator" containerID="cri-o://422caac02feae3768bfeba3e03c8bc3434c85fbc6702d1411cf7bab933e432c7" gracePeriod=30 Feb 17 16:07:43 crc kubenswrapper[4874]: I0217 16:07:43.624565 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kdr2g" podUID="52637c6d-7cd3-4761-b70d-4e07e68a6c5d" containerName="registry-server" containerID="cri-o://b6d43dd1e78a034d756068b939b3582ab72eb7cfa2d37eb0e3597c8674a10c71" gracePeriod=30 Feb 17 16:07:43 crc kubenswrapper[4874]: I0217 16:07:43.654661 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 17 16:07:43 crc kubenswrapper[4874]: I0217 16:07:43.724624 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 17 16:07:43 crc kubenswrapper[4874]: I0217 16:07:43.741737 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:07:43 crc kubenswrapper[4874]: I0217 16:07:43.766990 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=13.7669673 podStartE2EDuration="13.7669673s" podCreationTimestamp="2026-02-17 16:07:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:07:43.651483627 +0000 UTC m=+273.945872198" watchObservedRunningTime="2026-02-17 16:07:43.7669673 
+0000 UTC m=+274.061355891" Feb 17 16:07:43 crc kubenswrapper[4874]: I0217 16:07:43.785372 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 17 16:07:43 crc kubenswrapper[4874]: I0217 16:07:43.829790 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 17 16:07:43 crc kubenswrapper[4874]: I0217 16:07:43.874478 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 17 16:07:43 crc kubenswrapper[4874]: I0217 16:07:43.999910 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.028906 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.038086 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.122091 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.136974 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.197088 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.199763 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.367093 4874 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.528736 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.533068 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.572922 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.576314 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.576392 4874 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.638781 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2w9mt" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.668194 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.670129 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.698611 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c21c3a4-9603-4cd0-a5e3-263aa51d678d-marketplace-trusted-ca\") pod \"6c21c3a4-9603-4cd0-a5e3-263aa51d678d\" (UID: \"6c21c3a4-9603-4cd0-a5e3-263aa51d678d\") " Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.698667 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gskds\" (UniqueName: \"kubernetes.io/projected/6c21c3a4-9603-4cd0-a5e3-263aa51d678d-kube-api-access-gskds\") pod \"6c21c3a4-9603-4cd0-a5e3-263aa51d678d\" (UID: \"6c21c3a4-9603-4cd0-a5e3-263aa51d678d\") " Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.698874 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6c21c3a4-9603-4cd0-a5e3-263aa51d678d-marketplace-operator-metrics\") pod \"6c21c3a4-9603-4cd0-a5e3-263aa51d678d\" (UID: \"6c21c3a4-9603-4cd0-a5e3-263aa51d678d\") " Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.699843 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c21c3a4-9603-4cd0-a5e3-263aa51d678d-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "6c21c3a4-9603-4cd0-a5e3-263aa51d678d" (UID: "6c21c3a4-9603-4cd0-a5e3-263aa51d678d"). 
InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.702552 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.708030 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c21c3a4-9603-4cd0-a5e3-263aa51d678d-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "6c21c3a4-9603-4cd0-a5e3-263aa51d678d" (UID: "6c21c3a4-9603-4cd0-a5e3-263aa51d678d"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.708673 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c21c3a4-9603-4cd0-a5e3-263aa51d678d-kube-api-access-gskds" (OuterVolumeSpecName: "kube-api-access-gskds") pod "6c21c3a4-9603-4cd0-a5e3-263aa51d678d" (UID: "6c21c3a4-9603-4cd0-a5e3-263aa51d678d"). InnerVolumeSpecName "kube-api-access-gskds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.728265 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.742503 4874 generic.go:334] "Generic (PLEG): container finished" podID="28be448a-a2cb-4731-85fa-ec01026d5763" containerID="4d5af2bfd9bee7f08d882406c041affaaaf9ff1fdac2beceab3f67614c4ce19a" exitCode=0 Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.742562 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xtlqz" event={"ID":"28be448a-a2cb-4731-85fa-ec01026d5763","Type":"ContainerDied","Data":"4d5af2bfd9bee7f08d882406c041affaaaf9ff1fdac2beceab3f67614c4ce19a"} Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.744474 4874 generic.go:334] "Generic (PLEG): container finished" podID="cfc01af4-cec4-4d66-b673-ac10e1797059" containerID="ead35cbcc3b00d503e8db4945db4ec37c2ced571ce78f6dcce00c853db3f1e19" exitCode=0 Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.744517 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8fn8" event={"ID":"cfc01af4-cec4-4d66-b673-ac10e1797059","Type":"ContainerDied","Data":"ead35cbcc3b00d503e8db4945db4ec37c2ced571ce78f6dcce00c853db3f1e19"} Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.746460 4874 generic.go:334] "Generic (PLEG): container finished" podID="52637c6d-7cd3-4761-b70d-4e07e68a6c5d" containerID="b6d43dd1e78a034d756068b939b3582ab72eb7cfa2d37eb0e3597c8674a10c71" exitCode=0 Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.746494 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kdr2g" event={"ID":"52637c6d-7cd3-4761-b70d-4e07e68a6c5d","Type":"ContainerDied","Data":"b6d43dd1e78a034d756068b939b3582ab72eb7cfa2d37eb0e3597c8674a10c71"} Feb 17 16:07:44 crc 
kubenswrapper[4874]: I0217 16:07:44.749147 4874 generic.go:334] "Generic (PLEG): container finished" podID="19397da4-8b1f-4ec8-969c-2856e64112fc" containerID="c89756710e2ee5671bec2aa80f0b2ec84ae91844345cb4c7b752c53fb9ad0a8e" exitCode=0 Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.749198 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jqfd6" event={"ID":"19397da4-8b1f-4ec8-969c-2856e64112fc","Type":"ContainerDied","Data":"c89756710e2ee5671bec2aa80f0b2ec84ae91844345cb4c7b752c53fb9ad0a8e"} Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.762540 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.763425 4874 generic.go:334] "Generic (PLEG): container finished" podID="6c21c3a4-9603-4cd0-a5e3-263aa51d678d" containerID="422caac02feae3768bfeba3e03c8bc3434c85fbc6702d1411cf7bab933e432c7" exitCode=0 Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.764291 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2w9mt" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.764381 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2w9mt" event={"ID":"6c21c3a4-9603-4cd0-a5e3-263aa51d678d","Type":"ContainerDied","Data":"422caac02feae3768bfeba3e03c8bc3434c85fbc6702d1411cf7bab933e432c7"} Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.764448 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2w9mt" event={"ID":"6c21c3a4-9603-4cd0-a5e3-263aa51d678d","Type":"ContainerDied","Data":"6a23b3ab0783415a98358155cec6285294e8fc21608480b693895b2eab18a251"} Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.764480 4874 scope.go:117] "RemoveContainer" containerID="422caac02feae3768bfeba3e03c8bc3434c85fbc6702d1411cf7bab933e432c7" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.768715 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.793884 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jqfd6" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.794557 4874 scope.go:117] "RemoveContainer" containerID="422caac02feae3768bfeba3e03c8bc3434c85fbc6702d1411cf7bab933e432c7" Feb 17 16:07:44 crc kubenswrapper[4874]: E0217 16:07:44.794942 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"422caac02feae3768bfeba3e03c8bc3434c85fbc6702d1411cf7bab933e432c7\": container with ID starting with 422caac02feae3768bfeba3e03c8bc3434c85fbc6702d1411cf7bab933e432c7 not found: ID does not exist" containerID="422caac02feae3768bfeba3e03c8bc3434c85fbc6702d1411cf7bab933e432c7" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.794975 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"422caac02feae3768bfeba3e03c8bc3434c85fbc6702d1411cf7bab933e432c7"} err="failed to get container status \"422caac02feae3768bfeba3e03c8bc3434c85fbc6702d1411cf7bab933e432c7\": rpc error: code = NotFound desc = could not find container \"422caac02feae3768bfeba3e03c8bc3434c85fbc6702d1411cf7bab933e432c7\": container with ID starting with 422caac02feae3768bfeba3e03c8bc3434c85fbc6702d1411cf7bab933e432c7 not found: ID does not exist" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.802128 4874 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c21c3a4-9603-4cd0-a5e3-263aa51d678d-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.802158 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gskds\" (UniqueName: \"kubernetes.io/projected/6c21c3a4-9603-4cd0-a5e3-263aa51d678d-kube-api-access-gskds\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.802171 4874 reconciler_common.go:293] "Volume detached for 
volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6c21c3a4-9603-4cd0-a5e3-263aa51d678d-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.804162 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2w9mt"] Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.807823 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2w9mt"] Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.809441 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v8fn8" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.836900 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kdr2g" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.839885 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xtlqz" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.850730 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.881416 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.887103 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.903479 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52637c6d-7cd3-4761-b70d-4e07e68a6c5d-utilities\") pod \"52637c6d-7cd3-4761-b70d-4e07e68a6c5d\" (UID: \"52637c6d-7cd3-4761-b70d-4e07e68a6c5d\") " Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.903560 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfc01af4-cec4-4d66-b673-ac10e1797059-utilities\") pod \"cfc01af4-cec4-4d66-b673-ac10e1797059\" (UID: \"cfc01af4-cec4-4d66-b673-ac10e1797059\") " Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.903597 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wknqt\" (UniqueName: \"kubernetes.io/projected/28be448a-a2cb-4731-85fa-ec01026d5763-kube-api-access-wknqt\") pod \"28be448a-a2cb-4731-85fa-ec01026d5763\" (UID: \"28be448a-a2cb-4731-85fa-ec01026d5763\") " Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.903624 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8glj\" (UniqueName: \"kubernetes.io/projected/19397da4-8b1f-4ec8-969c-2856e64112fc-kube-api-access-z8glj\") pod 
\"19397da4-8b1f-4ec8-969c-2856e64112fc\" (UID: \"19397da4-8b1f-4ec8-969c-2856e64112fc\") " Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.903653 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxjbz\" (UniqueName: \"kubernetes.io/projected/52637c6d-7cd3-4761-b70d-4e07e68a6c5d-kube-api-access-fxjbz\") pod \"52637c6d-7cd3-4761-b70d-4e07e68a6c5d\" (UID: \"52637c6d-7cd3-4761-b70d-4e07e68a6c5d\") " Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.903713 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52637c6d-7cd3-4761-b70d-4e07e68a6c5d-catalog-content\") pod \"52637c6d-7cd3-4761-b70d-4e07e68a6c5d\" (UID: \"52637c6d-7cd3-4761-b70d-4e07e68a6c5d\") " Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.903749 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgkq2\" (UniqueName: \"kubernetes.io/projected/cfc01af4-cec4-4d66-b673-ac10e1797059-kube-api-access-rgkq2\") pod \"cfc01af4-cec4-4d66-b673-ac10e1797059\" (UID: \"cfc01af4-cec4-4d66-b673-ac10e1797059\") " Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.903781 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19397da4-8b1f-4ec8-969c-2856e64112fc-utilities\") pod \"19397da4-8b1f-4ec8-969c-2856e64112fc\" (UID: \"19397da4-8b1f-4ec8-969c-2856e64112fc\") " Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.903819 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19397da4-8b1f-4ec8-969c-2856e64112fc-catalog-content\") pod \"19397da4-8b1f-4ec8-969c-2856e64112fc\" (UID: \"19397da4-8b1f-4ec8-969c-2856e64112fc\") " Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.903852 4874 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfc01af4-cec4-4d66-b673-ac10e1797059-catalog-content\") pod \"cfc01af4-cec4-4d66-b673-ac10e1797059\" (UID: \"cfc01af4-cec4-4d66-b673-ac10e1797059\") " Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.903973 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28be448a-a2cb-4731-85fa-ec01026d5763-catalog-content\") pod \"28be448a-a2cb-4731-85fa-ec01026d5763\" (UID: \"28be448a-a2cb-4731-85fa-ec01026d5763\") " Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.904006 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28be448a-a2cb-4731-85fa-ec01026d5763-utilities\") pod \"28be448a-a2cb-4731-85fa-ec01026d5763\" (UID: \"28be448a-a2cb-4731-85fa-ec01026d5763\") " Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.905250 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfc01af4-cec4-4d66-b673-ac10e1797059-utilities" (OuterVolumeSpecName: "utilities") pod "cfc01af4-cec4-4d66-b673-ac10e1797059" (UID: "cfc01af4-cec4-4d66-b673-ac10e1797059"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.905306 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28be448a-a2cb-4731-85fa-ec01026d5763-utilities" (OuterVolumeSpecName: "utilities") pod "28be448a-a2cb-4731-85fa-ec01026d5763" (UID: "28be448a-a2cb-4731-85fa-ec01026d5763"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.905534 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52637c6d-7cd3-4761-b70d-4e07e68a6c5d-utilities" (OuterVolumeSpecName: "utilities") pod "52637c6d-7cd3-4761-b70d-4e07e68a6c5d" (UID: "52637c6d-7cd3-4761-b70d-4e07e68a6c5d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.908248 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28be448a-a2cb-4731-85fa-ec01026d5763-kube-api-access-wknqt" (OuterVolumeSpecName: "kube-api-access-wknqt") pod "28be448a-a2cb-4731-85fa-ec01026d5763" (UID: "28be448a-a2cb-4731-85fa-ec01026d5763"). InnerVolumeSpecName "kube-api-access-wknqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.909103 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19397da4-8b1f-4ec8-969c-2856e64112fc-kube-api-access-z8glj" (OuterVolumeSpecName: "kube-api-access-z8glj") pod "19397da4-8b1f-4ec8-969c-2856e64112fc" (UID: "19397da4-8b1f-4ec8-969c-2856e64112fc"). InnerVolumeSpecName "kube-api-access-z8glj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.917500 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52637c6d-7cd3-4761-b70d-4e07e68a6c5d-kube-api-access-fxjbz" (OuterVolumeSpecName: "kube-api-access-fxjbz") pod "52637c6d-7cd3-4761-b70d-4e07e68a6c5d" (UID: "52637c6d-7cd3-4761-b70d-4e07e68a6c5d"). InnerVolumeSpecName "kube-api-access-fxjbz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.920021 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfc01af4-cec4-4d66-b673-ac10e1797059-kube-api-access-rgkq2" (OuterVolumeSpecName: "kube-api-access-rgkq2") pod "cfc01af4-cec4-4d66-b673-ac10e1797059" (UID: "cfc01af4-cec4-4d66-b673-ac10e1797059"). InnerVolumeSpecName "kube-api-access-rgkq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.924902 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19397da4-8b1f-4ec8-969c-2856e64112fc-utilities" (OuterVolumeSpecName: "utilities") pod "19397da4-8b1f-4ec8-969c-2856e64112fc" (UID: "19397da4-8b1f-4ec8-969c-2856e64112fc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.930052 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfc01af4-cec4-4d66-b673-ac10e1797059-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cfc01af4-cec4-4d66-b673-ac10e1797059" (UID: "cfc01af4-cec4-4d66-b673-ac10e1797059"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.958519 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28be448a-a2cb-4731-85fa-ec01026d5763-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "28be448a-a2cb-4731-85fa-ec01026d5763" (UID: "28be448a-a2cb-4731-85fa-ec01026d5763"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.968657 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 17 16:07:44 crc kubenswrapper[4874]: I0217 16:07:44.975228 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52637c6d-7cd3-4761-b70d-4e07e68a6c5d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "52637c6d-7cd3-4761-b70d-4e07e68a6c5d" (UID: "52637c6d-7cd3-4761-b70d-4e07e68a6c5d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.006168 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52637c6d-7cd3-4761-b70d-4e07e68a6c5d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.006551 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgkq2\" (UniqueName: \"kubernetes.io/projected/cfc01af4-cec4-4d66-b673-ac10e1797059-kube-api-access-rgkq2\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.006575 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19397da4-8b1f-4ec8-969c-2856e64112fc-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.006594 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfc01af4-cec4-4d66-b673-ac10e1797059-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.006611 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/28be448a-a2cb-4731-85fa-ec01026d5763-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.006628 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28be448a-a2cb-4731-85fa-ec01026d5763-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.006645 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52637c6d-7cd3-4761-b70d-4e07e68a6c5d-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.006662 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfc01af4-cec4-4d66-b673-ac10e1797059-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.006678 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wknqt\" (UniqueName: \"kubernetes.io/projected/28be448a-a2cb-4731-85fa-ec01026d5763-kube-api-access-wknqt\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.006694 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8glj\" (UniqueName: \"kubernetes.io/projected/19397da4-8b1f-4ec8-969c-2856e64112fc-kube-api-access-z8glj\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.006711 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxjbz\" (UniqueName: \"kubernetes.io/projected/52637c6d-7cd3-4761-b70d-4e07e68a6c5d-kube-api-access-fxjbz\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.041303 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 
16:07:45.048037 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19397da4-8b1f-4ec8-969c-2856e64112fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19397da4-8b1f-4ec8-969c-2856e64112fc" (UID: "19397da4-8b1f-4ec8-969c-2856e64112fc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.107889 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19397da4-8b1f-4ec8-969c-2856e64112fc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.109468 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.170168 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.201021 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.252891 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.255119 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.259147 4874 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.348860 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.401439 4874 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.429486 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.474407 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.496362 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.583221 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.675408 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.710024 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.715948 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.735146 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.773206 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xtlqz" event={"ID":"28be448a-a2cb-4731-85fa-ec01026d5763","Type":"ContainerDied","Data":"2eacc1d7c9e3d11a678218e7067e1cda64da8dc57155b32c97e9c4b2992d6451"} Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.773274 4874 scope.go:117] "RemoveContainer" 
containerID="4d5af2bfd9bee7f08d882406c041affaaaf9ff1fdac2beceab3f67614c4ce19a" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.773307 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xtlqz" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.777323 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8fn8" event={"ID":"cfc01af4-cec4-4d66-b673-ac10e1797059","Type":"ContainerDied","Data":"906651edcdef0a0a3de0ee8d2b27872818537fecbe7e864e8e0e95ae201a8a23"} Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.777347 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v8fn8" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.780615 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kdr2g" event={"ID":"52637c6d-7cd3-4761-b70d-4e07e68a6c5d","Type":"ContainerDied","Data":"2c23c35d40e4242af8cea880ea1b0e3a6af8722d659934702e9a0deb157c233d"} Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.780696 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kdr2g" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.784442 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jqfd6" event={"ID":"19397da4-8b1f-4ec8-969c-2856e64112fc","Type":"ContainerDied","Data":"a7a9883ca17a3726d68d3d6125b805d52ff0d5a5c046f96000ec05186d3e0d94"} Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.784555 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jqfd6" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.796956 4874 scope.go:117] "RemoveContainer" containerID="9eca76966b8ffd2698c60064256c8742a2a3349e89271df95a4420d27c617862" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.810420 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.828375 4874 scope.go:117] "RemoveContainer" containerID="c9603ba3e4a2c6990d1af368653c0ee89e0587c608b7e93821b53856fd4cf9e3" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.866113 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xtlqz"] Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.870622 4874 scope.go:117] "RemoveContainer" containerID="ead35cbcc3b00d503e8db4945db4ec37c2ced571ce78f6dcce00c853db3f1e19" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.872180 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xtlqz"] Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.879964 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jqfd6"] Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.887682 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jqfd6"] Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.889561 4874 scope.go:117] "RemoveContainer" containerID="deead2b5fc206fb55b2908a4e55c1d803f9e3da51e804bb155e261a53188f981" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.893220 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v8fn8"] Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.901212 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-v8fn8"] Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.907855 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kdr2g"] Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.907889 4874 scope.go:117] "RemoveContainer" containerID="9e3e7317a8c0865a88b2978f6d72be1278e0604a1330bb6d994c41fd881517de" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.914891 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kdr2g"] Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.922517 4874 scope.go:117] "RemoveContainer" containerID="b6d43dd1e78a034d756068b939b3582ab72eb7cfa2d37eb0e3597c8674a10c71" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.933224 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.939853 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.945262 4874 scope.go:117] "RemoveContainer" containerID="72885fee2c84856c40c7d3fba597566c6aff0abdae36ea092e648364e7243850" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.958449 4874 scope.go:117] "RemoveContainer" containerID="f64930d4beeef1982ac81c1558c1cde325beeaf272a4b69bdc2e49072b553bbf" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.977925 4874 scope.go:117] "RemoveContainer" containerID="c89756710e2ee5671bec2aa80f0b2ec84ae91844345cb4c7b752c53fb9ad0a8e" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.978741 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 17 16:07:45 crc kubenswrapper[4874]: I0217 16:07:45.991007 4874 scope.go:117] "RemoveContainer" 
containerID="5d50358589fe7734aa15e177b695a772b38a152699e2e276924f312680c9ce2f" Feb 17 16:07:46 crc kubenswrapper[4874]: I0217 16:07:46.007469 4874 scope.go:117] "RemoveContainer" containerID="611bb3726f9ac8d4d0996006783e1c03244e9b6c3e42e579bbc8646e3bf29f27" Feb 17 16:07:46 crc kubenswrapper[4874]: I0217 16:07:46.099925 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 17 16:07:46 crc kubenswrapper[4874]: I0217 16:07:46.174910 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 17 16:07:46 crc kubenswrapper[4874]: I0217 16:07:46.308727 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 17 16:07:46 crc kubenswrapper[4874]: I0217 16:07:46.400512 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 17 16:07:46 crc kubenswrapper[4874]: I0217 16:07:46.440408 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 17 16:07:46 crc kubenswrapper[4874]: I0217 16:07:46.468630 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19397da4-8b1f-4ec8-969c-2856e64112fc" path="/var/lib/kubelet/pods/19397da4-8b1f-4ec8-969c-2856e64112fc/volumes" Feb 17 16:07:46 crc kubenswrapper[4874]: I0217 16:07:46.470334 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28be448a-a2cb-4731-85fa-ec01026d5763" path="/var/lib/kubelet/pods/28be448a-a2cb-4731-85fa-ec01026d5763/volumes" Feb 17 16:07:46 crc kubenswrapper[4874]: I0217 16:07:46.471800 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52637c6d-7cd3-4761-b70d-4e07e68a6c5d" path="/var/lib/kubelet/pods/52637c6d-7cd3-4761-b70d-4e07e68a6c5d/volumes" Feb 17 16:07:46 crc kubenswrapper[4874]: I0217 
16:07:46.473974 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c21c3a4-9603-4cd0-a5e3-263aa51d678d" path="/var/lib/kubelet/pods/6c21c3a4-9603-4cd0-a5e3-263aa51d678d/volumes" Feb 17 16:07:46 crc kubenswrapper[4874]: I0217 16:07:46.476892 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfc01af4-cec4-4d66-b673-ac10e1797059" path="/var/lib/kubelet/pods/cfc01af4-cec4-4d66-b673-ac10e1797059/volumes" Feb 17 16:07:46 crc kubenswrapper[4874]: I0217 16:07:46.618491 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 17 16:07:46 crc kubenswrapper[4874]: I0217 16:07:46.747201 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 17 16:07:46 crc kubenswrapper[4874]: I0217 16:07:46.751100 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 16:07:46 crc kubenswrapper[4874]: I0217 16:07:46.772107 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 17 16:07:46 crc kubenswrapper[4874]: I0217 16:07:46.772926 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 17 16:07:46 crc kubenswrapper[4874]: I0217 16:07:46.913714 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 17 16:07:46 crc kubenswrapper[4874]: I0217 16:07:46.927882 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 17 16:07:46 crc kubenswrapper[4874]: I0217 16:07:46.939308 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 17 16:07:46 crc kubenswrapper[4874]: I0217 
16:07:46.987721 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 17 16:07:47 crc kubenswrapper[4874]: I0217 16:07:47.144210 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 17 16:07:47 crc kubenswrapper[4874]: I0217 16:07:47.218148 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 17 16:07:47 crc kubenswrapper[4874]: I0217 16:07:47.449890 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 17 16:07:47 crc kubenswrapper[4874]: I0217 16:07:47.468915 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 17 16:07:47 crc kubenswrapper[4874]: I0217 16:07:47.497411 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 17 16:07:47 crc kubenswrapper[4874]: I0217 16:07:47.556324 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 17 16:07:47 crc kubenswrapper[4874]: I0217 16:07:47.560722 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 16:07:47 crc kubenswrapper[4874]: I0217 16:07:47.595914 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 17 16:07:47 crc kubenswrapper[4874]: I0217 16:07:47.695211 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 17 16:07:47 crc kubenswrapper[4874]: I0217 16:07:47.817427 4874 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 17 16:07:47 crc kubenswrapper[4874]: I0217 16:07:47.879470 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 17 16:07:47 crc kubenswrapper[4874]: I0217 16:07:47.904858 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 17 16:07:47 crc kubenswrapper[4874]: I0217 16:07:47.937103 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 17 16:07:47 crc kubenswrapper[4874]: I0217 16:07:47.954973 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 17 16:07:48 crc kubenswrapper[4874]: I0217 16:07:48.014531 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 17 16:07:48 crc kubenswrapper[4874]: I0217 16:07:48.025656 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 16:07:48 crc kubenswrapper[4874]: I0217 16:07:48.062018 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 17 16:07:48 crc kubenswrapper[4874]: I0217 16:07:48.062521 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 17 16:07:48 crc kubenswrapper[4874]: I0217 16:07:48.109501 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 17 16:07:48 crc kubenswrapper[4874]: I0217 16:07:48.255310 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 17 16:07:48 crc kubenswrapper[4874]: I0217 
16:07:48.279978 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 17 16:07:48 crc kubenswrapper[4874]: I0217 16:07:48.315070 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 17 16:07:48 crc kubenswrapper[4874]: I0217 16:07:48.321895 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 17 16:07:48 crc kubenswrapper[4874]: I0217 16:07:48.335751 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 17 16:07:48 crc kubenswrapper[4874]: I0217 16:07:48.341309 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 17 16:07:48 crc kubenswrapper[4874]: I0217 16:07:48.376174 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 17 16:07:48 crc kubenswrapper[4874]: I0217 16:07:48.395280 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 17 16:07:48 crc kubenswrapper[4874]: I0217 16:07:48.401747 4874 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 17 16:07:48 crc kubenswrapper[4874]: I0217 16:07:48.496861 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 17 16:07:48 crc kubenswrapper[4874]: I0217 16:07:48.498279 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 17 16:07:48 crc kubenswrapper[4874]: I0217 16:07:48.513828 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 17 
16:07:48 crc kubenswrapper[4874]: I0217 16:07:48.577261 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 17 16:07:48 crc kubenswrapper[4874]: I0217 16:07:48.577765 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 17 16:07:48 crc kubenswrapper[4874]: I0217 16:07:48.726912 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 17 16:07:48 crc kubenswrapper[4874]: I0217 16:07:48.744767 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 17 16:07:48 crc kubenswrapper[4874]: I0217 16:07:48.784430 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 17 16:07:48 crc kubenswrapper[4874]: I0217 16:07:48.852421 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 17 16:07:48 crc kubenswrapper[4874]: I0217 16:07:48.853464 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 17 16:07:48 crc kubenswrapper[4874]: I0217 16:07:48.858958 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 17 16:07:48 crc kubenswrapper[4874]: I0217 16:07:48.866443 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 17 16:07:49 crc kubenswrapper[4874]: I0217 16:07:49.088553 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 17 16:07:49 crc kubenswrapper[4874]: I0217 16:07:49.138516 4874 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 17 16:07:49 crc kubenswrapper[4874]: I0217 16:07:49.199675 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 17 16:07:49 crc kubenswrapper[4874]: I0217 16:07:49.245100 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 17 16:07:49 crc kubenswrapper[4874]: I0217 16:07:49.271030 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 17 16:07:49 crc kubenswrapper[4874]: I0217 16:07:49.288911 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 17 16:07:49 crc kubenswrapper[4874]: I0217 16:07:49.330144 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 17 16:07:49 crc kubenswrapper[4874]: I0217 16:07:49.364245 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 17 16:07:49 crc kubenswrapper[4874]: I0217 16:07:49.385775 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 17 16:07:49 crc kubenswrapper[4874]: I0217 16:07:49.460806 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 17 16:07:49 crc kubenswrapper[4874]: I0217 16:07:49.625636 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 17 16:07:49 crc kubenswrapper[4874]: I0217 16:07:49.662393 4874 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 17 16:07:49 crc kubenswrapper[4874]: I0217 16:07:49.730489 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 17 16:07:49 crc kubenswrapper[4874]: I0217 16:07:49.748725 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 17 16:07:49 crc kubenswrapper[4874]: I0217 16:07:49.764130 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 17 16:07:49 crc kubenswrapper[4874]: I0217 16:07:49.871625 4874 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 17 16:07:49 crc kubenswrapper[4874]: I0217 16:07:49.940933 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 17 16:07:50 crc kubenswrapper[4874]: I0217 16:07:50.026482 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 17 16:07:50 crc kubenswrapper[4874]: I0217 16:07:50.081013 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 17 16:07:50 crc kubenswrapper[4874]: I0217 16:07:50.130244 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 17 16:07:50 crc kubenswrapper[4874]: I0217 16:07:50.159193 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 17 16:07:50 crc kubenswrapper[4874]: I0217 16:07:50.202099 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 17 16:07:50 crc kubenswrapper[4874]: I0217 16:07:50.312424 4874 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 17 16:07:50 crc kubenswrapper[4874]: I0217 16:07:50.382505 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 17 16:07:50 crc kubenswrapper[4874]: I0217 16:07:50.399949 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 17 16:07:50 crc kubenswrapper[4874]: I0217 16:07:50.446750 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 16:07:50 crc kubenswrapper[4874]: I0217 16:07:50.501167 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 17 16:07:50 crc kubenswrapper[4874]: I0217 16:07:50.536869 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 17 16:07:50 crc kubenswrapper[4874]: I0217 16:07:50.630201 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 17 16:07:50 crc kubenswrapper[4874]: I0217 16:07:50.642115 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 17 16:07:50 crc kubenswrapper[4874]: I0217 16:07:50.714035 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 17 16:07:50 crc kubenswrapper[4874]: I0217 16:07:50.763971 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 17 16:07:50 crc kubenswrapper[4874]: I0217 16:07:50.839122 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 17 16:07:50 
crc kubenswrapper[4874]: I0217 16:07:50.888007 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 17 16:07:50 crc kubenswrapper[4874]: I0217 16:07:50.939876 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.074111 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k2hdj"] Feb 17 16:07:51 crc kubenswrapper[4874]: E0217 16:07:51.074349 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28be448a-a2cb-4731-85fa-ec01026d5763" containerName="extract-content" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.074364 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="28be448a-a2cb-4731-85fa-ec01026d5763" containerName="extract-content" Feb 17 16:07:51 crc kubenswrapper[4874]: E0217 16:07:51.074379 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19397da4-8b1f-4ec8-969c-2856e64112fc" containerName="extract-utilities" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.074388 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="19397da4-8b1f-4ec8-969c-2856e64112fc" containerName="extract-utilities" Feb 17 16:07:51 crc kubenswrapper[4874]: E0217 16:07:51.074400 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28be448a-a2cb-4731-85fa-ec01026d5763" containerName="extract-utilities" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.074409 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="28be448a-a2cb-4731-85fa-ec01026d5763" containerName="extract-utilities" Feb 17 16:07:51 crc kubenswrapper[4874]: E0217 16:07:51.074420 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c11461e-921f-46b7-ba51-5299829f22f1" containerName="installer" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.074427 
4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c11461e-921f-46b7-ba51-5299829f22f1" containerName="installer" Feb 17 16:07:51 crc kubenswrapper[4874]: E0217 16:07:51.074438 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19397da4-8b1f-4ec8-969c-2856e64112fc" containerName="registry-server" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.074445 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="19397da4-8b1f-4ec8-969c-2856e64112fc" containerName="registry-server" Feb 17 16:07:51 crc kubenswrapper[4874]: E0217 16:07:51.074459 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52637c6d-7cd3-4761-b70d-4e07e68a6c5d" containerName="registry-server" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.074468 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="52637c6d-7cd3-4761-b70d-4e07e68a6c5d" containerName="registry-server" Feb 17 16:07:51 crc kubenswrapper[4874]: E0217 16:07:51.074481 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfc01af4-cec4-4d66-b673-ac10e1797059" containerName="extract-utilities" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.074489 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc01af4-cec4-4d66-b673-ac10e1797059" containerName="extract-utilities" Feb 17 16:07:51 crc kubenswrapper[4874]: E0217 16:07:51.074499 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c21c3a4-9603-4cd0-a5e3-263aa51d678d" containerName="marketplace-operator" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.074506 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c21c3a4-9603-4cd0-a5e3-263aa51d678d" containerName="marketplace-operator" Feb 17 16:07:51 crc kubenswrapper[4874]: E0217 16:07:51.074521 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52637c6d-7cd3-4761-b70d-4e07e68a6c5d" containerName="extract-utilities" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.074529 4874 
state_mem.go:107] "Deleted CPUSet assignment" podUID="52637c6d-7cd3-4761-b70d-4e07e68a6c5d" containerName="extract-utilities" Feb 17 16:07:51 crc kubenswrapper[4874]: E0217 16:07:51.074540 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19397da4-8b1f-4ec8-969c-2856e64112fc" containerName="extract-content" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.074547 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="19397da4-8b1f-4ec8-969c-2856e64112fc" containerName="extract-content" Feb 17 16:07:51 crc kubenswrapper[4874]: E0217 16:07:51.074556 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28be448a-a2cb-4731-85fa-ec01026d5763" containerName="registry-server" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.074563 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="28be448a-a2cb-4731-85fa-ec01026d5763" containerName="registry-server" Feb 17 16:07:51 crc kubenswrapper[4874]: E0217 16:07:51.074572 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52637c6d-7cd3-4761-b70d-4e07e68a6c5d" containerName="extract-content" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.074577 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="52637c6d-7cd3-4761-b70d-4e07e68a6c5d" containerName="extract-content" Feb 17 16:07:51 crc kubenswrapper[4874]: E0217 16:07:51.074586 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfc01af4-cec4-4d66-b673-ac10e1797059" containerName="registry-server" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.074593 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc01af4-cec4-4d66-b673-ac10e1797059" containerName="registry-server" Feb 17 16:07:51 crc kubenswrapper[4874]: E0217 16:07:51.074604 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfc01af4-cec4-4d66-b673-ac10e1797059" containerName="extract-content" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.074612 4874 
state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc01af4-cec4-4d66-b673-ac10e1797059" containerName="extract-content" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.074709 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="52637c6d-7cd3-4761-b70d-4e07e68a6c5d" containerName="registry-server" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.074722 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="28be448a-a2cb-4731-85fa-ec01026d5763" containerName="registry-server" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.074731 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c21c3a4-9603-4cd0-a5e3-263aa51d678d" containerName="marketplace-operator" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.074741 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="19397da4-8b1f-4ec8-969c-2856e64112fc" containerName="registry-server" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.074752 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfc01af4-cec4-4d66-b673-ac10e1797059" containerName="registry-server" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.074767 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c11461e-921f-46b7-ba51-5299829f22f1" containerName="installer" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.075190 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k2hdj" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.079412 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.079589 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.079719 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.079824 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.084810 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.155212 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.198599 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqksl\" (UniqueName: \"kubernetes.io/projected/47fbef15-6f0f-42c9-89d2-b68a0bc8eb57-kube-api-access-qqksl\") pod \"marketplace-operator-79b997595-k2hdj\" (UID: \"47fbef15-6f0f-42c9-89d2-b68a0bc8eb57\") " pod="openshift-marketplace/marketplace-operator-79b997595-k2hdj" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.198688 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/47fbef15-6f0f-42c9-89d2-b68a0bc8eb57-marketplace-operator-metrics\") pod 
\"marketplace-operator-79b997595-k2hdj\" (UID: \"47fbef15-6f0f-42c9-89d2-b68a0bc8eb57\") " pod="openshift-marketplace/marketplace-operator-79b997595-k2hdj" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.198736 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/47fbef15-6f0f-42c9-89d2-b68a0bc8eb57-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-k2hdj\" (UID: \"47fbef15-6f0f-42c9-89d2-b68a0bc8eb57\") " pod="openshift-marketplace/marketplace-operator-79b997595-k2hdj" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.215007 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.299481 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/47fbef15-6f0f-42c9-89d2-b68a0bc8eb57-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-k2hdj\" (UID: \"47fbef15-6f0f-42c9-89d2-b68a0bc8eb57\") " pod="openshift-marketplace/marketplace-operator-79b997595-k2hdj" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.299542 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/47fbef15-6f0f-42c9-89d2-b68a0bc8eb57-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-k2hdj\" (UID: \"47fbef15-6f0f-42c9-89d2-b68a0bc8eb57\") " pod="openshift-marketplace/marketplace-operator-79b997595-k2hdj" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.299573 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqksl\" (UniqueName: \"kubernetes.io/projected/47fbef15-6f0f-42c9-89d2-b68a0bc8eb57-kube-api-access-qqksl\") pod 
\"marketplace-operator-79b997595-k2hdj\" (UID: \"47fbef15-6f0f-42c9-89d2-b68a0bc8eb57\") " pod="openshift-marketplace/marketplace-operator-79b997595-k2hdj" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.300836 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/47fbef15-6f0f-42c9-89d2-b68a0bc8eb57-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-k2hdj\" (UID: \"47fbef15-6f0f-42c9-89d2-b68a0bc8eb57\") " pod="openshift-marketplace/marketplace-operator-79b997595-k2hdj" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.310827 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/47fbef15-6f0f-42c9-89d2-b68a0bc8eb57-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-k2hdj\" (UID: \"47fbef15-6f0f-42c9-89d2-b68a0bc8eb57\") " pod="openshift-marketplace/marketplace-operator-79b997595-k2hdj" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.333263 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqksl\" (UniqueName: \"kubernetes.io/projected/47fbef15-6f0f-42c9-89d2-b68a0bc8eb57-kube-api-access-qqksl\") pod \"marketplace-operator-79b997595-k2hdj\" (UID: \"47fbef15-6f0f-42c9-89d2-b68a0bc8eb57\") " pod="openshift-marketplace/marketplace-operator-79b997595-k2hdj" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.388340 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.388861 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.397273 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k2hdj" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.403883 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.484794 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.530149 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.573135 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.575624 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.671773 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.733876 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.909022 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.938386 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.979552 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 17 16:07:51 crc kubenswrapper[4874]: I0217 16:07:51.989129 4874 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 17 16:07:52 crc kubenswrapper[4874]: I0217 16:07:52.038908 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 17 16:07:52 crc kubenswrapper[4874]: I0217 16:07:52.067486 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 17 16:07:52 crc kubenswrapper[4874]: I0217 16:07:52.101168 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 17 16:07:52 crc kubenswrapper[4874]: I0217 16:07:52.184733 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 17 16:07:52 crc kubenswrapper[4874]: I0217 16:07:52.308939 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 17 16:07:52 crc kubenswrapper[4874]: I0217 16:07:52.341936 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 17 16:07:52 crc kubenswrapper[4874]: I0217 16:07:52.360900 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 17 16:07:52 crc kubenswrapper[4874]: I0217 16:07:52.404671 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 16:07:52 crc kubenswrapper[4874]: I0217 16:07:52.474181 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 17 16:07:52 crc kubenswrapper[4874]: I0217 16:07:52.481820 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" 
Feb 17 16:07:52 crc kubenswrapper[4874]: I0217 16:07:52.537446 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 17 16:07:52 crc kubenswrapper[4874]: I0217 16:07:52.578772 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 17 16:07:52 crc kubenswrapper[4874]: I0217 16:07:52.668988 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 17 16:07:52 crc kubenswrapper[4874]: I0217 16:07:52.728220 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 17 16:07:52 crc kubenswrapper[4874]: I0217 16:07:52.728498 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 17 16:07:52 crc kubenswrapper[4874]: I0217 16:07:52.798313 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 17 16:07:52 crc kubenswrapper[4874]: I0217 16:07:52.808994 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 17 16:07:52 crc kubenswrapper[4874]: I0217 16:07:52.955772 4874 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 17 16:07:52 crc kubenswrapper[4874]: I0217 16:07:52.956003 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://46619813d8c0e364a264dee257930a02c8dc7afe50c326b8eb6b01e50427dc55" gracePeriod=5 Feb 17 16:07:52 crc kubenswrapper[4874]: I0217 16:07:52.958861 4874 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress"/"service-ca-bundle" Feb 17 16:07:52 crc kubenswrapper[4874]: I0217 16:07:52.960297 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 17 16:07:52 crc kubenswrapper[4874]: I0217 16:07:52.974297 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 17 16:07:53 crc kubenswrapper[4874]: I0217 16:07:53.009060 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 17 16:07:53 crc kubenswrapper[4874]: I0217 16:07:53.064797 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 17 16:07:53 crc kubenswrapper[4874]: I0217 16:07:53.109553 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 17 16:07:53 crc kubenswrapper[4874]: I0217 16:07:53.117196 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 17 16:07:53 crc kubenswrapper[4874]: I0217 16:07:53.188622 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 17 16:07:53 crc kubenswrapper[4874]: I0217 16:07:53.202654 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 17 16:07:53 crc kubenswrapper[4874]: E0217 16:07:53.211514 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod3c11461e_921f_46b7_ba51_5299829f22f1.slice\": RecentStats: unable to find data in memory cache]" Feb 17 16:07:53 crc kubenswrapper[4874]: I0217 16:07:53.373483 4874 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-operator"/"metrics-tls" Feb 17 16:07:53 crc kubenswrapper[4874]: I0217 16:07:53.379729 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 17 16:07:53 crc kubenswrapper[4874]: I0217 16:07:53.407026 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 17 16:07:53 crc kubenswrapper[4874]: I0217 16:07:53.410133 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 17 16:07:53 crc kubenswrapper[4874]: I0217 16:07:53.411683 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 17 16:07:53 crc kubenswrapper[4874]: I0217 16:07:53.459187 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 17 16:07:53 crc kubenswrapper[4874]: I0217 16:07:53.496187 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 17 16:07:53 crc kubenswrapper[4874]: I0217 16:07:53.565744 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 17 16:07:53 crc kubenswrapper[4874]: I0217 16:07:53.708513 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 17 16:07:53 crc kubenswrapper[4874]: I0217 16:07:53.745199 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 17 16:07:53 crc kubenswrapper[4874]: I0217 16:07:53.775598 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 17 16:07:53 crc 
kubenswrapper[4874]: I0217 16:07:53.795028 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 17 16:07:53 crc kubenswrapper[4874]: I0217 16:07:53.824319 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 17 16:07:53 crc kubenswrapper[4874]: I0217 16:07:53.956091 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 17 16:07:53 crc kubenswrapper[4874]: I0217 16:07:53.970506 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k2hdj"] Feb 17 16:07:54 crc kubenswrapper[4874]: I0217 16:07:54.101618 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 17 16:07:54 crc kubenswrapper[4874]: I0217 16:07:54.103885 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 17 16:07:54 crc kubenswrapper[4874]: I0217 16:07:54.171926 4874 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 17 16:07:54 crc kubenswrapper[4874]: I0217 16:07:54.177929 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 17 16:07:54 crc kubenswrapper[4874]: I0217 16:07:54.327391 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 17 16:07:54 crc kubenswrapper[4874]: I0217 16:07:54.336701 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 17 16:07:54 crc kubenswrapper[4874]: I0217 16:07:54.467834 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k2hdj"] Feb 17 
16:07:54 crc kubenswrapper[4874]: W0217 16:07:54.478303 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47fbef15_6f0f_42c9_89d2_b68a0bc8eb57.slice/crio-559a9d60915df70471fe85d5d6713506b5808e24567d6095747a8c5c962acf4d WatchSource:0}: Error finding container 559a9d60915df70471fe85d5d6713506b5808e24567d6095747a8c5c962acf4d: Status 404 returned error can't find the container with id 559a9d60915df70471fe85d5d6713506b5808e24567d6095747a8c5c962acf4d Feb 17 16:07:54 crc kubenswrapper[4874]: I0217 16:07:54.541567 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 17 16:07:54 crc kubenswrapper[4874]: I0217 16:07:54.631304 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 17 16:07:54 crc kubenswrapper[4874]: I0217 16:07:54.811205 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 17 16:07:54 crc kubenswrapper[4874]: I0217 16:07:54.821746 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 17 16:07:54 crc kubenswrapper[4874]: I0217 16:07:54.831263 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k2hdj" event={"ID":"47fbef15-6f0f-42c9-89d2-b68a0bc8eb57","Type":"ContainerStarted","Data":"e44abd4b0b438355a7c956af58bca08c1809dce601035d92d5b4945052a290d5"} Feb 17 16:07:54 crc kubenswrapper[4874]: I0217 16:07:54.831312 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k2hdj" 
event={"ID":"47fbef15-6f0f-42c9-89d2-b68a0bc8eb57","Type":"ContainerStarted","Data":"559a9d60915df70471fe85d5d6713506b5808e24567d6095747a8c5c962acf4d"} Feb 17 16:07:54 crc kubenswrapper[4874]: I0217 16:07:54.831514 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-k2hdj" Feb 17 16:07:54 crc kubenswrapper[4874]: I0217 16:07:54.832165 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 17 16:07:54 crc kubenswrapper[4874]: I0217 16:07:54.833927 4874 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-k2hdj container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" start-of-body= Feb 17 16:07:54 crc kubenswrapper[4874]: I0217 16:07:54.833986 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-k2hdj" podUID="47fbef15-6f0f-42c9-89d2-b68a0bc8eb57" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" Feb 17 16:07:54 crc kubenswrapper[4874]: I0217 16:07:54.886224 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 17 16:07:55 crc kubenswrapper[4874]: I0217 16:07:55.045065 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 17 16:07:55 crc kubenswrapper[4874]: I0217 16:07:55.174179 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 17 16:07:55 crc kubenswrapper[4874]: I0217 16:07:55.662503 4874 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 17 16:07:55 crc kubenswrapper[4874]: I0217 16:07:55.837899 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-k2hdj" Feb 17 16:07:55 crc kubenswrapper[4874]: I0217 16:07:55.861362 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-k2hdj" podStartSLOduration=11.861341333 podStartE2EDuration="11.861341333s" podCreationTimestamp="2026-02-17 16:07:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:07:54.845793118 +0000 UTC m=+285.140181699" watchObservedRunningTime="2026-02-17 16:07:55.861341333 +0000 UTC m=+286.155729894" Feb 17 16:07:56 crc kubenswrapper[4874]: I0217 16:07:56.511340 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 17 16:07:56 crc kubenswrapper[4874]: I0217 16:07:56.709993 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 17 16:07:56 crc kubenswrapper[4874]: I0217 16:07:56.885530 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 17 16:07:57 crc kubenswrapper[4874]: I0217 16:07:57.248594 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 17 16:07:58 crc kubenswrapper[4874]: I0217 16:07:58.572297 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 17 16:07:58 crc kubenswrapper[4874]: I0217 16:07:58.572569 4874 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 16:07:58 crc kubenswrapper[4874]: I0217 16:07:58.689784 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 16:07:58 crc kubenswrapper[4874]: I0217 16:07:58.689850 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 16:07:58 crc kubenswrapper[4874]: I0217 16:07:58.689926 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 16:07:58 crc kubenswrapper[4874]: I0217 16:07:58.689963 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 16:07:58 crc kubenswrapper[4874]: I0217 16:07:58.689993 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:07:58 crc kubenswrapper[4874]: I0217 16:07:58.689998 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:07:58 crc kubenswrapper[4874]: I0217 16:07:58.690026 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 16:07:58 crc kubenswrapper[4874]: I0217 16:07:58.689933 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:07:58 crc kubenswrapper[4874]: I0217 16:07:58.690120 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:07:58 crc kubenswrapper[4874]: I0217 16:07:58.690300 4874 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:58 crc kubenswrapper[4874]: I0217 16:07:58.690322 4874 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:58 crc kubenswrapper[4874]: I0217 16:07:58.690333 4874 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:58 crc kubenswrapper[4874]: I0217 16:07:58.690343 4874 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:58 crc kubenswrapper[4874]: I0217 16:07:58.702300 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:07:58 crc kubenswrapper[4874]: I0217 16:07:58.791111 4874 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:58 crc kubenswrapper[4874]: I0217 16:07:58.849371 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 17 16:07:58 crc kubenswrapper[4874]: I0217 16:07:58.849419 4874 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="46619813d8c0e364a264dee257930a02c8dc7afe50c326b8eb6b01e50427dc55" exitCode=137 Feb 17 16:07:58 crc kubenswrapper[4874]: I0217 16:07:58.849459 4874 scope.go:117] "RemoveContainer" containerID="46619813d8c0e364a264dee257930a02c8dc7afe50c326b8eb6b01e50427dc55" Feb 17 16:07:58 crc kubenswrapper[4874]: I0217 16:07:58.849504 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 16:07:58 crc kubenswrapper[4874]: I0217 16:07:58.865344 4874 scope.go:117] "RemoveContainer" containerID="46619813d8c0e364a264dee257930a02c8dc7afe50c326b8eb6b01e50427dc55" Feb 17 16:07:58 crc kubenswrapper[4874]: E0217 16:07:58.865750 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46619813d8c0e364a264dee257930a02c8dc7afe50c326b8eb6b01e50427dc55\": container with ID starting with 46619813d8c0e364a264dee257930a02c8dc7afe50c326b8eb6b01e50427dc55 not found: ID does not exist" containerID="46619813d8c0e364a264dee257930a02c8dc7afe50c326b8eb6b01e50427dc55" Feb 17 16:07:58 crc kubenswrapper[4874]: I0217 16:07:58.865790 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46619813d8c0e364a264dee257930a02c8dc7afe50c326b8eb6b01e50427dc55"} err="failed to get container status \"46619813d8c0e364a264dee257930a02c8dc7afe50c326b8eb6b01e50427dc55\": rpc error: code = NotFound desc = could not find container \"46619813d8c0e364a264dee257930a02c8dc7afe50c326b8eb6b01e50427dc55\": container with ID starting with 46619813d8c0e364a264dee257930a02c8dc7afe50c326b8eb6b01e50427dc55 not found: ID does not exist" Feb 17 16:08:00 crc kubenswrapper[4874]: I0217 16:08:00.464592 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 17 16:08:03 crc kubenswrapper[4874]: E0217 16:08:03.321645 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod3c11461e_921f_46b7_ba51_5299829f22f1.slice\": RecentStats: unable to find data in memory cache]" Feb 17 16:08:10 crc kubenswrapper[4874]: I0217 16:08:10.240518 4874 cert_rotation.go:91] certificate rotation detected, 
shutting down client connections to start using new credentials Feb 17 16:08:15 crc kubenswrapper[4874]: I0217 16:08:15.554552 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-fnrtb"] Feb 17 16:08:15 crc kubenswrapper[4874]: E0217 16:08:15.555422 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 17 16:08:15 crc kubenswrapper[4874]: I0217 16:08:15.555442 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 17 16:08:15 crc kubenswrapper[4874]: I0217 16:08:15.555611 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 17 16:08:15 crc kubenswrapper[4874]: I0217 16:08:15.556481 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-fnrtb" Feb 17 16:08:15 crc kubenswrapper[4874]: I0217 16:08:15.560347 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 17 16:08:15 crc kubenswrapper[4874]: I0217 16:08:15.560441 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 17 16:08:15 crc kubenswrapper[4874]: I0217 16:08:15.560455 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Feb 17 16:08:15 crc kubenswrapper[4874]: I0217 16:08:15.560659 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 17 16:08:15 crc kubenswrapper[4874]: I0217 16:08:15.565251 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 17 16:08:15 crc 
kubenswrapper[4874]: I0217 16:08:15.574670 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-fnrtb"] Feb 17 16:08:15 crc kubenswrapper[4874]: I0217 16:08:15.688446 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/c7a3e2f9-6de6-46eb-93d9-66c5dd073b28-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-fnrtb\" (UID: \"c7a3e2f9-6de6-46eb-93d9-66c5dd073b28\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-fnrtb" Feb 17 16:08:15 crc kubenswrapper[4874]: I0217 16:08:15.688494 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/c7a3e2f9-6de6-46eb-93d9-66c5dd073b28-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-fnrtb\" (UID: \"c7a3e2f9-6de6-46eb-93d9-66c5dd073b28\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-fnrtb" Feb 17 16:08:15 crc kubenswrapper[4874]: I0217 16:08:15.688526 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgqm2\" (UniqueName: \"kubernetes.io/projected/c7a3e2f9-6de6-46eb-93d9-66c5dd073b28-kube-api-access-fgqm2\") pod \"cluster-monitoring-operator-6d5b84845-fnrtb\" (UID: \"c7a3e2f9-6de6-46eb-93d9-66c5dd073b28\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-fnrtb" Feb 17 16:08:15 crc kubenswrapper[4874]: I0217 16:08:15.790154 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/c7a3e2f9-6de6-46eb-93d9-66c5dd073b28-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-fnrtb\" (UID: \"c7a3e2f9-6de6-46eb-93d9-66c5dd073b28\") " 
pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-fnrtb" Feb 17 16:08:15 crc kubenswrapper[4874]: I0217 16:08:15.790194 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/c7a3e2f9-6de6-46eb-93d9-66c5dd073b28-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-fnrtb\" (UID: \"c7a3e2f9-6de6-46eb-93d9-66c5dd073b28\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-fnrtb" Feb 17 16:08:15 crc kubenswrapper[4874]: I0217 16:08:15.790218 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgqm2\" (UniqueName: \"kubernetes.io/projected/c7a3e2f9-6de6-46eb-93d9-66c5dd073b28-kube-api-access-fgqm2\") pod \"cluster-monitoring-operator-6d5b84845-fnrtb\" (UID: \"c7a3e2f9-6de6-46eb-93d9-66c5dd073b28\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-fnrtb" Feb 17 16:08:15 crc kubenswrapper[4874]: I0217 16:08:15.792070 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/c7a3e2f9-6de6-46eb-93d9-66c5dd073b28-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-fnrtb\" (UID: \"c7a3e2f9-6de6-46eb-93d9-66c5dd073b28\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-fnrtb" Feb 17 16:08:15 crc kubenswrapper[4874]: I0217 16:08:15.797672 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/c7a3e2f9-6de6-46eb-93d9-66c5dd073b28-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-fnrtb\" (UID: \"c7a3e2f9-6de6-46eb-93d9-66c5dd073b28\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-fnrtb" Feb 17 16:08:15 crc kubenswrapper[4874]: I0217 16:08:15.821786 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgqm2\" 
(UniqueName: \"kubernetes.io/projected/c7a3e2f9-6de6-46eb-93d9-66c5dd073b28-kube-api-access-fgqm2\") pod \"cluster-monitoring-operator-6d5b84845-fnrtb\" (UID: \"c7a3e2f9-6de6-46eb-93d9-66c5dd073b28\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-fnrtb" Feb 17 16:08:15 crc kubenswrapper[4874]: I0217 16:08:15.884065 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-fnrtb" Feb 17 16:08:15 crc kubenswrapper[4874]: I0217 16:08:15.893587 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 17 16:08:16 crc kubenswrapper[4874]: I0217 16:08:16.097162 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-fnrtb"] Feb 17 16:08:16 crc kubenswrapper[4874]: W0217 16:08:16.103748 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7a3e2f9_6de6_46eb_93d9_66c5dd073b28.slice/crio-6e00774516b88a861c7c94e958721bc0501f5264543bab0a1119eb6b41f6b5f4 WatchSource:0}: Error finding container 6e00774516b88a861c7c94e958721bc0501f5264543bab0a1119eb6b41f6b5f4: Status 404 returned error can't find the container with id 6e00774516b88a861c7c94e958721bc0501f5264543bab0a1119eb6b41f6b5f4 Feb 17 16:08:16 crc kubenswrapper[4874]: I0217 16:08:16.372040 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 17 16:08:16 crc kubenswrapper[4874]: I0217 16:08:16.960625 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-fnrtb" event={"ID":"c7a3e2f9-6de6-46eb-93d9-66c5dd073b28","Type":"ContainerStarted","Data":"6e00774516b88a861c7c94e958721bc0501f5264543bab0a1119eb6b41f6b5f4"} Feb 17 16:08:18 crc kubenswrapper[4874]: I0217 16:08:18.394389 
4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-7tfnh"] Feb 17 16:08:18 crc kubenswrapper[4874]: I0217 16:08:18.395620 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-7tfnh" Feb 17 16:08:18 crc kubenswrapper[4874]: I0217 16:08:18.402232 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Feb 17 16:08:18 crc kubenswrapper[4874]: I0217 16:08:18.404704 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-7tfnh"] Feb 17 16:08:18 crc kubenswrapper[4874]: I0217 16:08:18.424401 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/27263f61-9512-43ef-9457-9864e5292d2a-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-7tfnh\" (UID: \"27263f61-9512-43ef-9457-9864e5292d2a\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-7tfnh" Feb 17 16:08:18 crc kubenswrapper[4874]: I0217 16:08:18.525411 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/27263f61-9512-43ef-9457-9864e5292d2a-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-7tfnh\" (UID: \"27263f61-9512-43ef-9457-9864e5292d2a\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-7tfnh" Feb 17 16:08:18 crc kubenswrapper[4874]: E0217 16:08:18.525619 4874 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 17 16:08:18 crc kubenswrapper[4874]: E0217 16:08:18.525720 4874 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/27263f61-9512-43ef-9457-9864e5292d2a-tls-certificates podName:27263f61-9512-43ef-9457-9864e5292d2a nodeName:}" failed. No retries permitted until 2026-02-17 16:08:19.025697236 +0000 UTC m=+309.320085807 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/27263f61-9512-43ef-9457-9864e5292d2a-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-7tfnh" (UID: "27263f61-9512-43ef-9457-9864e5292d2a") : secret "prometheus-operator-admission-webhook-tls" not found Feb 17 16:08:18 crc kubenswrapper[4874]: I0217 16:08:18.980660 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-fnrtb" event={"ID":"c7a3e2f9-6de6-46eb-93d9-66c5dd073b28","Type":"ContainerStarted","Data":"4125ecb1e9b88c3a8133d00e690a982459b5d5511ceaf713160d9a11e8e2761e"} Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.005546 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-fnrtb" podStartSLOduration=2.296573004 podStartE2EDuration="4.005514009s" podCreationTimestamp="2026-02-17 16:08:15 +0000 UTC" firstStartedPulling="2026-02-17 16:08:16.106026026 +0000 UTC m=+306.400414587" lastFinishedPulling="2026-02-17 16:08:17.814967021 +0000 UTC m=+308.109355592" observedRunningTime="2026-02-17 16:08:19.003012856 +0000 UTC m=+309.297401487" watchObservedRunningTime="2026-02-17 16:08:19.005514009 +0000 UTC m=+309.299902650" Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.033503 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/27263f61-9512-43ef-9457-9864e5292d2a-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-7tfnh\" (UID: \"27263f61-9512-43ef-9457-9864e5292d2a\") " 
pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-7tfnh" Feb 17 16:08:19 crc kubenswrapper[4874]: E0217 16:08:19.033723 4874 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 17 16:08:19 crc kubenswrapper[4874]: E0217 16:08:19.033809 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/27263f61-9512-43ef-9457-9864e5292d2a-tls-certificates podName:27263f61-9512-43ef-9457-9864e5292d2a nodeName:}" failed. No retries permitted until 2026-02-17 16:08:20.033782012 +0000 UTC m=+310.328170603 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/27263f61-9512-43ef-9457-9864e5292d2a-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-7tfnh" (UID: "27263f61-9512-43ef-9457-9864e5292d2a") : secret "prometheus-operator-admission-webhook-tls" not found Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.178497 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.194173 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cw7tb"] Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.194498 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-cw7tb" podUID="9fa024ca-53bd-4aeb-a216-26ed6044cf24" containerName="controller-manager" containerID="cri-o://a0424e18c7753675ab0f356ad6ceec1d7f991658ebe677a6ebdf033cf33cd519" gracePeriod=30 Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.263762 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xjzv8"] Feb 17 16:08:19 crc 
kubenswrapper[4874]: I0217 16:08:19.264054 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xjzv8" podUID="be182c78-fa2c-49ab-9ec4-698854f3ca51" containerName="route-controller-manager" containerID="cri-o://d5f8382020b257178f22d1bef30c120c0c6d5b04778360b50988ac118c9e6cab" gracePeriod=30 Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.525102 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cw7tb" Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.540471 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtqrl\" (UniqueName: \"kubernetes.io/projected/9fa024ca-53bd-4aeb-a216-26ed6044cf24-kube-api-access-wtqrl\") pod \"9fa024ca-53bd-4aeb-a216-26ed6044cf24\" (UID: \"9fa024ca-53bd-4aeb-a216-26ed6044cf24\") " Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.540532 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fa024ca-53bd-4aeb-a216-26ed6044cf24-config\") pod \"9fa024ca-53bd-4aeb-a216-26ed6044cf24\" (UID: \"9fa024ca-53bd-4aeb-a216-26ed6044cf24\") " Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.540558 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fa024ca-53bd-4aeb-a216-26ed6044cf24-serving-cert\") pod \"9fa024ca-53bd-4aeb-a216-26ed6044cf24\" (UID: \"9fa024ca-53bd-4aeb-a216-26ed6044cf24\") " Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.540639 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9fa024ca-53bd-4aeb-a216-26ed6044cf24-client-ca\") pod \"9fa024ca-53bd-4aeb-a216-26ed6044cf24\" (UID: \"9fa024ca-53bd-4aeb-a216-26ed6044cf24\") 
" Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.540669 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9fa024ca-53bd-4aeb-a216-26ed6044cf24-proxy-ca-bundles\") pod \"9fa024ca-53bd-4aeb-a216-26ed6044cf24\" (UID: \"9fa024ca-53bd-4aeb-a216-26ed6044cf24\") " Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.541595 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fa024ca-53bd-4aeb-a216-26ed6044cf24-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9fa024ca-53bd-4aeb-a216-26ed6044cf24" (UID: "9fa024ca-53bd-4aeb-a216-26ed6044cf24"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.542093 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fa024ca-53bd-4aeb-a216-26ed6044cf24-client-ca" (OuterVolumeSpecName: "client-ca") pod "9fa024ca-53bd-4aeb-a216-26ed6044cf24" (UID: "9fa024ca-53bd-4aeb-a216-26ed6044cf24"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.542235 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fa024ca-53bd-4aeb-a216-26ed6044cf24-config" (OuterVolumeSpecName: "config") pod "9fa024ca-53bd-4aeb-a216-26ed6044cf24" (UID: "9fa024ca-53bd-4aeb-a216-26ed6044cf24"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.550007 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa024ca-53bd-4aeb-a216-26ed6044cf24-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9fa024ca-53bd-4aeb-a216-26ed6044cf24" (UID: "9fa024ca-53bd-4aeb-a216-26ed6044cf24"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.550402 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fa024ca-53bd-4aeb-a216-26ed6044cf24-kube-api-access-wtqrl" (OuterVolumeSpecName: "kube-api-access-wtqrl") pod "9fa024ca-53bd-4aeb-a216-26ed6044cf24" (UID: "9fa024ca-53bd-4aeb-a216-26ed6044cf24"). InnerVolumeSpecName "kube-api-access-wtqrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.580825 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xjzv8" Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.642239 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be182c78-fa2c-49ab-9ec4-698854f3ca51-config\") pod \"be182c78-fa2c-49ab-9ec4-698854f3ca51\" (UID: \"be182c78-fa2c-49ab-9ec4-698854f3ca51\") " Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.642307 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tt5z7\" (UniqueName: \"kubernetes.io/projected/be182c78-fa2c-49ab-9ec4-698854f3ca51-kube-api-access-tt5z7\") pod \"be182c78-fa2c-49ab-9ec4-698854f3ca51\" (UID: \"be182c78-fa2c-49ab-9ec4-698854f3ca51\") " Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.642356 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be182c78-fa2c-49ab-9ec4-698854f3ca51-serving-cert\") pod \"be182c78-fa2c-49ab-9ec4-698854f3ca51\" (UID: \"be182c78-fa2c-49ab-9ec4-698854f3ca51\") " Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.642439 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/be182c78-fa2c-49ab-9ec4-698854f3ca51-client-ca\") pod \"be182c78-fa2c-49ab-9ec4-698854f3ca51\" (UID: \"be182c78-fa2c-49ab-9ec4-698854f3ca51\") " Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.642701 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fa024ca-53bd-4aeb-a216-26ed6044cf24-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.642723 4874 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fa024ca-53bd-4aeb-a216-26ed6044cf24-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.642734 4874 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9fa024ca-53bd-4aeb-a216-26ed6044cf24-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.642745 4874 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9fa024ca-53bd-4aeb-a216-26ed6044cf24-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.642761 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtqrl\" (UniqueName: \"kubernetes.io/projected/9fa024ca-53bd-4aeb-a216-26ed6044cf24-kube-api-access-wtqrl\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.643623 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be182c78-fa2c-49ab-9ec4-698854f3ca51-client-ca" (OuterVolumeSpecName: "client-ca") pod "be182c78-fa2c-49ab-9ec4-698854f3ca51" (UID: "be182c78-fa2c-49ab-9ec4-698854f3ca51"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.643656 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be182c78-fa2c-49ab-9ec4-698854f3ca51-config" (OuterVolumeSpecName: "config") pod "be182c78-fa2c-49ab-9ec4-698854f3ca51" (UID: "be182c78-fa2c-49ab-9ec4-698854f3ca51"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.645479 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be182c78-fa2c-49ab-9ec4-698854f3ca51-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "be182c78-fa2c-49ab-9ec4-698854f3ca51" (UID: "be182c78-fa2c-49ab-9ec4-698854f3ca51"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.646016 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be182c78-fa2c-49ab-9ec4-698854f3ca51-kube-api-access-tt5z7" (OuterVolumeSpecName: "kube-api-access-tt5z7") pod "be182c78-fa2c-49ab-9ec4-698854f3ca51" (UID: "be182c78-fa2c-49ab-9ec4-698854f3ca51"). InnerVolumeSpecName "kube-api-access-tt5z7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.744490 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tt5z7\" (UniqueName: \"kubernetes.io/projected/be182c78-fa2c-49ab-9ec4-698854f3ca51-kube-api-access-tt5z7\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.744537 4874 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be182c78-fa2c-49ab-9ec4-698854f3ca51-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.744556 4874 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be182c78-fa2c-49ab-9ec4-698854f3ca51-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.744573 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be182c78-fa2c-49ab-9ec4-698854f3ca51-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.988448 4874 generic.go:334] "Generic (PLEG): container finished" podID="9fa024ca-53bd-4aeb-a216-26ed6044cf24" containerID="a0424e18c7753675ab0f356ad6ceec1d7f991658ebe677a6ebdf033cf33cd519" exitCode=0 Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.988555 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cw7tb" event={"ID":"9fa024ca-53bd-4aeb-a216-26ed6044cf24","Type":"ContainerDied","Data":"a0424e18c7753675ab0f356ad6ceec1d7f991658ebe677a6ebdf033cf33cd519"} Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.988592 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cw7tb" 
event={"ID":"9fa024ca-53bd-4aeb-a216-26ed6044cf24","Type":"ContainerDied","Data":"6e55d2385ee4c7c25977516432b40b0a21a78b5de13e9f3d5c473a5042306cd5"} Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.988588 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cw7tb" Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.988623 4874 scope.go:117] "RemoveContainer" containerID="a0424e18c7753675ab0f356ad6ceec1d7f991658ebe677a6ebdf033cf33cd519" Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.991266 4874 generic.go:334] "Generic (PLEG): container finished" podID="be182c78-fa2c-49ab-9ec4-698854f3ca51" containerID="d5f8382020b257178f22d1bef30c120c0c6d5b04778360b50988ac118c9e6cab" exitCode=0 Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.991358 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xjzv8" Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.991529 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xjzv8" event={"ID":"be182c78-fa2c-49ab-9ec4-698854f3ca51","Type":"ContainerDied","Data":"d5f8382020b257178f22d1bef30c120c0c6d5b04778360b50988ac118c9e6cab"} Feb 17 16:08:19 crc kubenswrapper[4874]: I0217 16:08:19.991604 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xjzv8" event={"ID":"be182c78-fa2c-49ab-9ec4-698854f3ca51","Type":"ContainerDied","Data":"5b4b4c31394151ba9123c830cf97d25352e8cbd1ae6695ca44e8c315023f83df"} Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.013436 4874 scope.go:117] "RemoveContainer" containerID="a0424e18c7753675ab0f356ad6ceec1d7f991658ebe677a6ebdf033cf33cd519" Feb 17 16:08:20 crc kubenswrapper[4874]: E0217 16:08:20.013843 4874 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0424e18c7753675ab0f356ad6ceec1d7f991658ebe677a6ebdf033cf33cd519\": container with ID starting with a0424e18c7753675ab0f356ad6ceec1d7f991658ebe677a6ebdf033cf33cd519 not found: ID does not exist" containerID="a0424e18c7753675ab0f356ad6ceec1d7f991658ebe677a6ebdf033cf33cd519" Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.013937 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0424e18c7753675ab0f356ad6ceec1d7f991658ebe677a6ebdf033cf33cd519"} err="failed to get container status \"a0424e18c7753675ab0f356ad6ceec1d7f991658ebe677a6ebdf033cf33cd519\": rpc error: code = NotFound desc = could not find container \"a0424e18c7753675ab0f356ad6ceec1d7f991658ebe677a6ebdf033cf33cd519\": container with ID starting with a0424e18c7753675ab0f356ad6ceec1d7f991658ebe677a6ebdf033cf33cd519 not found: ID does not exist" Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.014014 4874 scope.go:117] "RemoveContainer" containerID="d5f8382020b257178f22d1bef30c120c0c6d5b04778360b50988ac118c9e6cab" Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.026401 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cw7tb"] Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.029662 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cw7tb"] Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.039941 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xjzv8"] Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.045759 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xjzv8"] Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 
16:08:20.046524 4874 scope.go:117] "RemoveContainer" containerID="d5f8382020b257178f22d1bef30c120c0c6d5b04778360b50988ac118c9e6cab" Feb 17 16:08:20 crc kubenswrapper[4874]: E0217 16:08:20.046934 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5f8382020b257178f22d1bef30c120c0c6d5b04778360b50988ac118c9e6cab\": container with ID starting with d5f8382020b257178f22d1bef30c120c0c6d5b04778360b50988ac118c9e6cab not found: ID does not exist" containerID="d5f8382020b257178f22d1bef30c120c0c6d5b04778360b50988ac118c9e6cab" Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.047020 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5f8382020b257178f22d1bef30c120c0c6d5b04778360b50988ac118c9e6cab"} err="failed to get container status \"d5f8382020b257178f22d1bef30c120c0c6d5b04778360b50988ac118c9e6cab\": rpc error: code = NotFound desc = could not find container \"d5f8382020b257178f22d1bef30c120c0c6d5b04778360b50988ac118c9e6cab\": container with ID starting with d5f8382020b257178f22d1bef30c120c0c6d5b04778360b50988ac118c9e6cab not found: ID does not exist" Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.048503 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/27263f61-9512-43ef-9457-9864e5292d2a-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-7tfnh\" (UID: \"27263f61-9512-43ef-9457-9864e5292d2a\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-7tfnh" Feb 17 16:08:20 crc kubenswrapper[4874]: E0217 16:08:20.048658 4874 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 17 16:08:20 crc kubenswrapper[4874]: E0217 16:08:20.048738 4874 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/27263f61-9512-43ef-9457-9864e5292d2a-tls-certificates podName:27263f61-9512-43ef-9457-9864e5292d2a nodeName:}" failed. No retries permitted until 2026-02-17 16:08:22.048718782 +0000 UTC m=+312.343107343 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/27263f61-9512-43ef-9457-9864e5292d2a-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-7tfnh" (UID: "27263f61-9512-43ef-9457-9864e5292d2a") : secret "prometheus-operator-admission-webhook-tls" not found Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.469072 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fa024ca-53bd-4aeb-a216-26ed6044cf24" path="/var/lib/kubelet/pods/9fa024ca-53bd-4aeb-a216-26ed6044cf24/volumes" Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.470260 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be182c78-fa2c-49ab-9ec4-698854f3ca51" path="/var/lib/kubelet/pods/be182c78-fa2c-49ab-9ec4-698854f3ca51/volumes" Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.636604 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.885805 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7cd5dc874b-vvpfr"] Feb 17 16:08:20 crc kubenswrapper[4874]: E0217 16:08:20.886534 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be182c78-fa2c-49ab-9ec4-698854f3ca51" containerName="route-controller-manager" Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.886556 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="be182c78-fa2c-49ab-9ec4-698854f3ca51" containerName="route-controller-manager" Feb 17 16:08:20 crc kubenswrapper[4874]: E0217 16:08:20.886581 4874 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9fa024ca-53bd-4aeb-a216-26ed6044cf24" containerName="controller-manager" Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.886591 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa024ca-53bd-4aeb-a216-26ed6044cf24" containerName="controller-manager" Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.886733 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="be182c78-fa2c-49ab-9ec4-698854f3ca51" containerName="route-controller-manager" Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.886750 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fa024ca-53bd-4aeb-a216-26ed6044cf24" containerName="controller-manager" Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.887231 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cd5dc874b-vvpfr" Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.889603 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.890102 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.890219 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.890386 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.890486 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.892132 4874 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"serving-cert" Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.893206 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5689cc9bcd-mf5xt"] Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.894366 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5689cc9bcd-mf5xt" Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.898389 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.898401 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.899630 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.899804 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.903806 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7cd5dc874b-vvpfr"] Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.904592 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.904851 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.905115 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 17 
16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.911578 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5689cc9bcd-mf5xt"] Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.960734 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/50b64da8-c13f-443b-8411-dd4334656a27-proxy-ca-bundles\") pod \"controller-manager-7cd5dc874b-vvpfr\" (UID: \"50b64da8-c13f-443b-8411-dd4334656a27\") " pod="openshift-controller-manager/controller-manager-7cd5dc874b-vvpfr" Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.960851 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfbb50e2-fe08-4d8f-8c57-c56312a77241-config\") pod \"route-controller-manager-5689cc9bcd-mf5xt\" (UID: \"cfbb50e2-fe08-4d8f-8c57-c56312a77241\") " pod="openshift-route-controller-manager/route-controller-manager-5689cc9bcd-mf5xt" Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.960909 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4mkf\" (UniqueName: \"kubernetes.io/projected/50b64da8-c13f-443b-8411-dd4334656a27-kube-api-access-x4mkf\") pod \"controller-manager-7cd5dc874b-vvpfr\" (UID: \"50b64da8-c13f-443b-8411-dd4334656a27\") " pod="openshift-controller-manager/controller-manager-7cd5dc874b-vvpfr" Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.960958 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50b64da8-c13f-443b-8411-dd4334656a27-client-ca\") pod \"controller-manager-7cd5dc874b-vvpfr\" (UID: \"50b64da8-c13f-443b-8411-dd4334656a27\") " pod="openshift-controller-manager/controller-manager-7cd5dc874b-vvpfr" Feb 17 16:08:20 crc 
kubenswrapper[4874]: I0217 16:08:20.961006 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cfbb50e2-fe08-4d8f-8c57-c56312a77241-client-ca\") pod \"route-controller-manager-5689cc9bcd-mf5xt\" (UID: \"cfbb50e2-fe08-4d8f-8c57-c56312a77241\") " pod="openshift-route-controller-manager/route-controller-manager-5689cc9bcd-mf5xt" Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.961108 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfbb50e2-fe08-4d8f-8c57-c56312a77241-serving-cert\") pod \"route-controller-manager-5689cc9bcd-mf5xt\" (UID: \"cfbb50e2-fe08-4d8f-8c57-c56312a77241\") " pod="openshift-route-controller-manager/route-controller-manager-5689cc9bcd-mf5xt" Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.961187 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50b64da8-c13f-443b-8411-dd4334656a27-serving-cert\") pod \"controller-manager-7cd5dc874b-vvpfr\" (UID: \"50b64da8-c13f-443b-8411-dd4334656a27\") " pod="openshift-controller-manager/controller-manager-7cd5dc874b-vvpfr" Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.961261 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98rsk\" (UniqueName: \"kubernetes.io/projected/cfbb50e2-fe08-4d8f-8c57-c56312a77241-kube-api-access-98rsk\") pod \"route-controller-manager-5689cc9bcd-mf5xt\" (UID: \"cfbb50e2-fe08-4d8f-8c57-c56312a77241\") " pod="openshift-route-controller-manager/route-controller-manager-5689cc9bcd-mf5xt" Feb 17 16:08:20 crc kubenswrapper[4874]: I0217 16:08:20.961316 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/50b64da8-c13f-443b-8411-dd4334656a27-config\") pod \"controller-manager-7cd5dc874b-vvpfr\" (UID: \"50b64da8-c13f-443b-8411-dd4334656a27\") " pod="openshift-controller-manager/controller-manager-7cd5dc874b-vvpfr" Feb 17 16:08:21 crc kubenswrapper[4874]: I0217 16:08:21.062723 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfbb50e2-fe08-4d8f-8c57-c56312a77241-config\") pod \"route-controller-manager-5689cc9bcd-mf5xt\" (UID: \"cfbb50e2-fe08-4d8f-8c57-c56312a77241\") " pod="openshift-route-controller-manager/route-controller-manager-5689cc9bcd-mf5xt" Feb 17 16:08:21 crc kubenswrapper[4874]: I0217 16:08:21.062834 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4mkf\" (UniqueName: \"kubernetes.io/projected/50b64da8-c13f-443b-8411-dd4334656a27-kube-api-access-x4mkf\") pod \"controller-manager-7cd5dc874b-vvpfr\" (UID: \"50b64da8-c13f-443b-8411-dd4334656a27\") " pod="openshift-controller-manager/controller-manager-7cd5dc874b-vvpfr" Feb 17 16:08:21 crc kubenswrapper[4874]: I0217 16:08:21.062872 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50b64da8-c13f-443b-8411-dd4334656a27-client-ca\") pod \"controller-manager-7cd5dc874b-vvpfr\" (UID: \"50b64da8-c13f-443b-8411-dd4334656a27\") " pod="openshift-controller-manager/controller-manager-7cd5dc874b-vvpfr" Feb 17 16:08:21 crc kubenswrapper[4874]: I0217 16:08:21.062908 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cfbb50e2-fe08-4d8f-8c57-c56312a77241-client-ca\") pod \"route-controller-manager-5689cc9bcd-mf5xt\" (UID: \"cfbb50e2-fe08-4d8f-8c57-c56312a77241\") " pod="openshift-route-controller-manager/route-controller-manager-5689cc9bcd-mf5xt" Feb 17 16:08:21 crc kubenswrapper[4874]: I0217 
16:08:21.062956 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfbb50e2-fe08-4d8f-8c57-c56312a77241-serving-cert\") pod \"route-controller-manager-5689cc9bcd-mf5xt\" (UID: \"cfbb50e2-fe08-4d8f-8c57-c56312a77241\") " pod="openshift-route-controller-manager/route-controller-manager-5689cc9bcd-mf5xt" Feb 17 16:08:21 crc kubenswrapper[4874]: I0217 16:08:21.063030 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98rsk\" (UniqueName: \"kubernetes.io/projected/cfbb50e2-fe08-4d8f-8c57-c56312a77241-kube-api-access-98rsk\") pod \"route-controller-manager-5689cc9bcd-mf5xt\" (UID: \"cfbb50e2-fe08-4d8f-8c57-c56312a77241\") " pod="openshift-route-controller-manager/route-controller-manager-5689cc9bcd-mf5xt" Feb 17 16:08:21 crc kubenswrapper[4874]: I0217 16:08:21.063061 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50b64da8-c13f-443b-8411-dd4334656a27-serving-cert\") pod \"controller-manager-7cd5dc874b-vvpfr\" (UID: \"50b64da8-c13f-443b-8411-dd4334656a27\") " pod="openshift-controller-manager/controller-manager-7cd5dc874b-vvpfr" Feb 17 16:08:21 crc kubenswrapper[4874]: I0217 16:08:21.063123 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50b64da8-c13f-443b-8411-dd4334656a27-config\") pod \"controller-manager-7cd5dc874b-vvpfr\" (UID: \"50b64da8-c13f-443b-8411-dd4334656a27\") " pod="openshift-controller-manager/controller-manager-7cd5dc874b-vvpfr" Feb 17 16:08:21 crc kubenswrapper[4874]: I0217 16:08:21.063192 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/50b64da8-c13f-443b-8411-dd4334656a27-proxy-ca-bundles\") pod \"controller-manager-7cd5dc874b-vvpfr\" (UID: 
\"50b64da8-c13f-443b-8411-dd4334656a27\") " pod="openshift-controller-manager/controller-manager-7cd5dc874b-vvpfr" Feb 17 16:08:21 crc kubenswrapper[4874]: I0217 16:08:21.064533 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/50b64da8-c13f-443b-8411-dd4334656a27-proxy-ca-bundles\") pod \"controller-manager-7cd5dc874b-vvpfr\" (UID: \"50b64da8-c13f-443b-8411-dd4334656a27\") " pod="openshift-controller-manager/controller-manager-7cd5dc874b-vvpfr" Feb 17 16:08:21 crc kubenswrapper[4874]: I0217 16:08:21.064709 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50b64da8-c13f-443b-8411-dd4334656a27-config\") pod \"controller-manager-7cd5dc874b-vvpfr\" (UID: \"50b64da8-c13f-443b-8411-dd4334656a27\") " pod="openshift-controller-manager/controller-manager-7cd5dc874b-vvpfr" Feb 17 16:08:21 crc kubenswrapper[4874]: I0217 16:08:21.064919 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cfbb50e2-fe08-4d8f-8c57-c56312a77241-client-ca\") pod \"route-controller-manager-5689cc9bcd-mf5xt\" (UID: \"cfbb50e2-fe08-4d8f-8c57-c56312a77241\") " pod="openshift-route-controller-manager/route-controller-manager-5689cc9bcd-mf5xt" Feb 17 16:08:21 crc kubenswrapper[4874]: I0217 16:08:21.065358 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50b64da8-c13f-443b-8411-dd4334656a27-client-ca\") pod \"controller-manager-7cd5dc874b-vvpfr\" (UID: \"50b64da8-c13f-443b-8411-dd4334656a27\") " pod="openshift-controller-manager/controller-manager-7cd5dc874b-vvpfr" Feb 17 16:08:21 crc kubenswrapper[4874]: I0217 16:08:21.065972 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfbb50e2-fe08-4d8f-8c57-c56312a77241-config\") pod 
\"route-controller-manager-5689cc9bcd-mf5xt\" (UID: \"cfbb50e2-fe08-4d8f-8c57-c56312a77241\") " pod="openshift-route-controller-manager/route-controller-manager-5689cc9bcd-mf5xt" Feb 17 16:08:21 crc kubenswrapper[4874]: I0217 16:08:21.069582 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfbb50e2-fe08-4d8f-8c57-c56312a77241-serving-cert\") pod \"route-controller-manager-5689cc9bcd-mf5xt\" (UID: \"cfbb50e2-fe08-4d8f-8c57-c56312a77241\") " pod="openshift-route-controller-manager/route-controller-manager-5689cc9bcd-mf5xt" Feb 17 16:08:21 crc kubenswrapper[4874]: I0217 16:08:21.069838 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50b64da8-c13f-443b-8411-dd4334656a27-serving-cert\") pod \"controller-manager-7cd5dc874b-vvpfr\" (UID: \"50b64da8-c13f-443b-8411-dd4334656a27\") " pod="openshift-controller-manager/controller-manager-7cd5dc874b-vvpfr" Feb 17 16:08:21 crc kubenswrapper[4874]: I0217 16:08:21.093902 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98rsk\" (UniqueName: \"kubernetes.io/projected/cfbb50e2-fe08-4d8f-8c57-c56312a77241-kube-api-access-98rsk\") pod \"route-controller-manager-5689cc9bcd-mf5xt\" (UID: \"cfbb50e2-fe08-4d8f-8c57-c56312a77241\") " pod="openshift-route-controller-manager/route-controller-manager-5689cc9bcd-mf5xt" Feb 17 16:08:21 crc kubenswrapper[4874]: I0217 16:08:21.095994 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4mkf\" (UniqueName: \"kubernetes.io/projected/50b64da8-c13f-443b-8411-dd4334656a27-kube-api-access-x4mkf\") pod \"controller-manager-7cd5dc874b-vvpfr\" (UID: \"50b64da8-c13f-443b-8411-dd4334656a27\") " pod="openshift-controller-manager/controller-manager-7cd5dc874b-vvpfr" Feb 17 16:08:21 crc kubenswrapper[4874]: I0217 16:08:21.206789 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7cd5dc874b-vvpfr" Feb 17 16:08:21 crc kubenswrapper[4874]: I0217 16:08:21.220804 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5689cc9bcd-mf5xt" Feb 17 16:08:21 crc kubenswrapper[4874]: I0217 16:08:21.460689 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7cd5dc874b-vvpfr"] Feb 17 16:08:21 crc kubenswrapper[4874]: W0217 16:08:21.464067 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50b64da8_c13f_443b_8411_dd4334656a27.slice/crio-13ca006ada20b448af96535da105437ecf96db5febaa8f27fc69a94a1a83bfa7 WatchSource:0}: Error finding container 13ca006ada20b448af96535da105437ecf96db5febaa8f27fc69a94a1a83bfa7: Status 404 returned error can't find the container with id 13ca006ada20b448af96535da105437ecf96db5febaa8f27fc69a94a1a83bfa7 Feb 17 16:08:21 crc kubenswrapper[4874]: I0217 16:08:21.496169 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5689cc9bcd-mf5xt"] Feb 17 16:08:21 crc kubenswrapper[4874]: W0217 16:08:21.502069 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcfbb50e2_fe08_4d8f_8c57_c56312a77241.slice/crio-61c85520ad162b905b1ef3c6c0a104e3a8b0be6e20e6fba0a65d16da1deb9002 WatchSource:0}: Error finding container 61c85520ad162b905b1ef3c6c0a104e3a8b0be6e20e6fba0a65d16da1deb9002: Status 404 returned error can't find the container with id 61c85520ad162b905b1ef3c6c0a104e3a8b0be6e20e6fba0a65d16da1deb9002 Feb 17 16:08:22 crc kubenswrapper[4874]: I0217 16:08:22.005232 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cd5dc874b-vvpfr" 
event={"ID":"50b64da8-c13f-443b-8411-dd4334656a27","Type":"ContainerStarted","Data":"02c826bbcd57f4e77b091aeca0240eace44adb4e10a8e18d815901f9dcd4575b"} Feb 17 16:08:22 crc kubenswrapper[4874]: I0217 16:08:22.005276 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cd5dc874b-vvpfr" event={"ID":"50b64da8-c13f-443b-8411-dd4334656a27","Type":"ContainerStarted","Data":"13ca006ada20b448af96535da105437ecf96db5febaa8f27fc69a94a1a83bfa7"} Feb 17 16:08:22 crc kubenswrapper[4874]: I0217 16:08:22.005399 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7cd5dc874b-vvpfr" Feb 17 16:08:22 crc kubenswrapper[4874]: I0217 16:08:22.006357 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5689cc9bcd-mf5xt" event={"ID":"cfbb50e2-fe08-4d8f-8c57-c56312a77241","Type":"ContainerStarted","Data":"86ba4e85497ceedad62b55ee3fc0c713cb9ae58fa1363ed10eba73178474286a"} Feb 17 16:08:22 crc kubenswrapper[4874]: I0217 16:08:22.006393 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5689cc9bcd-mf5xt" event={"ID":"cfbb50e2-fe08-4d8f-8c57-c56312a77241","Type":"ContainerStarted","Data":"61c85520ad162b905b1ef3c6c0a104e3a8b0be6e20e6fba0a65d16da1deb9002"} Feb 17 16:08:22 crc kubenswrapper[4874]: I0217 16:08:22.006574 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5689cc9bcd-mf5xt" Feb 17 16:08:22 crc kubenswrapper[4874]: I0217 16:08:22.009715 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7cd5dc874b-vvpfr" Feb 17 16:08:22 crc kubenswrapper[4874]: I0217 16:08:22.018800 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-7cd5dc874b-vvpfr" podStartSLOduration=3.018788802 podStartE2EDuration="3.018788802s" podCreationTimestamp="2026-02-17 16:08:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:08:22.018697949 +0000 UTC m=+312.313086510" watchObservedRunningTime="2026-02-17 16:08:22.018788802 +0000 UTC m=+312.313177373" Feb 17 16:08:22 crc kubenswrapper[4874]: I0217 16:08:22.039422 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5689cc9bcd-mf5xt" podStartSLOduration=3.039405962 podStartE2EDuration="3.039405962s" podCreationTimestamp="2026-02-17 16:08:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:08:22.035114733 +0000 UTC m=+312.329503304" watchObservedRunningTime="2026-02-17 16:08:22.039405962 +0000 UTC m=+312.333794523" Feb 17 16:08:22 crc kubenswrapper[4874]: I0217 16:08:22.073719 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/27263f61-9512-43ef-9457-9864e5292d2a-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-7tfnh\" (UID: \"27263f61-9512-43ef-9457-9864e5292d2a\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-7tfnh" Feb 17 16:08:22 crc kubenswrapper[4874]: E0217 16:08:22.074596 4874 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 17 16:08:22 crc kubenswrapper[4874]: E0217 16:08:22.074671 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/27263f61-9512-43ef-9457-9864e5292d2a-tls-certificates podName:27263f61-9512-43ef-9457-9864e5292d2a 
nodeName:}" failed. No retries permitted until 2026-02-17 16:08:26.07464801 +0000 UTC m=+316.369036661 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/27263f61-9512-43ef-9457-9864e5292d2a-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-7tfnh" (UID: "27263f61-9512-43ef-9457-9864e5292d2a") : secret "prometheus-operator-admission-webhook-tls" not found Feb 17 16:08:22 crc kubenswrapper[4874]: I0217 16:08:22.099405 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5689cc9bcd-mf5xt" Feb 17 16:08:26 crc kubenswrapper[4874]: I0217 16:08:26.124821 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/27263f61-9512-43ef-9457-9864e5292d2a-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-7tfnh\" (UID: \"27263f61-9512-43ef-9457-9864e5292d2a\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-7tfnh" Feb 17 16:08:26 crc kubenswrapper[4874]: E0217 16:08:26.124995 4874 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 17 16:08:26 crc kubenswrapper[4874]: E0217 16:08:26.125475 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/27263f61-9512-43ef-9457-9864e5292d2a-tls-certificates podName:27263f61-9512-43ef-9457-9864e5292d2a nodeName:}" failed. No retries permitted until 2026-02-17 16:08:34.125449903 +0000 UTC m=+324.419838494 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/27263f61-9512-43ef-9457-9864e5292d2a-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-7tfnh" (UID: "27263f61-9512-43ef-9457-9864e5292d2a") : secret "prometheus-operator-admission-webhook-tls" not found Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.025005 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7cd5dc874b-vvpfr"] Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.025620 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7cd5dc874b-vvpfr" podUID="50b64da8-c13f-443b-8411-dd4334656a27" containerName="controller-manager" containerID="cri-o://02c826bbcd57f4e77b091aeca0240eace44adb4e10a8e18d815901f9dcd4575b" gracePeriod=30 Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.061166 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5689cc9bcd-mf5xt"] Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.061448 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5689cc9bcd-mf5xt" podUID="cfbb50e2-fe08-4d8f-8c57-c56312a77241" containerName="route-controller-manager" containerID="cri-o://86ba4e85497ceedad62b55ee3fc0c713cb9ae58fa1363ed10eba73178474286a" gracePeriod=30 Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.562386 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5689cc9bcd-mf5xt" Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.674744 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7cd5dc874b-vvpfr"
Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.712759 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98rsk\" (UniqueName: \"kubernetes.io/projected/cfbb50e2-fe08-4d8f-8c57-c56312a77241-kube-api-access-98rsk\") pod \"cfbb50e2-fe08-4d8f-8c57-c56312a77241\" (UID: \"cfbb50e2-fe08-4d8f-8c57-c56312a77241\") "
Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.712838 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfbb50e2-fe08-4d8f-8c57-c56312a77241-config\") pod \"cfbb50e2-fe08-4d8f-8c57-c56312a77241\" (UID: \"cfbb50e2-fe08-4d8f-8c57-c56312a77241\") "
Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.713065 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfbb50e2-fe08-4d8f-8c57-c56312a77241-serving-cert\") pod \"cfbb50e2-fe08-4d8f-8c57-c56312a77241\" (UID: \"cfbb50e2-fe08-4d8f-8c57-c56312a77241\") "
Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.713141 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4mkf\" (UniqueName: \"kubernetes.io/projected/50b64da8-c13f-443b-8411-dd4334656a27-kube-api-access-x4mkf\") pod \"50b64da8-c13f-443b-8411-dd4334656a27\" (UID: \"50b64da8-c13f-443b-8411-dd4334656a27\") "
Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.713179 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cfbb50e2-fe08-4d8f-8c57-c56312a77241-client-ca\") pod \"cfbb50e2-fe08-4d8f-8c57-c56312a77241\" (UID: \"cfbb50e2-fe08-4d8f-8c57-c56312a77241\") "
Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.713223 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50b64da8-c13f-443b-8411-dd4334656a27-serving-cert\") pod \"50b64da8-c13f-443b-8411-dd4334656a27\" (UID: \"50b64da8-c13f-443b-8411-dd4334656a27\") "
Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.713258 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50b64da8-c13f-443b-8411-dd4334656a27-client-ca\") pod \"50b64da8-c13f-443b-8411-dd4334656a27\" (UID: \"50b64da8-c13f-443b-8411-dd4334656a27\") "
Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.713975 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfbb50e2-fe08-4d8f-8c57-c56312a77241-config" (OuterVolumeSpecName: "config") pod "cfbb50e2-fe08-4d8f-8c57-c56312a77241" (UID: "cfbb50e2-fe08-4d8f-8c57-c56312a77241"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.714271 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50b64da8-c13f-443b-8411-dd4334656a27-client-ca" (OuterVolumeSpecName: "client-ca") pod "50b64da8-c13f-443b-8411-dd4334656a27" (UID: "50b64da8-c13f-443b-8411-dd4334656a27"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.714693 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfbb50e2-fe08-4d8f-8c57-c56312a77241-client-ca" (OuterVolumeSpecName: "client-ca") pod "cfbb50e2-fe08-4d8f-8c57-c56312a77241" (UID: "cfbb50e2-fe08-4d8f-8c57-c56312a77241"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.722267 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfbb50e2-fe08-4d8f-8c57-c56312a77241-kube-api-access-98rsk" (OuterVolumeSpecName: "kube-api-access-98rsk") pod "cfbb50e2-fe08-4d8f-8c57-c56312a77241" (UID: "cfbb50e2-fe08-4d8f-8c57-c56312a77241"). InnerVolumeSpecName "kube-api-access-98rsk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.722306 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50b64da8-c13f-443b-8411-dd4334656a27-kube-api-access-x4mkf" (OuterVolumeSpecName: "kube-api-access-x4mkf") pod "50b64da8-c13f-443b-8411-dd4334656a27" (UID: "50b64da8-c13f-443b-8411-dd4334656a27"). InnerVolumeSpecName "kube-api-access-x4mkf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.722339 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50b64da8-c13f-443b-8411-dd4334656a27-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "50b64da8-c13f-443b-8411-dd4334656a27" (UID: "50b64da8-c13f-443b-8411-dd4334656a27"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.722664 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfbb50e2-fe08-4d8f-8c57-c56312a77241-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cfbb50e2-fe08-4d8f-8c57-c56312a77241" (UID: "cfbb50e2-fe08-4d8f-8c57-c56312a77241"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.814118 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50b64da8-c13f-443b-8411-dd4334656a27-config\") pod \"50b64da8-c13f-443b-8411-dd4334656a27\" (UID: \"50b64da8-c13f-443b-8411-dd4334656a27\") "
Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.814195 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/50b64da8-c13f-443b-8411-dd4334656a27-proxy-ca-bundles\") pod \"50b64da8-c13f-443b-8411-dd4334656a27\" (UID: \"50b64da8-c13f-443b-8411-dd4334656a27\") "
Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.814487 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98rsk\" (UniqueName: \"kubernetes.io/projected/cfbb50e2-fe08-4d8f-8c57-c56312a77241-kube-api-access-98rsk\") on node \"crc\" DevicePath \"\""
Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.814514 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfbb50e2-fe08-4d8f-8c57-c56312a77241-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.814529 4874 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfbb50e2-fe08-4d8f-8c57-c56312a77241-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.814546 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4mkf\" (UniqueName: \"kubernetes.io/projected/50b64da8-c13f-443b-8411-dd4334656a27-kube-api-access-x4mkf\") on node \"crc\" DevicePath \"\""
Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.814561 4874 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cfbb50e2-fe08-4d8f-8c57-c56312a77241-client-ca\") on node \"crc\" DevicePath \"\""
Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.814579 4874 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50b64da8-c13f-443b-8411-dd4334656a27-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.814593 4874 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50b64da8-c13f-443b-8411-dd4334656a27-client-ca\") on node \"crc\" DevicePath \"\""
Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.815220 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50b64da8-c13f-443b-8411-dd4334656a27-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "50b64da8-c13f-443b-8411-dd4334656a27" (UID: "50b64da8-c13f-443b-8411-dd4334656a27"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.815265 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50b64da8-c13f-443b-8411-dd4334656a27-config" (OuterVolumeSpecName: "config") pod "50b64da8-c13f-443b-8411-dd4334656a27" (UID: "50b64da8-c13f-443b-8411-dd4334656a27"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.914807 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-59c9d6bf67-7kcg4"]
Feb 17 16:08:32 crc kubenswrapper[4874]: E0217 16:08:32.915122 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfbb50e2-fe08-4d8f-8c57-c56312a77241" containerName="route-controller-manager"
Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.915142 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfbb50e2-fe08-4d8f-8c57-c56312a77241" containerName="route-controller-manager"
Feb 17 16:08:32 crc kubenswrapper[4874]: E0217 16:08:32.915170 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50b64da8-c13f-443b-8411-dd4334656a27" containerName="controller-manager"
Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.915180 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="50b64da8-c13f-443b-8411-dd4334656a27" containerName="controller-manager"
Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.915329 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfbb50e2-fe08-4d8f-8c57-c56312a77241" containerName="route-controller-manager"
Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.915354 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="50b64da8-c13f-443b-8411-dd4334656a27" containerName="controller-manager"
Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.915576 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50b64da8-c13f-443b-8411-dd4334656a27-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.915611 4874 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/50b64da8-c13f-443b-8411-dd4334656a27-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.915861 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59c9d6bf67-7kcg4"
Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.928381 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-656f6c855f-jnpkl"]
Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.929497 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-jnpkl"
Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.931964 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-59c9d6bf67-7kcg4"]
Feb 17 16:08:32 crc kubenswrapper[4874]: I0217 16:08:32.954238 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-656f6c855f-jnpkl"]
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.017064 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5gp8\" (UniqueName: \"kubernetes.io/projected/6ace4fc8-8ecc-4276-b682-fbd5a087fa45-kube-api-access-v5gp8\") pod \"controller-manager-59c9d6bf67-7kcg4\" (UID: \"6ace4fc8-8ecc-4276-b682-fbd5a087fa45\") " pod="openshift-controller-manager/controller-manager-59c9d6bf67-7kcg4"
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.017166 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ace4fc8-8ecc-4276-b682-fbd5a087fa45-serving-cert\") pod \"controller-manager-59c9d6bf67-7kcg4\" (UID: \"6ace4fc8-8ecc-4276-b682-fbd5a087fa45\") " pod="openshift-controller-manager/controller-manager-59c9d6bf67-7kcg4"
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.017218 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b65f997d-9930-4d62-881d-75ff90a7b2c0-client-ca\") pod \"route-controller-manager-656f6c855f-jnpkl\" (UID: \"b65f997d-9930-4d62-881d-75ff90a7b2c0\") " pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-jnpkl"
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.017246 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ace4fc8-8ecc-4276-b682-fbd5a087fa45-client-ca\") pod \"controller-manager-59c9d6bf67-7kcg4\" (UID: \"6ace4fc8-8ecc-4276-b682-fbd5a087fa45\") " pod="openshift-controller-manager/controller-manager-59c9d6bf67-7kcg4"
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.017283 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b65f997d-9930-4d62-881d-75ff90a7b2c0-config\") pod \"route-controller-manager-656f6c855f-jnpkl\" (UID: \"b65f997d-9930-4d62-881d-75ff90a7b2c0\") " pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-jnpkl"
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.017326 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ace4fc8-8ecc-4276-b682-fbd5a087fa45-proxy-ca-bundles\") pod \"controller-manager-59c9d6bf67-7kcg4\" (UID: \"6ace4fc8-8ecc-4276-b682-fbd5a087fa45\") " pod="openshift-controller-manager/controller-manager-59c9d6bf67-7kcg4"
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.017358 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b65f997d-9930-4d62-881d-75ff90a7b2c0-serving-cert\") pod \"route-controller-manager-656f6c855f-jnpkl\" (UID: \"b65f997d-9930-4d62-881d-75ff90a7b2c0\") " pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-jnpkl"
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.017397 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59zjl\" (UniqueName: \"kubernetes.io/projected/b65f997d-9930-4d62-881d-75ff90a7b2c0-kube-api-access-59zjl\") pod \"route-controller-manager-656f6c855f-jnpkl\" (UID: \"b65f997d-9930-4d62-881d-75ff90a7b2c0\") " pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-jnpkl"
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.017432 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ace4fc8-8ecc-4276-b682-fbd5a087fa45-config\") pod \"controller-manager-59c9d6bf67-7kcg4\" (UID: \"6ace4fc8-8ecc-4276-b682-fbd5a087fa45\") " pod="openshift-controller-manager/controller-manager-59c9d6bf67-7kcg4"
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.077764 4874 generic.go:334] "Generic (PLEG): container finished" podID="cfbb50e2-fe08-4d8f-8c57-c56312a77241" containerID="86ba4e85497ceedad62b55ee3fc0c713cb9ae58fa1363ed10eba73178474286a" exitCode=0
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.077825 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5689cc9bcd-mf5xt" event={"ID":"cfbb50e2-fe08-4d8f-8c57-c56312a77241","Type":"ContainerDied","Data":"86ba4e85497ceedad62b55ee3fc0c713cb9ae58fa1363ed10eba73178474286a"}
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.077845 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5689cc9bcd-mf5xt"
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.077880 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5689cc9bcd-mf5xt" event={"ID":"cfbb50e2-fe08-4d8f-8c57-c56312a77241","Type":"ContainerDied","Data":"61c85520ad162b905b1ef3c6c0a104e3a8b0be6e20e6fba0a65d16da1deb9002"}
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.077911 4874 scope.go:117] "RemoveContainer" containerID="86ba4e85497ceedad62b55ee3fc0c713cb9ae58fa1363ed10eba73178474286a"
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.080277 4874 generic.go:334] "Generic (PLEG): container finished" podID="50b64da8-c13f-443b-8411-dd4334656a27" containerID="02c826bbcd57f4e77b091aeca0240eace44adb4e10a8e18d815901f9dcd4575b" exitCode=0
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.080320 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cd5dc874b-vvpfr" event={"ID":"50b64da8-c13f-443b-8411-dd4334656a27","Type":"ContainerDied","Data":"02c826bbcd57f4e77b091aeca0240eace44adb4e10a8e18d815901f9dcd4575b"}
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.080364 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cd5dc874b-vvpfr" event={"ID":"50b64da8-c13f-443b-8411-dd4334656a27","Type":"ContainerDied","Data":"13ca006ada20b448af96535da105437ecf96db5febaa8f27fc69a94a1a83bfa7"}
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.080330 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cd5dc874b-vvpfr"
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.104058 4874 scope.go:117] "RemoveContainer" containerID="86ba4e85497ceedad62b55ee3fc0c713cb9ae58fa1363ed10eba73178474286a"
Feb 17 16:08:33 crc kubenswrapper[4874]: E0217 16:08:33.107726 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86ba4e85497ceedad62b55ee3fc0c713cb9ae58fa1363ed10eba73178474286a\": container with ID starting with 86ba4e85497ceedad62b55ee3fc0c713cb9ae58fa1363ed10eba73178474286a not found: ID does not exist" containerID="86ba4e85497ceedad62b55ee3fc0c713cb9ae58fa1363ed10eba73178474286a"
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.108196 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86ba4e85497ceedad62b55ee3fc0c713cb9ae58fa1363ed10eba73178474286a"} err="failed to get container status \"86ba4e85497ceedad62b55ee3fc0c713cb9ae58fa1363ed10eba73178474286a\": rpc error: code = NotFound desc = could not find container \"86ba4e85497ceedad62b55ee3fc0c713cb9ae58fa1363ed10eba73178474286a\": container with ID starting with 86ba4e85497ceedad62b55ee3fc0c713cb9ae58fa1363ed10eba73178474286a not found: ID does not exist"
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.108234 4874 scope.go:117] "RemoveContainer" containerID="02c826bbcd57f4e77b091aeca0240eace44adb4e10a8e18d815901f9dcd4575b"
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.112208 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7cd5dc874b-vvpfr"]
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.118114 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7cd5dc874b-vvpfr"]
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.118460 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b65f997d-9930-4d62-881d-75ff90a7b2c0-serving-cert\") pod \"route-controller-manager-656f6c855f-jnpkl\" (UID: \"b65f997d-9930-4d62-881d-75ff90a7b2c0\") " pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-jnpkl"
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.118494 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59zjl\" (UniqueName: \"kubernetes.io/projected/b65f997d-9930-4d62-881d-75ff90a7b2c0-kube-api-access-59zjl\") pod \"route-controller-manager-656f6c855f-jnpkl\" (UID: \"b65f997d-9930-4d62-881d-75ff90a7b2c0\") " pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-jnpkl"
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.118551 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ace4fc8-8ecc-4276-b682-fbd5a087fa45-config\") pod \"controller-manager-59c9d6bf67-7kcg4\" (UID: \"6ace4fc8-8ecc-4276-b682-fbd5a087fa45\") " pod="openshift-controller-manager/controller-manager-59c9d6bf67-7kcg4"
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.118590 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5gp8\" (UniqueName: \"kubernetes.io/projected/6ace4fc8-8ecc-4276-b682-fbd5a087fa45-kube-api-access-v5gp8\") pod \"controller-manager-59c9d6bf67-7kcg4\" (UID: \"6ace4fc8-8ecc-4276-b682-fbd5a087fa45\") " pod="openshift-controller-manager/controller-manager-59c9d6bf67-7kcg4"
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.118664 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ace4fc8-8ecc-4276-b682-fbd5a087fa45-serving-cert\") pod \"controller-manager-59c9d6bf67-7kcg4\" (UID: \"6ace4fc8-8ecc-4276-b682-fbd5a087fa45\") " pod="openshift-controller-manager/controller-manager-59c9d6bf67-7kcg4"
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.118750 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b65f997d-9930-4d62-881d-75ff90a7b2c0-client-ca\") pod \"route-controller-manager-656f6c855f-jnpkl\" (UID: \"b65f997d-9930-4d62-881d-75ff90a7b2c0\") " pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-jnpkl"
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.118796 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ace4fc8-8ecc-4276-b682-fbd5a087fa45-client-ca\") pod \"controller-manager-59c9d6bf67-7kcg4\" (UID: \"6ace4fc8-8ecc-4276-b682-fbd5a087fa45\") " pod="openshift-controller-manager/controller-manager-59c9d6bf67-7kcg4"
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.118843 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b65f997d-9930-4d62-881d-75ff90a7b2c0-config\") pod \"route-controller-manager-656f6c855f-jnpkl\" (UID: \"b65f997d-9930-4d62-881d-75ff90a7b2c0\") " pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-jnpkl"
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.118887 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ace4fc8-8ecc-4276-b682-fbd5a087fa45-proxy-ca-bundles\") pod \"controller-manager-59c9d6bf67-7kcg4\" (UID: \"6ace4fc8-8ecc-4276-b682-fbd5a087fa45\") " pod="openshift-controller-manager/controller-manager-59c9d6bf67-7kcg4"
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.121207 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b65f997d-9930-4d62-881d-75ff90a7b2c0-client-ca\") pod \"route-controller-manager-656f6c855f-jnpkl\" (UID: \"b65f997d-9930-4d62-881d-75ff90a7b2c0\") " pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-jnpkl"
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.121507 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ace4fc8-8ecc-4276-b682-fbd5a087fa45-client-ca\") pod \"controller-manager-59c9d6bf67-7kcg4\" (UID: \"6ace4fc8-8ecc-4276-b682-fbd5a087fa45\") " pod="openshift-controller-manager/controller-manager-59c9d6bf67-7kcg4"
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.123453 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ace4fc8-8ecc-4276-b682-fbd5a087fa45-config\") pod \"controller-manager-59c9d6bf67-7kcg4\" (UID: \"6ace4fc8-8ecc-4276-b682-fbd5a087fa45\") " pod="openshift-controller-manager/controller-manager-59c9d6bf67-7kcg4"
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.124063 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b65f997d-9930-4d62-881d-75ff90a7b2c0-config\") pod \"route-controller-manager-656f6c855f-jnpkl\" (UID: \"b65f997d-9930-4d62-881d-75ff90a7b2c0\") " pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-jnpkl"
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.124547 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b65f997d-9930-4d62-881d-75ff90a7b2c0-serving-cert\") pod \"route-controller-manager-656f6c855f-jnpkl\" (UID: \"b65f997d-9930-4d62-881d-75ff90a7b2c0\") " pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-jnpkl"
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.125071 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ace4fc8-8ecc-4276-b682-fbd5a087fa45-proxy-ca-bundles\") pod \"controller-manager-59c9d6bf67-7kcg4\" (UID: \"6ace4fc8-8ecc-4276-b682-fbd5a087fa45\") " pod="openshift-controller-manager/controller-manager-59c9d6bf67-7kcg4"
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.129273 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5689cc9bcd-mf5xt"]
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.134583 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ace4fc8-8ecc-4276-b682-fbd5a087fa45-serving-cert\") pod \"controller-manager-59c9d6bf67-7kcg4\" (UID: \"6ace4fc8-8ecc-4276-b682-fbd5a087fa45\") " pod="openshift-controller-manager/controller-manager-59c9d6bf67-7kcg4"
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.134890 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5689cc9bcd-mf5xt"]
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.137436 4874 scope.go:117] "RemoveContainer" containerID="02c826bbcd57f4e77b091aeca0240eace44adb4e10a8e18d815901f9dcd4575b"
Feb 17 16:08:33 crc kubenswrapper[4874]: E0217 16:08:33.137966 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02c826bbcd57f4e77b091aeca0240eace44adb4e10a8e18d815901f9dcd4575b\": container with ID starting with 02c826bbcd57f4e77b091aeca0240eace44adb4e10a8e18d815901f9dcd4575b not found: ID does not exist" containerID="02c826bbcd57f4e77b091aeca0240eace44adb4e10a8e18d815901f9dcd4575b"
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.138054 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02c826bbcd57f4e77b091aeca0240eace44adb4e10a8e18d815901f9dcd4575b"} err="failed to get container status \"02c826bbcd57f4e77b091aeca0240eace44adb4e10a8e18d815901f9dcd4575b\": rpc error: code = NotFound desc = could not find container \"02c826bbcd57f4e77b091aeca0240eace44adb4e10a8e18d815901f9dcd4575b\": container with ID starting with 02c826bbcd57f4e77b091aeca0240eace44adb4e10a8e18d815901f9dcd4575b not found: ID does not exist"
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.142359 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59zjl\" (UniqueName: \"kubernetes.io/projected/b65f997d-9930-4d62-881d-75ff90a7b2c0-kube-api-access-59zjl\") pod \"route-controller-manager-656f6c855f-jnpkl\" (UID: \"b65f997d-9930-4d62-881d-75ff90a7b2c0\") " pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-jnpkl"
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.142800 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5gp8\" (UniqueName: \"kubernetes.io/projected/6ace4fc8-8ecc-4276-b682-fbd5a087fa45-kube-api-access-v5gp8\") pod \"controller-manager-59c9d6bf67-7kcg4\" (UID: \"6ace4fc8-8ecc-4276-b682-fbd5a087fa45\") " pod="openshift-controller-manager/controller-manager-59c9d6bf67-7kcg4"
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.246372 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59c9d6bf67-7kcg4"
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.259027 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-jnpkl"
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.471471 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-59c9d6bf67-7kcg4"]
Feb 17 16:08:33 crc kubenswrapper[4874]: I0217 16:08:33.774417 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-656f6c855f-jnpkl"]
Feb 17 16:08:34 crc kubenswrapper[4874]: I0217 16:08:34.086653 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-jnpkl" event={"ID":"b65f997d-9930-4d62-881d-75ff90a7b2c0","Type":"ContainerStarted","Data":"e2a976409b4fd5757176ccdfdbed0b5d9717b18ddd506dee09fd834749515acd"}
Feb 17 16:08:34 crc kubenswrapper[4874]: I0217 16:08:34.087786 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-jnpkl"
Feb 17 16:08:34 crc kubenswrapper[4874]: I0217 16:08:34.087856 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-jnpkl" event={"ID":"b65f997d-9930-4d62-881d-75ff90a7b2c0","Type":"ContainerStarted","Data":"46e2986521f3aced9f055c9f9b8ff85b92737620af8476fc1811b5cc57824bce"}
Feb 17 16:08:34 crc kubenswrapper[4874]: I0217 16:08:34.089900 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59c9d6bf67-7kcg4" event={"ID":"6ace4fc8-8ecc-4276-b682-fbd5a087fa45","Type":"ContainerStarted","Data":"31768878fd0082321369fbd898caaa4f2cb1b708482f6f70b496cdf7f57323a4"}
Feb 17 16:08:34 crc kubenswrapper[4874]: I0217 16:08:34.089937 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59c9d6bf67-7kcg4" event={"ID":"6ace4fc8-8ecc-4276-b682-fbd5a087fa45","Type":"ContainerStarted","Data":"0f08c99c4052a54d5f944376fcd422a57412049b5a38b2fd6cc96c531f8a6d6a"}
Feb 17 16:08:34 crc kubenswrapper[4874]: I0217 16:08:34.090352 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-59c9d6bf67-7kcg4"
Feb 17 16:08:34 crc kubenswrapper[4874]: I0217 16:08:34.094726 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-59c9d6bf67-7kcg4"
Feb 17 16:08:34 crc kubenswrapper[4874]: I0217 16:08:34.118690 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-59c9d6bf67-7kcg4" podStartSLOduration=2.118672743 podStartE2EDuration="2.118672743s" podCreationTimestamp="2026-02-17 16:08:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:08:34.116415366 +0000 UTC m=+324.410803957" watchObservedRunningTime="2026-02-17 16:08:34.118672743 +0000 UTC m=+324.413061304"
Feb 17 16:08:34 crc kubenswrapper[4874]: I0217 16:08:34.120015 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-jnpkl" podStartSLOduration=2.120010357 podStartE2EDuration="2.120010357s" podCreationTimestamp="2026-02-17 16:08:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:08:34.103014958 +0000 UTC m=+324.397403519" watchObservedRunningTime="2026-02-17 16:08:34.120010357 +0000 UTC m=+324.414398928"
Feb 17 16:08:34 crc kubenswrapper[4874]: I0217 16:08:34.148021 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/27263f61-9512-43ef-9457-9864e5292d2a-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-7tfnh\" (UID: \"27263f61-9512-43ef-9457-9864e5292d2a\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-7tfnh"
Feb 17 16:08:34 crc kubenswrapper[4874]: E0217 16:08:34.148297 4874 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found
Feb 17 16:08:34 crc kubenswrapper[4874]: E0217 16:08:34.148382 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/27263f61-9512-43ef-9457-9864e5292d2a-tls-certificates podName:27263f61-9512-43ef-9457-9864e5292d2a nodeName:}" failed. No retries permitted until 2026-02-17 16:08:50.148360032 +0000 UTC m=+340.442748613 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/27263f61-9512-43ef-9457-9864e5292d2a-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-7tfnh" (UID: "27263f61-9512-43ef-9457-9864e5292d2a") : secret "prometheus-operator-admission-webhook-tls" not found
Feb 17 16:08:34 crc kubenswrapper[4874]: I0217 16:08:34.468440 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50b64da8-c13f-443b-8411-dd4334656a27" path="/var/lib/kubelet/pods/50b64da8-c13f-443b-8411-dd4334656a27/volumes"
Feb 17 16:08:34 crc kubenswrapper[4874]: I0217 16:08:34.469542 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfbb50e2-fe08-4d8f-8c57-c56312a77241" path="/var/lib/kubelet/pods/cfbb50e2-fe08-4d8f-8c57-c56312a77241/volumes"
Feb 17 16:08:34 crc kubenswrapper[4874]: I0217 16:08:34.591695 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-jnpkl"
Feb 17 16:08:43 crc kubenswrapper[4874]: I0217 16:08:43.963362 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6j6n7"]
Feb 17 16:08:43 crc kubenswrapper[4874]: I0217 16:08:43.965049 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6j6n7"
Feb 17 16:08:43 crc kubenswrapper[4874]: I0217 16:08:43.966752 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 17 16:08:43 crc kubenswrapper[4874]: I0217 16:08:43.976498 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6j6n7"]
Feb 17 16:08:43 crc kubenswrapper[4874]: I0217 16:08:43.987958 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4018f0d2-92f6-4fb2-9055-09a94ebd95a2-catalog-content\") pod \"redhat-operators-6j6n7\" (UID: \"4018f0d2-92f6-4fb2-9055-09a94ebd95a2\") " pod="openshift-marketplace/redhat-operators-6j6n7"
Feb 17 16:08:43 crc kubenswrapper[4874]: I0217 16:08:43.988012 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9s6v\" (UniqueName: \"kubernetes.io/projected/4018f0d2-92f6-4fb2-9055-09a94ebd95a2-kube-api-access-p9s6v\") pod \"redhat-operators-6j6n7\" (UID: \"4018f0d2-92f6-4fb2-9055-09a94ebd95a2\") " pod="openshift-marketplace/redhat-operators-6j6n7"
Feb 17 16:08:43 crc kubenswrapper[4874]: I0217 16:08:43.988058 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4018f0d2-92f6-4fb2-9055-09a94ebd95a2-utilities\") pod \"redhat-operators-6j6n7\" (UID: \"4018f0d2-92f6-4fb2-9055-09a94ebd95a2\") " pod="openshift-marketplace/redhat-operators-6j6n7"
Feb 17 16:08:44 crc kubenswrapper[4874]: I0217 16:08:44.088802 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4018f0d2-92f6-4fb2-9055-09a94ebd95a2-utilities\") pod \"redhat-operators-6j6n7\" (UID: \"4018f0d2-92f6-4fb2-9055-09a94ebd95a2\") " pod="openshift-marketplace/redhat-operators-6j6n7"
Feb 17 16:08:44 crc kubenswrapper[4874]: I0217 16:08:44.088881 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4018f0d2-92f6-4fb2-9055-09a94ebd95a2-catalog-content\") pod \"redhat-operators-6j6n7\" (UID: \"4018f0d2-92f6-4fb2-9055-09a94ebd95a2\") " pod="openshift-marketplace/redhat-operators-6j6n7"
Feb 17 16:08:44 crc kubenswrapper[4874]: I0217 16:08:44.088908 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9s6v\" (UniqueName: \"kubernetes.io/projected/4018f0d2-92f6-4fb2-9055-09a94ebd95a2-kube-api-access-p9s6v\") pod \"redhat-operators-6j6n7\" (UID: \"4018f0d2-92f6-4fb2-9055-09a94ebd95a2\") " pod="openshift-marketplace/redhat-operators-6j6n7"
Feb 17 16:08:44 crc kubenswrapper[4874]: I0217 16:08:44.089382 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4018f0d2-92f6-4fb2-9055-09a94ebd95a2-utilities\") pod \"redhat-operators-6j6n7\" (UID: \"4018f0d2-92f6-4fb2-9055-09a94ebd95a2\") " pod="openshift-marketplace/redhat-operators-6j6n7"
Feb 17 16:08:44 crc kubenswrapper[4874]: I0217 16:08:44.089697 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4018f0d2-92f6-4fb2-9055-09a94ebd95a2-catalog-content\") pod \"redhat-operators-6j6n7\" (UID: \"4018f0d2-92f6-4fb2-9055-09a94ebd95a2\") " pod="openshift-marketplace/redhat-operators-6j6n7"
Feb 17 16:08:44 crc kubenswrapper[4874]: I0217 16:08:44.107774 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9s6v\"
(UniqueName: \"kubernetes.io/projected/4018f0d2-92f6-4fb2-9055-09a94ebd95a2-kube-api-access-p9s6v\") pod \"redhat-operators-6j6n7\" (UID: \"4018f0d2-92f6-4fb2-9055-09a94ebd95a2\") " pod="openshift-marketplace/redhat-operators-6j6n7" Feb 17 16:08:44 crc kubenswrapper[4874]: I0217 16:08:44.170313 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v9f4j"] Feb 17 16:08:44 crc kubenswrapper[4874]: I0217 16:08:44.171876 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v9f4j" Feb 17 16:08:44 crc kubenswrapper[4874]: I0217 16:08:44.174591 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 17 16:08:44 crc kubenswrapper[4874]: I0217 16:08:44.183833 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v9f4j"] Feb 17 16:08:44 crc kubenswrapper[4874]: I0217 16:08:44.190358 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfk7m\" (UniqueName: \"kubernetes.io/projected/5d972eec-e9fa-4a61-bfca-998ada5663cd-kube-api-access-sfk7m\") pod \"redhat-marketplace-v9f4j\" (UID: \"5d972eec-e9fa-4a61-bfca-998ada5663cd\") " pod="openshift-marketplace/redhat-marketplace-v9f4j" Feb 17 16:08:44 crc kubenswrapper[4874]: I0217 16:08:44.190538 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d972eec-e9fa-4a61-bfca-998ada5663cd-catalog-content\") pod \"redhat-marketplace-v9f4j\" (UID: \"5d972eec-e9fa-4a61-bfca-998ada5663cd\") " pod="openshift-marketplace/redhat-marketplace-v9f4j" Feb 17 16:08:44 crc kubenswrapper[4874]: I0217 16:08:44.190655 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5d972eec-e9fa-4a61-bfca-998ada5663cd-utilities\") pod \"redhat-marketplace-v9f4j\" (UID: \"5d972eec-e9fa-4a61-bfca-998ada5663cd\") " pod="openshift-marketplace/redhat-marketplace-v9f4j" Feb 17 16:08:44 crc kubenswrapper[4874]: I0217 16:08:44.291899 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfk7m\" (UniqueName: \"kubernetes.io/projected/5d972eec-e9fa-4a61-bfca-998ada5663cd-kube-api-access-sfk7m\") pod \"redhat-marketplace-v9f4j\" (UID: \"5d972eec-e9fa-4a61-bfca-998ada5663cd\") " pod="openshift-marketplace/redhat-marketplace-v9f4j" Feb 17 16:08:44 crc kubenswrapper[4874]: I0217 16:08:44.291951 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d972eec-e9fa-4a61-bfca-998ada5663cd-catalog-content\") pod \"redhat-marketplace-v9f4j\" (UID: \"5d972eec-e9fa-4a61-bfca-998ada5663cd\") " pod="openshift-marketplace/redhat-marketplace-v9f4j" Feb 17 16:08:44 crc kubenswrapper[4874]: I0217 16:08:44.291990 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d972eec-e9fa-4a61-bfca-998ada5663cd-utilities\") pod \"redhat-marketplace-v9f4j\" (UID: \"5d972eec-e9fa-4a61-bfca-998ada5663cd\") " pod="openshift-marketplace/redhat-marketplace-v9f4j" Feb 17 16:08:44 crc kubenswrapper[4874]: I0217 16:08:44.292452 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d972eec-e9fa-4a61-bfca-998ada5663cd-utilities\") pod \"redhat-marketplace-v9f4j\" (UID: \"5d972eec-e9fa-4a61-bfca-998ada5663cd\") " pod="openshift-marketplace/redhat-marketplace-v9f4j" Feb 17 16:08:44 crc kubenswrapper[4874]: I0217 16:08:44.292702 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5d972eec-e9fa-4a61-bfca-998ada5663cd-catalog-content\") pod \"redhat-marketplace-v9f4j\" (UID: \"5d972eec-e9fa-4a61-bfca-998ada5663cd\") " pod="openshift-marketplace/redhat-marketplace-v9f4j" Feb 17 16:08:44 crc kubenswrapper[4874]: I0217 16:08:44.319500 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfk7m\" (UniqueName: \"kubernetes.io/projected/5d972eec-e9fa-4a61-bfca-998ada5663cd-kube-api-access-sfk7m\") pod \"redhat-marketplace-v9f4j\" (UID: \"5d972eec-e9fa-4a61-bfca-998ada5663cd\") " pod="openshift-marketplace/redhat-marketplace-v9f4j" Feb 17 16:08:44 crc kubenswrapper[4874]: I0217 16:08:44.332121 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6j6n7" Feb 17 16:08:44 crc kubenswrapper[4874]: I0217 16:08:44.499635 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v9f4j" Feb 17 16:08:44 crc kubenswrapper[4874]: I0217 16:08:44.750842 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6j6n7"] Feb 17 16:08:44 crc kubenswrapper[4874]: I0217 16:08:44.896492 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v9f4j"] Feb 17 16:08:45 crc kubenswrapper[4874]: I0217 16:08:45.155756 4874 generic.go:334] "Generic (PLEG): container finished" podID="4018f0d2-92f6-4fb2-9055-09a94ebd95a2" containerID="b924335d5dae93aa9d6735c61949aff76a86c6afd8e4946d6275d7788ef7ecce" exitCode=0 Feb 17 16:08:45 crc kubenswrapper[4874]: I0217 16:08:45.155854 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6j6n7" event={"ID":"4018f0d2-92f6-4fb2-9055-09a94ebd95a2","Type":"ContainerDied","Data":"b924335d5dae93aa9d6735c61949aff76a86c6afd8e4946d6275d7788ef7ecce"} Feb 17 16:08:45 crc kubenswrapper[4874]: I0217 16:08:45.155893 4874 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6j6n7" event={"ID":"4018f0d2-92f6-4fb2-9055-09a94ebd95a2","Type":"ContainerStarted","Data":"e900efdfbaabafd97c260561cb74017980e5cf6d8ae6b0485ca3568cf9e6358e"} Feb 17 16:08:45 crc kubenswrapper[4874]: I0217 16:08:45.160066 4874 generic.go:334] "Generic (PLEG): container finished" podID="5d972eec-e9fa-4a61-bfca-998ada5663cd" containerID="4b46ba88b2f8dd283dcdc00c4da3acf4336c9719843cf1a9171ce8ef3a7b79cd" exitCode=0 Feb 17 16:08:45 crc kubenswrapper[4874]: I0217 16:08:45.160183 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v9f4j" event={"ID":"5d972eec-e9fa-4a61-bfca-998ada5663cd","Type":"ContainerDied","Data":"4b46ba88b2f8dd283dcdc00c4da3acf4336c9719843cf1a9171ce8ef3a7b79cd"} Feb 17 16:08:45 crc kubenswrapper[4874]: I0217 16:08:45.160227 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v9f4j" event={"ID":"5d972eec-e9fa-4a61-bfca-998ada5663cd","Type":"ContainerStarted","Data":"70e3cbe4bd258a9f8c4532f12a6d285da547d8fee3f1ad638b01269ed4bd98dc"} Feb 17 16:08:45 crc kubenswrapper[4874]: I0217 16:08:45.968034 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7h5mh"] Feb 17 16:08:45 crc kubenswrapper[4874]: I0217 16:08:45.970326 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7h5mh" Feb 17 16:08:45 crc kubenswrapper[4874]: I0217 16:08:45.973464 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 17 16:08:45 crc kubenswrapper[4874]: I0217 16:08:45.974465 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7h5mh"] Feb 17 16:08:46 crc kubenswrapper[4874]: I0217 16:08:46.022622 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7dkj\" (UniqueName: \"kubernetes.io/projected/053b3c4e-8d22-4a31-ba82-2c00f2bcf76f-kube-api-access-p7dkj\") pod \"community-operators-7h5mh\" (UID: \"053b3c4e-8d22-4a31-ba82-2c00f2bcf76f\") " pod="openshift-marketplace/community-operators-7h5mh" Feb 17 16:08:46 crc kubenswrapper[4874]: I0217 16:08:46.022695 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/053b3c4e-8d22-4a31-ba82-2c00f2bcf76f-catalog-content\") pod \"community-operators-7h5mh\" (UID: \"053b3c4e-8d22-4a31-ba82-2c00f2bcf76f\") " pod="openshift-marketplace/community-operators-7h5mh" Feb 17 16:08:46 crc kubenswrapper[4874]: I0217 16:08:46.022724 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/053b3c4e-8d22-4a31-ba82-2c00f2bcf76f-utilities\") pod \"community-operators-7h5mh\" (UID: \"053b3c4e-8d22-4a31-ba82-2c00f2bcf76f\") " pod="openshift-marketplace/community-operators-7h5mh" Feb 17 16:08:46 crc kubenswrapper[4874]: I0217 16:08:46.123845 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7dkj\" (UniqueName: \"kubernetes.io/projected/053b3c4e-8d22-4a31-ba82-2c00f2bcf76f-kube-api-access-p7dkj\") pod \"community-operators-7h5mh\" 
(UID: \"053b3c4e-8d22-4a31-ba82-2c00f2bcf76f\") " pod="openshift-marketplace/community-operators-7h5mh" Feb 17 16:08:46 crc kubenswrapper[4874]: I0217 16:08:46.123936 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/053b3c4e-8d22-4a31-ba82-2c00f2bcf76f-catalog-content\") pod \"community-operators-7h5mh\" (UID: \"053b3c4e-8d22-4a31-ba82-2c00f2bcf76f\") " pod="openshift-marketplace/community-operators-7h5mh" Feb 17 16:08:46 crc kubenswrapper[4874]: I0217 16:08:46.123972 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/053b3c4e-8d22-4a31-ba82-2c00f2bcf76f-utilities\") pod \"community-operators-7h5mh\" (UID: \"053b3c4e-8d22-4a31-ba82-2c00f2bcf76f\") " pod="openshift-marketplace/community-operators-7h5mh" Feb 17 16:08:46 crc kubenswrapper[4874]: I0217 16:08:46.124644 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/053b3c4e-8d22-4a31-ba82-2c00f2bcf76f-catalog-content\") pod \"community-operators-7h5mh\" (UID: \"053b3c4e-8d22-4a31-ba82-2c00f2bcf76f\") " pod="openshift-marketplace/community-operators-7h5mh" Feb 17 16:08:46 crc kubenswrapper[4874]: I0217 16:08:46.124670 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/053b3c4e-8d22-4a31-ba82-2c00f2bcf76f-utilities\") pod \"community-operators-7h5mh\" (UID: \"053b3c4e-8d22-4a31-ba82-2c00f2bcf76f\") " pod="openshift-marketplace/community-operators-7h5mh" Feb 17 16:08:46 crc kubenswrapper[4874]: I0217 16:08:46.157304 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7dkj\" (UniqueName: \"kubernetes.io/projected/053b3c4e-8d22-4a31-ba82-2c00f2bcf76f-kube-api-access-p7dkj\") pod \"community-operators-7h5mh\" (UID: \"053b3c4e-8d22-4a31-ba82-2c00f2bcf76f\") " 
pod="openshift-marketplace/community-operators-7h5mh" Feb 17 16:08:46 crc kubenswrapper[4874]: I0217 16:08:46.175405 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6j6n7" event={"ID":"4018f0d2-92f6-4fb2-9055-09a94ebd95a2","Type":"ContainerStarted","Data":"1d9535b8b333593bd7a4faaa94edf0c331e549df8585e1c313116d687fb39b26"} Feb 17 16:08:46 crc kubenswrapper[4874]: I0217 16:08:46.179665 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v9f4j" event={"ID":"5d972eec-e9fa-4a61-bfca-998ada5663cd","Type":"ContainerStarted","Data":"ea33d86f1232c4bbb91490af89ee0b2add4c6b6bb5ca7725c53a09e5d9e31c05"} Feb 17 16:08:46 crc kubenswrapper[4874]: I0217 16:08:46.305743 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7h5mh" Feb 17 16:08:46 crc kubenswrapper[4874]: I0217 16:08:46.714186 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7h5mh"] Feb 17 16:08:46 crc kubenswrapper[4874]: W0217 16:08:46.736972 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod053b3c4e_8d22_4a31_ba82_2c00f2bcf76f.slice/crio-14184360529b39f2336681d9d48c7d2106f4de2108e5b2126a76b9e6ffd51f58 WatchSource:0}: Error finding container 14184360529b39f2336681d9d48c7d2106f4de2108e5b2126a76b9e6ffd51f58: Status 404 returned error can't find the container with id 14184360529b39f2336681d9d48c7d2106f4de2108e5b2126a76b9e6ffd51f58 Feb 17 16:08:47 crc kubenswrapper[4874]: I0217 16:08:47.187411 4874 generic.go:334] "Generic (PLEG): container finished" podID="053b3c4e-8d22-4a31-ba82-2c00f2bcf76f" containerID="30b6ae7f2391fd763fb11aaab74ce12ecf9dc11324a49e025bb9c215502d107c" exitCode=0 Feb 17 16:08:47 crc kubenswrapper[4874]: I0217 16:08:47.187491 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-7h5mh" event={"ID":"053b3c4e-8d22-4a31-ba82-2c00f2bcf76f","Type":"ContainerDied","Data":"30b6ae7f2391fd763fb11aaab74ce12ecf9dc11324a49e025bb9c215502d107c"} Feb 17 16:08:47 crc kubenswrapper[4874]: I0217 16:08:47.187547 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7h5mh" event={"ID":"053b3c4e-8d22-4a31-ba82-2c00f2bcf76f","Type":"ContainerStarted","Data":"14184360529b39f2336681d9d48c7d2106f4de2108e5b2126a76b9e6ffd51f58"} Feb 17 16:08:47 crc kubenswrapper[4874]: I0217 16:08:47.191721 4874 generic.go:334] "Generic (PLEG): container finished" podID="5d972eec-e9fa-4a61-bfca-998ada5663cd" containerID="ea33d86f1232c4bbb91490af89ee0b2add4c6b6bb5ca7725c53a09e5d9e31c05" exitCode=0 Feb 17 16:08:47 crc kubenswrapper[4874]: I0217 16:08:47.191836 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v9f4j" event={"ID":"5d972eec-e9fa-4a61-bfca-998ada5663cd","Type":"ContainerDied","Data":"ea33d86f1232c4bbb91490af89ee0b2add4c6b6bb5ca7725c53a09e5d9e31c05"} Feb 17 16:08:47 crc kubenswrapper[4874]: I0217 16:08:47.191899 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v9f4j" event={"ID":"5d972eec-e9fa-4a61-bfca-998ada5663cd","Type":"ContainerStarted","Data":"69c3858a2c01d28c60fcb7b9ddefe9f9deb2a952f62efccacdde60246c3fbb7d"} Feb 17 16:08:47 crc kubenswrapper[4874]: I0217 16:08:47.194032 4874 generic.go:334] "Generic (PLEG): container finished" podID="4018f0d2-92f6-4fb2-9055-09a94ebd95a2" containerID="1d9535b8b333593bd7a4faaa94edf0c331e549df8585e1c313116d687fb39b26" exitCode=0 Feb 17 16:08:47 crc kubenswrapper[4874]: I0217 16:08:47.194068 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6j6n7" event={"ID":"4018f0d2-92f6-4fb2-9055-09a94ebd95a2","Type":"ContainerDied","Data":"1d9535b8b333593bd7a4faaa94edf0c331e549df8585e1c313116d687fb39b26"} 
Feb 17 16:08:47 crc kubenswrapper[4874]: I0217 16:08:47.230096 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v9f4j" podStartSLOduration=1.756180431 podStartE2EDuration="3.230057093s" podCreationTimestamp="2026-02-17 16:08:44 +0000 UTC" firstStartedPulling="2026-02-17 16:08:45.168368742 +0000 UTC m=+335.462757333" lastFinishedPulling="2026-02-17 16:08:46.642245404 +0000 UTC m=+336.936633995" observedRunningTime="2026-02-17 16:08:47.228942944 +0000 UTC m=+337.523331515" watchObservedRunningTime="2026-02-17 16:08:47.230057093 +0000 UTC m=+337.524445664" Feb 17 16:08:47 crc kubenswrapper[4874]: I0217 16:08:47.367455 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-95q59"] Feb 17 16:08:47 crc kubenswrapper[4874]: I0217 16:08:47.370184 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-95q59" Feb 17 16:08:47 crc kubenswrapper[4874]: I0217 16:08:47.373636 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 17 16:08:47 crc kubenswrapper[4874]: I0217 16:08:47.375625 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-95q59"] Feb 17 16:08:47 crc kubenswrapper[4874]: I0217 16:08:47.440268 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdb34b07-ca7a-4ccd-8e89-8ca0a1c51ee1-catalog-content\") pod \"certified-operators-95q59\" (UID: \"fdb34b07-ca7a-4ccd-8e89-8ca0a1c51ee1\") " pod="openshift-marketplace/certified-operators-95q59" Feb 17 16:08:47 crc kubenswrapper[4874]: I0217 16:08:47.440331 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb478\" (UniqueName: 
\"kubernetes.io/projected/fdb34b07-ca7a-4ccd-8e89-8ca0a1c51ee1-kube-api-access-fb478\") pod \"certified-operators-95q59\" (UID: \"fdb34b07-ca7a-4ccd-8e89-8ca0a1c51ee1\") " pod="openshift-marketplace/certified-operators-95q59" Feb 17 16:08:47 crc kubenswrapper[4874]: I0217 16:08:47.440360 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdb34b07-ca7a-4ccd-8e89-8ca0a1c51ee1-utilities\") pod \"certified-operators-95q59\" (UID: \"fdb34b07-ca7a-4ccd-8e89-8ca0a1c51ee1\") " pod="openshift-marketplace/certified-operators-95q59" Feb 17 16:08:47 crc kubenswrapper[4874]: I0217 16:08:47.541631 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdb34b07-ca7a-4ccd-8e89-8ca0a1c51ee1-catalog-content\") pod \"certified-operators-95q59\" (UID: \"fdb34b07-ca7a-4ccd-8e89-8ca0a1c51ee1\") " pod="openshift-marketplace/certified-operators-95q59" Feb 17 16:08:47 crc kubenswrapper[4874]: I0217 16:08:47.541985 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb478\" (UniqueName: \"kubernetes.io/projected/fdb34b07-ca7a-4ccd-8e89-8ca0a1c51ee1-kube-api-access-fb478\") pod \"certified-operators-95q59\" (UID: \"fdb34b07-ca7a-4ccd-8e89-8ca0a1c51ee1\") " pod="openshift-marketplace/certified-operators-95q59" Feb 17 16:08:47 crc kubenswrapper[4874]: I0217 16:08:47.542146 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdb34b07-ca7a-4ccd-8e89-8ca0a1c51ee1-utilities\") pod \"certified-operators-95q59\" (UID: \"fdb34b07-ca7a-4ccd-8e89-8ca0a1c51ee1\") " pod="openshift-marketplace/certified-operators-95q59" Feb 17 16:08:47 crc kubenswrapper[4874]: I0217 16:08:47.542304 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/fdb34b07-ca7a-4ccd-8e89-8ca0a1c51ee1-catalog-content\") pod \"certified-operators-95q59\" (UID: \"fdb34b07-ca7a-4ccd-8e89-8ca0a1c51ee1\") " pod="openshift-marketplace/certified-operators-95q59" Feb 17 16:08:47 crc kubenswrapper[4874]: I0217 16:08:47.542661 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdb34b07-ca7a-4ccd-8e89-8ca0a1c51ee1-utilities\") pod \"certified-operators-95q59\" (UID: \"fdb34b07-ca7a-4ccd-8e89-8ca0a1c51ee1\") " pod="openshift-marketplace/certified-operators-95q59" Feb 17 16:08:47 crc kubenswrapper[4874]: I0217 16:08:47.561823 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb478\" (UniqueName: \"kubernetes.io/projected/fdb34b07-ca7a-4ccd-8e89-8ca0a1c51ee1-kube-api-access-fb478\") pod \"certified-operators-95q59\" (UID: \"fdb34b07-ca7a-4ccd-8e89-8ca0a1c51ee1\") " pod="openshift-marketplace/certified-operators-95q59" Feb 17 16:08:47 crc kubenswrapper[4874]: I0217 16:08:47.695629 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-95q59" Feb 17 16:08:48 crc kubenswrapper[4874]: I0217 16:08:48.155652 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-95q59"] Feb 17 16:08:48 crc kubenswrapper[4874]: I0217 16:08:48.202563 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-95q59" event={"ID":"fdb34b07-ca7a-4ccd-8e89-8ca0a1c51ee1","Type":"ContainerStarted","Data":"9a6a3bf7201d473a8036ddf0fea45fb467992193ad4d5018670ffdd470776e60"} Feb 17 16:08:48 crc kubenswrapper[4874]: I0217 16:08:48.205776 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6j6n7" event={"ID":"4018f0d2-92f6-4fb2-9055-09a94ebd95a2","Type":"ContainerStarted","Data":"3d8c62e0aa8a0a5fd86866e9e384fb2b863cc9361bac46fb5ed4e4eac2517590"} Feb 17 16:08:48 crc kubenswrapper[4874]: I0217 16:08:48.225455 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6j6n7" podStartSLOduration=2.77310823 podStartE2EDuration="5.225430399s" podCreationTimestamp="2026-02-17 16:08:43 +0000 UTC" firstStartedPulling="2026-02-17 16:08:45.16019976 +0000 UTC m=+335.454588351" lastFinishedPulling="2026-02-17 16:08:47.612521949 +0000 UTC m=+337.906910520" observedRunningTime="2026-02-17 16:08:48.223689644 +0000 UTC m=+338.518078225" watchObservedRunningTime="2026-02-17 16:08:48.225430399 +0000 UTC m=+338.519818980" Feb 17 16:08:49 crc kubenswrapper[4874]: I0217 16:08:49.212485 4874 generic.go:334] "Generic (PLEG): container finished" podID="053b3c4e-8d22-4a31-ba82-2c00f2bcf76f" containerID="4b8de624f5bb96db566974084665b6903493bd6f6d645d7245585c34d022b52d" exitCode=0 Feb 17 16:08:49 crc kubenswrapper[4874]: I0217 16:08:49.212560 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7h5mh" 
event={"ID":"053b3c4e-8d22-4a31-ba82-2c00f2bcf76f","Type":"ContainerDied","Data":"4b8de624f5bb96db566974084665b6903493bd6f6d645d7245585c34d022b52d"} Feb 17 16:08:49 crc kubenswrapper[4874]: I0217 16:08:49.214433 4874 generic.go:334] "Generic (PLEG): container finished" podID="fdb34b07-ca7a-4ccd-8e89-8ca0a1c51ee1" containerID="27d469b65443358aa63407a526893697c11e2cef374d914e0ba0f5de486af200" exitCode=0 Feb 17 16:08:49 crc kubenswrapper[4874]: I0217 16:08:49.214507 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-95q59" event={"ID":"fdb34b07-ca7a-4ccd-8e89-8ca0a1c51ee1","Type":"ContainerDied","Data":"27d469b65443358aa63407a526893697c11e2cef374d914e0ba0f5de486af200"} Feb 17 16:08:50 crc kubenswrapper[4874]: I0217 16:08:50.176500 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/27263f61-9512-43ef-9457-9864e5292d2a-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-7tfnh\" (UID: \"27263f61-9512-43ef-9457-9864e5292d2a\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-7tfnh" Feb 17 16:08:50 crc kubenswrapper[4874]: E0217 16:08:50.176652 4874 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 17 16:08:50 crc kubenswrapper[4874]: E0217 16:08:50.176993 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/27263f61-9512-43ef-9457-9864e5292d2a-tls-certificates podName:27263f61-9512-43ef-9457-9864e5292d2a nodeName:}" failed. No retries permitted until 2026-02-17 16:09:22.176968425 +0000 UTC m=+372.471356986 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/27263f61-9512-43ef-9457-9864e5292d2a-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-7tfnh" (UID: "27263f61-9512-43ef-9457-9864e5292d2a") : secret "prometheus-operator-admission-webhook-tls" not found Feb 17 16:08:50 crc kubenswrapper[4874]: I0217 16:08:50.223982 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7h5mh" event={"ID":"053b3c4e-8d22-4a31-ba82-2c00f2bcf76f","Type":"ContainerStarted","Data":"6db2f5e397e8122aafcaa03fd75f170ec865f104e757152078656a2197f0fdeb"} Feb 17 16:08:50 crc kubenswrapper[4874]: I0217 16:08:50.241299 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7h5mh" podStartSLOduration=2.797205578 podStartE2EDuration="5.241282923s" podCreationTimestamp="2026-02-17 16:08:45 +0000 UTC" firstStartedPulling="2026-02-17 16:08:47.189319297 +0000 UTC m=+337.483707868" lastFinishedPulling="2026-02-17 16:08:49.633396652 +0000 UTC m=+339.927785213" observedRunningTime="2026-02-17 16:08:50.239748833 +0000 UTC m=+340.534137394" watchObservedRunningTime="2026-02-17 16:08:50.241282923 +0000 UTC m=+340.535671484" Feb 17 16:08:54 crc kubenswrapper[4874]: I0217 16:08:54.255144 4874 generic.go:334] "Generic (PLEG): container finished" podID="fdb34b07-ca7a-4ccd-8e89-8ca0a1c51ee1" containerID="167ef2d22fff047854f5695a6b675ea7be7dcc9f80c8ce213beb3eb4bc25b559" exitCode=0 Feb 17 16:08:54 crc kubenswrapper[4874]: I0217 16:08:54.255257 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-95q59" event={"ID":"fdb34b07-ca7a-4ccd-8e89-8ca0a1c51ee1","Type":"ContainerDied","Data":"167ef2d22fff047854f5695a6b675ea7be7dcc9f80c8ce213beb3eb4bc25b559"} Feb 17 16:08:54 crc kubenswrapper[4874]: I0217 16:08:54.332895 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-6j6n7" Feb 17 16:08:54 crc kubenswrapper[4874]: I0217 16:08:54.333236 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6j6n7" Feb 17 16:08:54 crc kubenswrapper[4874]: I0217 16:08:54.408365 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6j6n7" Feb 17 16:08:54 crc kubenswrapper[4874]: I0217 16:08:54.500570 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v9f4j" Feb 17 16:08:54 crc kubenswrapper[4874]: I0217 16:08:54.500939 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v9f4j" Feb 17 16:08:54 crc kubenswrapper[4874]: I0217 16:08:54.566512 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v9f4j" Feb 17 16:08:55 crc kubenswrapper[4874]: I0217 16:08:55.274744 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-95q59" event={"ID":"fdb34b07-ca7a-4ccd-8e89-8ca0a1c51ee1","Type":"ContainerStarted","Data":"0aa1c1c2b8c4703d21949a880b91668c7c5f0ff4df635af3cbaeda73e3eceb14"} Feb 17 16:08:55 crc kubenswrapper[4874]: I0217 16:08:55.331726 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6j6n7" Feb 17 16:08:55 crc kubenswrapper[4874]: I0217 16:08:55.339123 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v9f4j" Feb 17 16:08:55 crc kubenswrapper[4874]: I0217 16:08:55.350957 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-95q59" podStartSLOduration=2.890527339 podStartE2EDuration="8.350939324s" podCreationTimestamp="2026-02-17 16:08:47 
+0000 UTC" firstStartedPulling="2026-02-17 16:08:49.21596236 +0000 UTC m=+339.510350921" lastFinishedPulling="2026-02-17 16:08:54.676374305 +0000 UTC m=+344.970762906" observedRunningTime="2026-02-17 16:08:55.298375221 +0000 UTC m=+345.592763862" watchObservedRunningTime="2026-02-17 16:08:55.350939324 +0000 UTC m=+345.645327895" Feb 17 16:08:56 crc kubenswrapper[4874]: I0217 16:08:56.306847 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7h5mh" Feb 17 16:08:56 crc kubenswrapper[4874]: I0217 16:08:56.307062 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7h5mh" Feb 17 16:08:56 crc kubenswrapper[4874]: I0217 16:08:56.367047 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7h5mh" Feb 17 16:08:57 crc kubenswrapper[4874]: I0217 16:08:57.361450 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7h5mh" Feb 17 16:08:57 crc kubenswrapper[4874]: I0217 16:08:57.696763 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-95q59" Feb 17 16:08:57 crc kubenswrapper[4874]: I0217 16:08:57.697361 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-95q59" Feb 17 16:08:57 crc kubenswrapper[4874]: I0217 16:08:57.744488 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-95q59" Feb 17 16:08:59 crc kubenswrapper[4874]: I0217 16:08:59.378838 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-95q59" Feb 17 16:09:21 crc kubenswrapper[4874]: I0217 16:09:21.741586 4874 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-image-registry/image-registry-66df7c8f76-mkpfd"] Feb 17 16:09:21 crc kubenswrapper[4874]: I0217 16:09:21.743222 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-mkpfd" Feb 17 16:09:21 crc kubenswrapper[4874]: I0217 16:09:21.759622 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mkpfd"] Feb 17 16:09:21 crc kubenswrapper[4874]: I0217 16:09:21.934138 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cf18ce27-da52-4ec7-988d-02b42d98fc00-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mkpfd\" (UID: \"cf18ce27-da52-4ec7-988d-02b42d98fc00\") " pod="openshift-image-registry/image-registry-66df7c8f76-mkpfd" Feb 17 16:09:21 crc kubenswrapper[4874]: I0217 16:09:21.934220 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cf18ce27-da52-4ec7-988d-02b42d98fc00-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mkpfd\" (UID: \"cf18ce27-da52-4ec7-988d-02b42d98fc00\") " pod="openshift-image-registry/image-registry-66df7c8f76-mkpfd" Feb 17 16:09:21 crc kubenswrapper[4874]: I0217 16:09:21.934294 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cf18ce27-da52-4ec7-988d-02b42d98fc00-registry-tls\") pod \"image-registry-66df7c8f76-mkpfd\" (UID: \"cf18ce27-da52-4ec7-988d-02b42d98fc00\") " pod="openshift-image-registry/image-registry-66df7c8f76-mkpfd" Feb 17 16:09:21 crc kubenswrapper[4874]: I0217 16:09:21.934390 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/cf18ce27-da52-4ec7-988d-02b42d98fc00-bound-sa-token\") pod \"image-registry-66df7c8f76-mkpfd\" (UID: \"cf18ce27-da52-4ec7-988d-02b42d98fc00\") " pod="openshift-image-registry/image-registry-66df7c8f76-mkpfd" Feb 17 16:09:21 crc kubenswrapper[4874]: I0217 16:09:21.934573 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cf18ce27-da52-4ec7-988d-02b42d98fc00-registry-certificates\") pod \"image-registry-66df7c8f76-mkpfd\" (UID: \"cf18ce27-da52-4ec7-988d-02b42d98fc00\") " pod="openshift-image-registry/image-registry-66df7c8f76-mkpfd" Feb 17 16:09:21 crc kubenswrapper[4874]: I0217 16:09:21.934719 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrwrq\" (UniqueName: \"kubernetes.io/projected/cf18ce27-da52-4ec7-988d-02b42d98fc00-kube-api-access-xrwrq\") pod \"image-registry-66df7c8f76-mkpfd\" (UID: \"cf18ce27-da52-4ec7-988d-02b42d98fc00\") " pod="openshift-image-registry/image-registry-66df7c8f76-mkpfd" Feb 17 16:09:21 crc kubenswrapper[4874]: I0217 16:09:21.934791 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cf18ce27-da52-4ec7-988d-02b42d98fc00-trusted-ca\") pod \"image-registry-66df7c8f76-mkpfd\" (UID: \"cf18ce27-da52-4ec7-988d-02b42d98fc00\") " pod="openshift-image-registry/image-registry-66df7c8f76-mkpfd" Feb 17 16:09:21 crc kubenswrapper[4874]: I0217 16:09:21.934859 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-mkpfd\" (UID: \"cf18ce27-da52-4ec7-988d-02b42d98fc00\") " pod="openshift-image-registry/image-registry-66df7c8f76-mkpfd" Feb 17 
16:09:21 crc kubenswrapper[4874]: I0217 16:09:21.977282 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-mkpfd\" (UID: \"cf18ce27-da52-4ec7-988d-02b42d98fc00\") " pod="openshift-image-registry/image-registry-66df7c8f76-mkpfd" Feb 17 16:09:22 crc kubenswrapper[4874]: I0217 16:09:22.036259 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrwrq\" (UniqueName: \"kubernetes.io/projected/cf18ce27-da52-4ec7-988d-02b42d98fc00-kube-api-access-xrwrq\") pod \"image-registry-66df7c8f76-mkpfd\" (UID: \"cf18ce27-da52-4ec7-988d-02b42d98fc00\") " pod="openshift-image-registry/image-registry-66df7c8f76-mkpfd" Feb 17 16:09:22 crc kubenswrapper[4874]: I0217 16:09:22.036330 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cf18ce27-da52-4ec7-988d-02b42d98fc00-trusted-ca\") pod \"image-registry-66df7c8f76-mkpfd\" (UID: \"cf18ce27-da52-4ec7-988d-02b42d98fc00\") " pod="openshift-image-registry/image-registry-66df7c8f76-mkpfd" Feb 17 16:09:22 crc kubenswrapper[4874]: I0217 16:09:22.036407 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cf18ce27-da52-4ec7-988d-02b42d98fc00-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mkpfd\" (UID: \"cf18ce27-da52-4ec7-988d-02b42d98fc00\") " pod="openshift-image-registry/image-registry-66df7c8f76-mkpfd" Feb 17 16:09:22 crc kubenswrapper[4874]: I0217 16:09:22.036471 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cf18ce27-da52-4ec7-988d-02b42d98fc00-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mkpfd\" 
(UID: \"cf18ce27-da52-4ec7-988d-02b42d98fc00\") " pod="openshift-image-registry/image-registry-66df7c8f76-mkpfd" Feb 17 16:09:22 crc kubenswrapper[4874]: I0217 16:09:22.036548 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cf18ce27-da52-4ec7-988d-02b42d98fc00-registry-tls\") pod \"image-registry-66df7c8f76-mkpfd\" (UID: \"cf18ce27-da52-4ec7-988d-02b42d98fc00\") " pod="openshift-image-registry/image-registry-66df7c8f76-mkpfd" Feb 17 16:09:22 crc kubenswrapper[4874]: I0217 16:09:22.036615 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cf18ce27-da52-4ec7-988d-02b42d98fc00-bound-sa-token\") pod \"image-registry-66df7c8f76-mkpfd\" (UID: \"cf18ce27-da52-4ec7-988d-02b42d98fc00\") " pod="openshift-image-registry/image-registry-66df7c8f76-mkpfd" Feb 17 16:09:22 crc kubenswrapper[4874]: I0217 16:09:22.036676 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cf18ce27-da52-4ec7-988d-02b42d98fc00-registry-certificates\") pod \"image-registry-66df7c8f76-mkpfd\" (UID: \"cf18ce27-da52-4ec7-988d-02b42d98fc00\") " pod="openshift-image-registry/image-registry-66df7c8f76-mkpfd" Feb 17 16:09:22 crc kubenswrapper[4874]: I0217 16:09:22.037898 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cf18ce27-da52-4ec7-988d-02b42d98fc00-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mkpfd\" (UID: \"cf18ce27-da52-4ec7-988d-02b42d98fc00\") " pod="openshift-image-registry/image-registry-66df7c8f76-mkpfd" Feb 17 16:09:22 crc kubenswrapper[4874]: I0217 16:09:22.039533 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/cf18ce27-da52-4ec7-988d-02b42d98fc00-registry-certificates\") pod \"image-registry-66df7c8f76-mkpfd\" (UID: \"cf18ce27-da52-4ec7-988d-02b42d98fc00\") " pod="openshift-image-registry/image-registry-66df7c8f76-mkpfd" Feb 17 16:09:22 crc kubenswrapper[4874]: I0217 16:09:22.048195 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cf18ce27-da52-4ec7-988d-02b42d98fc00-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mkpfd\" (UID: \"cf18ce27-da52-4ec7-988d-02b42d98fc00\") " pod="openshift-image-registry/image-registry-66df7c8f76-mkpfd" Feb 17 16:09:22 crc kubenswrapper[4874]: I0217 16:09:22.051486 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cf18ce27-da52-4ec7-988d-02b42d98fc00-trusted-ca\") pod \"image-registry-66df7c8f76-mkpfd\" (UID: \"cf18ce27-da52-4ec7-988d-02b42d98fc00\") " pod="openshift-image-registry/image-registry-66df7c8f76-mkpfd" Feb 17 16:09:22 crc kubenswrapper[4874]: I0217 16:09:22.052928 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cf18ce27-da52-4ec7-988d-02b42d98fc00-registry-tls\") pod \"image-registry-66df7c8f76-mkpfd\" (UID: \"cf18ce27-da52-4ec7-988d-02b42d98fc00\") " pod="openshift-image-registry/image-registry-66df7c8f76-mkpfd" Feb 17 16:09:22 crc kubenswrapper[4874]: I0217 16:09:22.070547 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cf18ce27-da52-4ec7-988d-02b42d98fc00-bound-sa-token\") pod \"image-registry-66df7c8f76-mkpfd\" (UID: \"cf18ce27-da52-4ec7-988d-02b42d98fc00\") " pod="openshift-image-registry/image-registry-66df7c8f76-mkpfd" Feb 17 16:09:22 crc kubenswrapper[4874]: I0217 16:09:22.070664 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xrwrq\" (UniqueName: \"kubernetes.io/projected/cf18ce27-da52-4ec7-988d-02b42d98fc00-kube-api-access-xrwrq\") pod \"image-registry-66df7c8f76-mkpfd\" (UID: \"cf18ce27-da52-4ec7-988d-02b42d98fc00\") " pod="openshift-image-registry/image-registry-66df7c8f76-mkpfd" Feb 17 16:09:22 crc kubenswrapper[4874]: I0217 16:09:22.073987 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-mkpfd" Feb 17 16:09:22 crc kubenswrapper[4874]: I0217 16:09:22.240983 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/27263f61-9512-43ef-9457-9864e5292d2a-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-7tfnh\" (UID: \"27263f61-9512-43ef-9457-9864e5292d2a\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-7tfnh" Feb 17 16:09:22 crc kubenswrapper[4874]: I0217 16:09:22.244579 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/27263f61-9512-43ef-9457-9864e5292d2a-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-7tfnh\" (UID: \"27263f61-9512-43ef-9457-9864e5292d2a\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-7tfnh" Feb 17 16:09:22 crc kubenswrapper[4874]: I0217 16:09:22.316596 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-7tfnh" Feb 17 16:09:22 crc kubenswrapper[4874]: I0217 16:09:22.509609 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mkpfd"] Feb 17 16:09:22 crc kubenswrapper[4874]: W0217 16:09:22.513933 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf18ce27_da52_4ec7_988d_02b42d98fc00.slice/crio-d751ea8ad285ff84c093cc4eb22425c8536587ed07d95ac976b263ef1d34dc5c WatchSource:0}: Error finding container d751ea8ad285ff84c093cc4eb22425c8536587ed07d95ac976b263ef1d34dc5c: Status 404 returned error can't find the container with id d751ea8ad285ff84c093cc4eb22425c8536587ed07d95ac976b263ef1d34dc5c Feb 17 16:09:22 crc kubenswrapper[4874]: I0217 16:09:22.791750 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-7tfnh"] Feb 17 16:09:22 crc kubenswrapper[4874]: W0217 16:09:22.800746 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27263f61_9512_43ef_9457_9864e5292d2a.slice/crio-957335a2bdda752bfa75b538f27f779a84a950997193276ad940240e955d9122 WatchSource:0}: Error finding container 957335a2bdda752bfa75b538f27f779a84a950997193276ad940240e955d9122: Status 404 returned error can't find the container with id 957335a2bdda752bfa75b538f27f779a84a950997193276ad940240e955d9122 Feb 17 16:09:23 crc kubenswrapper[4874]: I0217 16:09:23.455473 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-7tfnh" event={"ID":"27263f61-9512-43ef-9457-9864e5292d2a","Type":"ContainerStarted","Data":"957335a2bdda752bfa75b538f27f779a84a950997193276ad940240e955d9122"} Feb 17 16:09:23 crc kubenswrapper[4874]: I0217 16:09:23.457790 4874 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-mkpfd" event={"ID":"cf18ce27-da52-4ec7-988d-02b42d98fc00","Type":"ContainerStarted","Data":"37ccdc3ce6706af9d2d07e77ffcc512b3499a29e17ba405e38503966872f1146"} Feb 17 16:09:23 crc kubenswrapper[4874]: I0217 16:09:23.457852 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-mkpfd" event={"ID":"cf18ce27-da52-4ec7-988d-02b42d98fc00","Type":"ContainerStarted","Data":"d751ea8ad285ff84c093cc4eb22425c8536587ed07d95ac976b263ef1d34dc5c"} Feb 17 16:09:23 crc kubenswrapper[4874]: I0217 16:09:23.458019 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-mkpfd" Feb 17 16:09:23 crc kubenswrapper[4874]: I0217 16:09:23.498313 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-mkpfd" podStartSLOduration=2.4982941260000002 podStartE2EDuration="2.498294126s" podCreationTimestamp="2026-02-17 16:09:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:09:23.492431694 +0000 UTC m=+373.786820335" watchObservedRunningTime="2026-02-17 16:09:23.498294126 +0000 UTC m=+373.792682697" Feb 17 16:09:24 crc kubenswrapper[4874]: I0217 16:09:24.470645 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-7tfnh" event={"ID":"27263f61-9512-43ef-9457-9864e5292d2a","Type":"ContainerStarted","Data":"92923d48397d752fba36023f40766f8c97c2a8743441e5f0386e8f13ffe40cab"} Feb 17 16:09:24 crc kubenswrapper[4874]: I0217 16:09:24.483668 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-7tfnh" podStartSLOduration=65.210189876 
podStartE2EDuration="1m6.483646932s" podCreationTimestamp="2026-02-17 16:08:18 +0000 UTC" firstStartedPulling="2026-02-17 16:09:22.804316404 +0000 UTC m=+373.098704965" lastFinishedPulling="2026-02-17 16:09:24.07777343 +0000 UTC m=+374.372162021" observedRunningTime="2026-02-17 16:09:24.483017106 +0000 UTC m=+374.777405707" watchObservedRunningTime="2026-02-17 16:09:24.483646932 +0000 UTC m=+374.778035523" Feb 17 16:09:25 crc kubenswrapper[4874]: I0217 16:09:25.469960 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-7tfnh" Feb 17 16:09:25 crc kubenswrapper[4874]: I0217 16:09:25.475645 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-7tfnh" Feb 17 16:09:26 crc kubenswrapper[4874]: I0217 16:09:26.496900 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-jmmww"] Feb 17 16:09:26 crc kubenswrapper[4874]: I0217 16:09:26.497863 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-jmmww" Feb 17 16:09:26 crc kubenswrapper[4874]: I0217 16:09:26.505499 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Feb 17 16:09:26 crc kubenswrapper[4874]: I0217 16:09:26.505923 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Feb 17 16:09:26 crc kubenswrapper[4874]: I0217 16:09:26.506021 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-4l268" Feb 17 16:09:26 crc kubenswrapper[4874]: I0217 16:09:26.508338 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 17 16:09:26 crc kubenswrapper[4874]: I0217 16:09:26.533642 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-jmmww"] Feb 17 16:09:26 crc kubenswrapper[4874]: I0217 16:09:26.606203 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/899f7ae1-f414-4236-939e-069d4e483c75-metrics-client-ca\") pod \"prometheus-operator-db54df47d-jmmww\" (UID: \"899f7ae1-f414-4236-939e-069d4e483c75\") " pod="openshift-monitoring/prometheus-operator-db54df47d-jmmww" Feb 17 16:09:26 crc kubenswrapper[4874]: I0217 16:09:26.606379 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/899f7ae1-f414-4236-939e-069d4e483c75-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-jmmww\" (UID: \"899f7ae1-f414-4236-939e-069d4e483c75\") " pod="openshift-monitoring/prometheus-operator-db54df47d-jmmww" Feb 17 16:09:26 crc kubenswrapper[4874]: I0217 16:09:26.606426 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7675k\" (UniqueName: \"kubernetes.io/projected/899f7ae1-f414-4236-939e-069d4e483c75-kube-api-access-7675k\") pod \"prometheus-operator-db54df47d-jmmww\" (UID: \"899f7ae1-f414-4236-939e-069d4e483c75\") " pod="openshift-monitoring/prometheus-operator-db54df47d-jmmww" Feb 17 16:09:26 crc kubenswrapper[4874]: I0217 16:09:26.606497 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/899f7ae1-f414-4236-939e-069d4e483c75-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-jmmww\" (UID: \"899f7ae1-f414-4236-939e-069d4e483c75\") " pod="openshift-monitoring/prometheus-operator-db54df47d-jmmww" Feb 17 16:09:26 crc kubenswrapper[4874]: I0217 16:09:26.707467 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/899f7ae1-f414-4236-939e-069d4e483c75-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-jmmww\" (UID: \"899f7ae1-f414-4236-939e-069d4e483c75\") " pod="openshift-monitoring/prometheus-operator-db54df47d-jmmww" Feb 17 16:09:26 crc kubenswrapper[4874]: I0217 16:09:26.707519 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7675k\" (UniqueName: \"kubernetes.io/projected/899f7ae1-f414-4236-939e-069d4e483c75-kube-api-access-7675k\") pod \"prometheus-operator-db54df47d-jmmww\" (UID: \"899f7ae1-f414-4236-939e-069d4e483c75\") " pod="openshift-monitoring/prometheus-operator-db54df47d-jmmww" Feb 17 16:09:26 crc kubenswrapper[4874]: I0217 16:09:26.707554 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/899f7ae1-f414-4236-939e-069d4e483c75-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-jmmww\" (UID: \"899f7ae1-f414-4236-939e-069d4e483c75\") " pod="openshift-monitoring/prometheus-operator-db54df47d-jmmww" Feb 17 16:09:26 crc kubenswrapper[4874]: I0217 16:09:26.707588 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/899f7ae1-f414-4236-939e-069d4e483c75-metrics-client-ca\") pod \"prometheus-operator-db54df47d-jmmww\" (UID: \"899f7ae1-f414-4236-939e-069d4e483c75\") " pod="openshift-monitoring/prometheus-operator-db54df47d-jmmww" Feb 17 16:09:26 crc kubenswrapper[4874]: E0217 16:09:26.707705 4874 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 17 16:09:26 crc kubenswrapper[4874]: E0217 16:09:26.707786 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/899f7ae1-f414-4236-939e-069d4e483c75-prometheus-operator-tls podName:899f7ae1-f414-4236-939e-069d4e483c75 nodeName:}" failed. No retries permitted until 2026-02-17 16:09:27.207767914 +0000 UTC m=+377.502156475 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/899f7ae1-f414-4236-939e-069d4e483c75-prometheus-operator-tls") pod "prometheus-operator-db54df47d-jmmww" (UID: "899f7ae1-f414-4236-939e-069d4e483c75") : secret "prometheus-operator-tls" not found Feb 17 16:09:26 crc kubenswrapper[4874]: I0217 16:09:26.708704 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/899f7ae1-f414-4236-939e-069d4e483c75-metrics-client-ca\") pod \"prometheus-operator-db54df47d-jmmww\" (UID: \"899f7ae1-f414-4236-939e-069d4e483c75\") " pod="openshift-monitoring/prometheus-operator-db54df47d-jmmww" Feb 17 16:09:26 crc kubenswrapper[4874]: I0217 16:09:26.713373 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/899f7ae1-f414-4236-939e-069d4e483c75-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-jmmww\" (UID: \"899f7ae1-f414-4236-939e-069d4e483c75\") " pod="openshift-monitoring/prometheus-operator-db54df47d-jmmww" Feb 17 16:09:26 crc kubenswrapper[4874]: I0217 16:09:26.725669 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7675k\" (UniqueName: \"kubernetes.io/projected/899f7ae1-f414-4236-939e-069d4e483c75-kube-api-access-7675k\") pod \"prometheus-operator-db54df47d-jmmww\" (UID: \"899f7ae1-f414-4236-939e-069d4e483c75\") " pod="openshift-monitoring/prometheus-operator-db54df47d-jmmww" Feb 17 16:09:27 crc kubenswrapper[4874]: I0217 16:09:27.215196 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/899f7ae1-f414-4236-939e-069d4e483c75-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-jmmww\" (UID: \"899f7ae1-f414-4236-939e-069d4e483c75\") " 
pod="openshift-monitoring/prometheus-operator-db54df47d-jmmww" Feb 17 16:09:27 crc kubenswrapper[4874]: I0217 16:09:27.220706 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/899f7ae1-f414-4236-939e-069d4e483c75-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-jmmww\" (UID: \"899f7ae1-f414-4236-939e-069d4e483c75\") " pod="openshift-monitoring/prometheus-operator-db54df47d-jmmww" Feb 17 16:09:27 crc kubenswrapper[4874]: I0217 16:09:27.418260 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-jmmww" Feb 17 16:09:27 crc kubenswrapper[4874]: I0217 16:09:27.725542 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:09:27 crc kubenswrapper[4874]: I0217 16:09:27.725859 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:09:27 crc kubenswrapper[4874]: I0217 16:09:27.933645 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-jmmww"] Feb 17 16:09:27 crc kubenswrapper[4874]: W0217 16:09:27.937088 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod899f7ae1_f414_4236_939e_069d4e483c75.slice/crio-59afdc707b5af79091f4e92fd65ba239b6f8832964c157f9e4f7bb8da947cd3d WatchSource:0}: Error finding container 
59afdc707b5af79091f4e92fd65ba239b6f8832964c157f9e4f7bb8da947cd3d: Status 404 returned error can't find the container with id 59afdc707b5af79091f4e92fd65ba239b6f8832964c157f9e4f7bb8da947cd3d Feb 17 16:09:28 crc kubenswrapper[4874]: I0217 16:09:28.487109 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-jmmww" event={"ID":"899f7ae1-f414-4236-939e-069d4e483c75","Type":"ContainerStarted","Data":"59afdc707b5af79091f4e92fd65ba239b6f8832964c157f9e4f7bb8da947cd3d"} Feb 17 16:09:29 crc kubenswrapper[4874]: I0217 16:09:29.500248 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-jmmww" event={"ID":"899f7ae1-f414-4236-939e-069d4e483c75","Type":"ContainerStarted","Data":"f2bf37a8019b17d6d28e6a9016a60f53daba47f2eaccb690e8e09fbd44e4ee1a"} Feb 17 16:09:30 crc kubenswrapper[4874]: I0217 16:09:30.510048 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-jmmww" event={"ID":"899f7ae1-f414-4236-939e-069d4e483c75","Type":"ContainerStarted","Data":"2c726b6d48565c83379aadb3513a000948b97c6e7b19ee309e1462b3c207d3e7"} Feb 17 16:09:30 crc kubenswrapper[4874]: I0217 16:09:30.539900 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-jmmww" podStartSLOduration=3.164149748 podStartE2EDuration="4.539871595s" podCreationTimestamp="2026-02-17 16:09:26 +0000 UTC" firstStartedPulling="2026-02-17 16:09:27.938880222 +0000 UTC m=+378.233268783" lastFinishedPulling="2026-02-17 16:09:29.314602069 +0000 UTC m=+379.608990630" observedRunningTime="2026-02-17 16:09:30.533290054 +0000 UTC m=+380.827678705" watchObservedRunningTime="2026-02-17 16:09:30.539871595 +0000 UTC m=+380.834260166" Feb 17 16:09:32 crc kubenswrapper[4874]: I0217 16:09:32.888248 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-pqd66"] Feb 17 
16:09:32 crc kubenswrapper[4874]: I0217 16:09:32.889863 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-pqd66" Feb 17 16:09:32 crc kubenswrapper[4874]: I0217 16:09:32.891268 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Feb 17 16:09:32 crc kubenswrapper[4874]: I0217 16:09:32.891753 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 17 16:09:32 crc kubenswrapper[4874]: I0217 16:09:32.903825 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-b7wrv" Feb 17 16:09:32 crc kubenswrapper[4874]: I0217 16:09:32.908901 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-f8lw4"] Feb 17 16:09:32 crc kubenswrapper[4874]: I0217 16:09:32.910032 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-f8lw4" Feb 17 16:09:32 crc kubenswrapper[4874]: I0217 16:09:32.911837 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-9c65k"] Feb 17 16:09:32 crc kubenswrapper[4874]: I0217 16:09:32.912430 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Feb 17 16:09:32 crc kubenswrapper[4874]: I0217 16:09:32.912516 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 17 16:09:32 crc kubenswrapper[4874]: I0217 16:09:32.912990 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-ppsk7" Feb 17 16:09:32 crc kubenswrapper[4874]: I0217 16:09:32.913425 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9c65k" Feb 17 16:09:32 crc kubenswrapper[4874]: I0217 16:09:32.915136 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-rgfn8" Feb 17 16:09:32 crc kubenswrapper[4874]: I0217 16:09:32.915312 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Feb 17 16:09:32 crc kubenswrapper[4874]: I0217 16:09:32.916102 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 17 16:09:32 crc kubenswrapper[4874]: I0217 16:09:32.916224 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Feb 17 16:09:32 crc kubenswrapper[4874]: I0217 16:09:32.931809 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/071a9734-759a-44a4-b490-5cb1ef6838f3-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-9c65k\" (UID: \"071a9734-759a-44a4-b490-5cb1ef6838f3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9c65k" Feb 17 16:09:32 crc kubenswrapper[4874]: I0217 16:09:32.931850 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2e987646-9c60-4bae-8352-a7f4136053d7-sys\") pod \"node-exporter-pqd66\" (UID: \"2e987646-9c60-4bae-8352-a7f4136053d7\") " pod="openshift-monitoring/node-exporter-pqd66" Feb 17 16:09:32 crc kubenswrapper[4874]: I0217 16:09:32.931875 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/2e987646-9c60-4bae-8352-a7f4136053d7-node-exporter-tls\") pod \"node-exporter-pqd66\" (UID: 
\"2e987646-9c60-4bae-8352-a7f4136053d7\") " pod="openshift-monitoring/node-exporter-pqd66" Feb 17 16:09:32 crc kubenswrapper[4874]: I0217 16:09:32.931900 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmt7t\" (UniqueName: \"kubernetes.io/projected/071a9734-759a-44a4-b490-5cb1ef6838f3-kube-api-access-cmt7t\") pod \"kube-state-metrics-777cb5bd5d-9c65k\" (UID: \"071a9734-759a-44a4-b490-5cb1ef6838f3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9c65k" Feb 17 16:09:32 crc kubenswrapper[4874]: I0217 16:09:32.931963 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/071a9734-759a-44a4-b490-5cb1ef6838f3-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-9c65k\" (UID: \"071a9734-759a-44a4-b490-5cb1ef6838f3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9c65k" Feb 17 16:09:32 crc kubenswrapper[4874]: I0217 16:09:32.932013 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/071a9734-759a-44a4-b490-5cb1ef6838f3-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-9c65k\" (UID: \"071a9734-759a-44a4-b490-5cb1ef6838f3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9c65k" Feb 17 16:09:32 crc kubenswrapper[4874]: I0217 16:09:32.932219 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/071a9734-759a-44a4-b490-5cb1ef6838f3-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-9c65k\" (UID: \"071a9734-759a-44a4-b490-5cb1ef6838f3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9c65k" Feb 17 16:09:32 crc kubenswrapper[4874]: I0217 16:09:32.932269 4874 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/2e987646-9c60-4bae-8352-a7f4136053d7-root\") pod \"node-exporter-pqd66\" (UID: \"2e987646-9c60-4bae-8352-a7f4136053d7\") " pod="openshift-monitoring/node-exporter-pqd66" Feb 17 16:09:32 crc kubenswrapper[4874]: I0217 16:09:32.932289 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2e987646-9c60-4bae-8352-a7f4136053d7-metrics-client-ca\") pod \"node-exporter-pqd66\" (UID: \"2e987646-9c60-4bae-8352-a7f4136053d7\") " pod="openshift-monitoring/node-exporter-pqd66" Feb 17 16:09:32 crc kubenswrapper[4874]: I0217 16:09:32.932328 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6kgd\" (UniqueName: \"kubernetes.io/projected/2e987646-9c60-4bae-8352-a7f4136053d7-kube-api-access-p6kgd\") pod \"node-exporter-pqd66\" (UID: \"2e987646-9c60-4bae-8352-a7f4136053d7\") " pod="openshift-monitoring/node-exporter-pqd66" Feb 17 16:09:32 crc kubenswrapper[4874]: I0217 16:09:32.932346 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/64de85cf-0328-4726-9609-30fbd6acdf09-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-f8lw4\" (UID: \"64de85cf-0328-4726-9609-30fbd6acdf09\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-f8lw4" Feb 17 16:09:32 crc kubenswrapper[4874]: I0217 16:09:32.932369 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/2e987646-9c60-4bae-8352-a7f4136053d7-node-exporter-wtmp\") pod \"node-exporter-pqd66\" (UID: \"2e987646-9c60-4bae-8352-a7f4136053d7\") 
" pod="openshift-monitoring/node-exporter-pqd66" Feb 17 16:09:32 crc kubenswrapper[4874]: I0217 16:09:32.932407 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p45lt\" (UniqueName: \"kubernetes.io/projected/64de85cf-0328-4726-9609-30fbd6acdf09-kube-api-access-p45lt\") pod \"openshift-state-metrics-566fddb674-f8lw4\" (UID: \"64de85cf-0328-4726-9609-30fbd6acdf09\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-f8lw4" Feb 17 16:09:32 crc kubenswrapper[4874]: I0217 16:09:32.932459 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/64de85cf-0328-4726-9609-30fbd6acdf09-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-f8lw4\" (UID: \"64de85cf-0328-4726-9609-30fbd6acdf09\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-f8lw4" Feb 17 16:09:32 crc kubenswrapper[4874]: I0217 16:09:32.932478 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/64de85cf-0328-4726-9609-30fbd6acdf09-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-f8lw4\" (UID: \"64de85cf-0328-4726-9609-30fbd6acdf09\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-f8lw4" Feb 17 16:09:32 crc kubenswrapper[4874]: I0217 16:09:32.932500 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/071a9734-759a-44a4-b490-5cb1ef6838f3-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-9c65k\" (UID: \"071a9734-759a-44a4-b490-5cb1ef6838f3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9c65k" Feb 17 16:09:32 crc kubenswrapper[4874]: I0217 16:09:32.932532 4874 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/2e987646-9c60-4bae-8352-a7f4136053d7-node-exporter-textfile\") pod \"node-exporter-pqd66\" (UID: \"2e987646-9c60-4bae-8352-a7f4136053d7\") " pod="openshift-monitoring/node-exporter-pqd66" Feb 17 16:09:32 crc kubenswrapper[4874]: I0217 16:09:32.932579 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2e987646-9c60-4bae-8352-a7f4136053d7-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-pqd66\" (UID: \"2e987646-9c60-4bae-8352-a7f4136053d7\") " pod="openshift-monitoring/node-exporter-pqd66" Feb 17 16:09:32 crc kubenswrapper[4874]: I0217 16:09:32.935932 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-f8lw4"] Feb 17 16:09:32 crc kubenswrapper[4874]: I0217 16:09:32.938861 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-9c65k"] Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.033374 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/071a9734-759a-44a4-b490-5cb1ef6838f3-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-9c65k\" (UID: \"071a9734-759a-44a4-b490-5cb1ef6838f3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9c65k" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.033419 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2e987646-9c60-4bae-8352-a7f4136053d7-sys\") pod \"node-exporter-pqd66\" (UID: \"2e987646-9c60-4bae-8352-a7f4136053d7\") " pod="openshift-monitoring/node-exporter-pqd66" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 
16:09:33.033442 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/2e987646-9c60-4bae-8352-a7f4136053d7-node-exporter-tls\") pod \"node-exporter-pqd66\" (UID: \"2e987646-9c60-4bae-8352-a7f4136053d7\") " pod="openshift-monitoring/node-exporter-pqd66" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.033463 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmt7t\" (UniqueName: \"kubernetes.io/projected/071a9734-759a-44a4-b490-5cb1ef6838f3-kube-api-access-cmt7t\") pod \"kube-state-metrics-777cb5bd5d-9c65k\" (UID: \"071a9734-759a-44a4-b490-5cb1ef6838f3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9c65k" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.033483 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/071a9734-759a-44a4-b490-5cb1ef6838f3-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-9c65k\" (UID: \"071a9734-759a-44a4-b490-5cb1ef6838f3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9c65k" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.033500 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/071a9734-759a-44a4-b490-5cb1ef6838f3-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-9c65k\" (UID: \"071a9734-759a-44a4-b490-5cb1ef6838f3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9c65k" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.033518 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/071a9734-759a-44a4-b490-5cb1ef6838f3-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-9c65k\" (UID: 
\"071a9734-759a-44a4-b490-5cb1ef6838f3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9c65k" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.033534 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/2e987646-9c60-4bae-8352-a7f4136053d7-root\") pod \"node-exporter-pqd66\" (UID: \"2e987646-9c60-4bae-8352-a7f4136053d7\") " pod="openshift-monitoring/node-exporter-pqd66" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.033551 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2e987646-9c60-4bae-8352-a7f4136053d7-metrics-client-ca\") pod \"node-exporter-pqd66\" (UID: \"2e987646-9c60-4bae-8352-a7f4136053d7\") " pod="openshift-monitoring/node-exporter-pqd66" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.033571 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6kgd\" (UniqueName: \"kubernetes.io/projected/2e987646-9c60-4bae-8352-a7f4136053d7-kube-api-access-p6kgd\") pod \"node-exporter-pqd66\" (UID: \"2e987646-9c60-4bae-8352-a7f4136053d7\") " pod="openshift-monitoring/node-exporter-pqd66" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.033583 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2e987646-9c60-4bae-8352-a7f4136053d7-sys\") pod \"node-exporter-pqd66\" (UID: \"2e987646-9c60-4bae-8352-a7f4136053d7\") " pod="openshift-monitoring/node-exporter-pqd66" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.033586 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/64de85cf-0328-4726-9609-30fbd6acdf09-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-f8lw4\" (UID: 
\"64de85cf-0328-4726-9609-30fbd6acdf09\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-f8lw4" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.033669 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/2e987646-9c60-4bae-8352-a7f4136053d7-node-exporter-wtmp\") pod \"node-exporter-pqd66\" (UID: \"2e987646-9c60-4bae-8352-a7f4136053d7\") " pod="openshift-monitoring/node-exporter-pqd66" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.033683 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/2e987646-9c60-4bae-8352-a7f4136053d7-root\") pod \"node-exporter-pqd66\" (UID: \"2e987646-9c60-4bae-8352-a7f4136053d7\") " pod="openshift-monitoring/node-exporter-pqd66" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.033699 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p45lt\" (UniqueName: \"kubernetes.io/projected/64de85cf-0328-4726-9609-30fbd6acdf09-kube-api-access-p45lt\") pod \"openshift-state-metrics-566fddb674-f8lw4\" (UID: \"64de85cf-0328-4726-9609-30fbd6acdf09\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-f8lw4" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.033737 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/64de85cf-0328-4726-9609-30fbd6acdf09-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-f8lw4\" (UID: \"64de85cf-0328-4726-9609-30fbd6acdf09\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-f8lw4" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.033775 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/64de85cf-0328-4726-9609-30fbd6acdf09-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-f8lw4\" (UID: \"64de85cf-0328-4726-9609-30fbd6acdf09\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-f8lw4" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.033802 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/071a9734-759a-44a4-b490-5cb1ef6838f3-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-9c65k\" (UID: \"071a9734-759a-44a4-b490-5cb1ef6838f3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9c65k" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.033865 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/2e987646-9c60-4bae-8352-a7f4136053d7-node-exporter-textfile\") pod \"node-exporter-pqd66\" (UID: \"2e987646-9c60-4bae-8352-a7f4136053d7\") " pod="openshift-monitoring/node-exporter-pqd66" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.033873 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/2e987646-9c60-4bae-8352-a7f4136053d7-node-exporter-wtmp\") pod \"node-exporter-pqd66\" (UID: \"2e987646-9c60-4bae-8352-a7f4136053d7\") " pod="openshift-monitoring/node-exporter-pqd66" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.033891 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2e987646-9c60-4bae-8352-a7f4136053d7-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-pqd66\" (UID: \"2e987646-9c60-4bae-8352-a7f4136053d7\") " pod="openshift-monitoring/node-exporter-pqd66" Feb 17 16:09:33 crc kubenswrapper[4874]: E0217 
16:09:33.033946 4874 secret.go:188] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: secret "kube-state-metrics-tls" not found Feb 17 16:09:33 crc kubenswrapper[4874]: E0217 16:09:33.033959 4874 secret.go:188] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: secret "openshift-state-metrics-tls" not found Feb 17 16:09:33 crc kubenswrapper[4874]: E0217 16:09:33.033989 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071a9734-759a-44a4-b490-5cb1ef6838f3-kube-state-metrics-tls podName:071a9734-759a-44a4-b490-5cb1ef6838f3 nodeName:}" failed. No retries permitted until 2026-02-17 16:09:33.533972497 +0000 UTC m=+383.828361058 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/071a9734-759a-44a4-b490-5cb1ef6838f3-kube-state-metrics-tls") pod "kube-state-metrics-777cb5bd5d-9c65k" (UID: "071a9734-759a-44a4-b490-5cb1ef6838f3") : secret "kube-state-metrics-tls" not found Feb 17 16:09:33 crc kubenswrapper[4874]: E0217 16:09:33.034041 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64de85cf-0328-4726-9609-30fbd6acdf09-openshift-state-metrics-tls podName:64de85cf-0328-4726-9609-30fbd6acdf09 nodeName:}" failed. No retries permitted until 2026-02-17 16:09:33.534019029 +0000 UTC m=+383.828407580 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/64de85cf-0328-4726-9609-30fbd6acdf09-openshift-state-metrics-tls") pod "openshift-state-metrics-566fddb674-f8lw4" (UID: "64de85cf-0328-4726-9609-30fbd6acdf09") : secret "openshift-state-metrics-tls" not found Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.034456 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/071a9734-759a-44a4-b490-5cb1ef6838f3-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-9c65k\" (UID: \"071a9734-759a-44a4-b490-5cb1ef6838f3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9c65k" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.034483 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2e987646-9c60-4bae-8352-a7f4136053d7-metrics-client-ca\") pod \"node-exporter-pqd66\" (UID: \"2e987646-9c60-4bae-8352-a7f4136053d7\") " pod="openshift-monitoring/node-exporter-pqd66" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.034555 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/2e987646-9c60-4bae-8352-a7f4136053d7-node-exporter-textfile\") pod \"node-exporter-pqd66\" (UID: \"2e987646-9c60-4bae-8352-a7f4136053d7\") " pod="openshift-monitoring/node-exporter-pqd66" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.034804 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/071a9734-759a-44a4-b490-5cb1ef6838f3-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-9c65k\" (UID: \"071a9734-759a-44a4-b490-5cb1ef6838f3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9c65k" Feb 17 16:09:33 crc 
kubenswrapper[4874]: I0217 16:09:33.034842 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/64de85cf-0328-4726-9609-30fbd6acdf09-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-f8lw4\" (UID: \"64de85cf-0328-4726-9609-30fbd6acdf09\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-f8lw4" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.036512 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/071a9734-759a-44a4-b490-5cb1ef6838f3-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-9c65k\" (UID: \"071a9734-759a-44a4-b490-5cb1ef6838f3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9c65k" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.039570 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/64de85cf-0328-4726-9609-30fbd6acdf09-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-f8lw4\" (UID: \"64de85cf-0328-4726-9609-30fbd6acdf09\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-f8lw4" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.039579 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/2e987646-9c60-4bae-8352-a7f4136053d7-node-exporter-tls\") pod \"node-exporter-pqd66\" (UID: \"2e987646-9c60-4bae-8352-a7f4136053d7\") " pod="openshift-monitoring/node-exporter-pqd66" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.039613 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/071a9734-759a-44a4-b490-5cb1ef6838f3-kube-state-metrics-kube-rbac-proxy-config\") pod 
\"kube-state-metrics-777cb5bd5d-9c65k\" (UID: \"071a9734-759a-44a4-b490-5cb1ef6838f3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9c65k" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.040174 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2e987646-9c60-4bae-8352-a7f4136053d7-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-pqd66\" (UID: \"2e987646-9c60-4bae-8352-a7f4136053d7\") " pod="openshift-monitoring/node-exporter-pqd66" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.048362 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p45lt\" (UniqueName: \"kubernetes.io/projected/64de85cf-0328-4726-9609-30fbd6acdf09-kube-api-access-p45lt\") pod \"openshift-state-metrics-566fddb674-f8lw4\" (UID: \"64de85cf-0328-4726-9609-30fbd6acdf09\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-f8lw4" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.052584 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6kgd\" (UniqueName: \"kubernetes.io/projected/2e987646-9c60-4bae-8352-a7f4136053d7-kube-api-access-p6kgd\") pod \"node-exporter-pqd66\" (UID: \"2e987646-9c60-4bae-8352-a7f4136053d7\") " pod="openshift-monitoring/node-exporter-pqd66" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.055842 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmt7t\" (UniqueName: \"kubernetes.io/projected/071a9734-759a-44a4-b490-5cb1ef6838f3-kube-api-access-cmt7t\") pod \"kube-state-metrics-777cb5bd5d-9c65k\" (UID: \"071a9734-759a-44a4-b490-5cb1ef6838f3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9c65k" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.203340 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-pqd66" Feb 17 16:09:33 crc kubenswrapper[4874]: W0217 16:09:33.222312 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e987646_9c60_4bae_8352_a7f4136053d7.slice/crio-6ab137283277e4c7043e5ebd1474e166a2a56595deae3e6a9e4e540587bd0218 WatchSource:0}: Error finding container 6ab137283277e4c7043e5ebd1474e166a2a56595deae3e6a9e4e540587bd0218: Status 404 returned error can't find the container with id 6ab137283277e4c7043e5ebd1474e166a2a56595deae3e6a9e4e540587bd0218 Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.530419 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-pqd66" event={"ID":"2e987646-9c60-4bae-8352-a7f4136053d7","Type":"ContainerStarted","Data":"6ab137283277e4c7043e5ebd1474e166a2a56595deae3e6a9e4e540587bd0218"} Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.539612 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/071a9734-759a-44a4-b490-5cb1ef6838f3-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-9c65k\" (UID: \"071a9734-759a-44a4-b490-5cb1ef6838f3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9c65k" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.539686 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/64de85cf-0328-4726-9609-30fbd6acdf09-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-f8lw4\" (UID: \"64de85cf-0328-4726-9609-30fbd6acdf09\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-f8lw4" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.543490 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/071a9734-759a-44a4-b490-5cb1ef6838f3-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-9c65k\" (UID: \"071a9734-759a-44a4-b490-5cb1ef6838f3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9c65k" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.544296 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/64de85cf-0328-4726-9609-30fbd6acdf09-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-f8lw4\" (UID: \"64de85cf-0328-4726-9609-30fbd6acdf09\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-f8lw4" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.823858 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-f8lw4" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.835357 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9c65k" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.958905 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.975120 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.983046 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.983460 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.984693 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.984868 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.985011 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.985150 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.985363 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-pvsdc" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.988127 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Feb 17 16:09:33 crc kubenswrapper[4874]: I0217 16:09:33.996121 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.031151 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.147052 4874 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/137ce3c3-521e-4df1-8294-7900b32e2886-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"137ce3c3-521e-4df1-8294-7900b32e2886\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.147118 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/137ce3c3-521e-4df1-8294-7900b32e2886-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"137ce3c3-521e-4df1-8294-7900b32e2886\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.147143 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/137ce3c3-521e-4df1-8294-7900b32e2886-tls-assets\") pod \"alertmanager-main-0\" (UID: \"137ce3c3-521e-4df1-8294-7900b32e2886\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.147158 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjv8c\" (UniqueName: \"kubernetes.io/projected/137ce3c3-521e-4df1-8294-7900b32e2886-kube-api-access-tjv8c\") pod \"alertmanager-main-0\" (UID: \"137ce3c3-521e-4df1-8294-7900b32e2886\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.147175 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/137ce3c3-521e-4df1-8294-7900b32e2886-web-config\") pod \"alertmanager-main-0\" (UID: \"137ce3c3-521e-4df1-8294-7900b32e2886\") " 
pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.147195 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/137ce3c3-521e-4df1-8294-7900b32e2886-config-out\") pod \"alertmanager-main-0\" (UID: \"137ce3c3-521e-4df1-8294-7900b32e2886\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.147236 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/137ce3c3-521e-4df1-8294-7900b32e2886-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"137ce3c3-521e-4df1-8294-7900b32e2886\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.147399 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/137ce3c3-521e-4df1-8294-7900b32e2886-config-volume\") pod \"alertmanager-main-0\" (UID: \"137ce3c3-521e-4df1-8294-7900b32e2886\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.147456 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/137ce3c3-521e-4df1-8294-7900b32e2886-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"137ce3c3-521e-4df1-8294-7900b32e2886\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.147506 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/137ce3c3-521e-4df1-8294-7900b32e2886-alertmanager-trusted-ca-bundle\") 
pod \"alertmanager-main-0\" (UID: \"137ce3c3-521e-4df1-8294-7900b32e2886\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.147559 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/137ce3c3-521e-4df1-8294-7900b32e2886-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"137ce3c3-521e-4df1-8294-7900b32e2886\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.147586 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/137ce3c3-521e-4df1-8294-7900b32e2886-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"137ce3c3-521e-4df1-8294-7900b32e2886\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.248400 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/137ce3c3-521e-4df1-8294-7900b32e2886-config-volume\") pod \"alertmanager-main-0\" (UID: \"137ce3c3-521e-4df1-8294-7900b32e2886\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.248438 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/137ce3c3-521e-4df1-8294-7900b32e2886-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"137ce3c3-521e-4df1-8294-7900b32e2886\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.248470 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/137ce3c3-521e-4df1-8294-7900b32e2886-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"137ce3c3-521e-4df1-8294-7900b32e2886\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.248512 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/137ce3c3-521e-4df1-8294-7900b32e2886-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"137ce3c3-521e-4df1-8294-7900b32e2886\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.248535 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/137ce3c3-521e-4df1-8294-7900b32e2886-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"137ce3c3-521e-4df1-8294-7900b32e2886\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.248567 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/137ce3c3-521e-4df1-8294-7900b32e2886-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"137ce3c3-521e-4df1-8294-7900b32e2886\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.248597 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/137ce3c3-521e-4df1-8294-7900b32e2886-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"137ce3c3-521e-4df1-8294-7900b32e2886\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.248624 4874 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/137ce3c3-521e-4df1-8294-7900b32e2886-tls-assets\") pod \"alertmanager-main-0\" (UID: \"137ce3c3-521e-4df1-8294-7900b32e2886\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.248644 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjv8c\" (UniqueName: \"kubernetes.io/projected/137ce3c3-521e-4df1-8294-7900b32e2886-kube-api-access-tjv8c\") pod \"alertmanager-main-0\" (UID: \"137ce3c3-521e-4df1-8294-7900b32e2886\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.248665 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/137ce3c3-521e-4df1-8294-7900b32e2886-web-config\") pod \"alertmanager-main-0\" (UID: \"137ce3c3-521e-4df1-8294-7900b32e2886\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.248719 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/137ce3c3-521e-4df1-8294-7900b32e2886-config-out\") pod \"alertmanager-main-0\" (UID: \"137ce3c3-521e-4df1-8294-7900b32e2886\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.248869 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/137ce3c3-521e-4df1-8294-7900b32e2886-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"137ce3c3-521e-4df1-8294-7900b32e2886\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.249133 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/137ce3c3-521e-4df1-8294-7900b32e2886-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"137ce3c3-521e-4df1-8294-7900b32e2886\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.249738 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/137ce3c3-521e-4df1-8294-7900b32e2886-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"137ce3c3-521e-4df1-8294-7900b32e2886\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.249912 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/137ce3c3-521e-4df1-8294-7900b32e2886-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"137ce3c3-521e-4df1-8294-7900b32e2886\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.253392 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/137ce3c3-521e-4df1-8294-7900b32e2886-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"137ce3c3-521e-4df1-8294-7900b32e2886\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.253940 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/137ce3c3-521e-4df1-8294-7900b32e2886-tls-assets\") pod \"alertmanager-main-0\" (UID: \"137ce3c3-521e-4df1-8294-7900b32e2886\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.255739 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/137ce3c3-521e-4df1-8294-7900b32e2886-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"137ce3c3-521e-4df1-8294-7900b32e2886\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.255783 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/137ce3c3-521e-4df1-8294-7900b32e2886-config-volume\") pod \"alertmanager-main-0\" (UID: \"137ce3c3-521e-4df1-8294-7900b32e2886\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.255868 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/137ce3c3-521e-4df1-8294-7900b32e2886-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"137ce3c3-521e-4df1-8294-7900b32e2886\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.257928 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/137ce3c3-521e-4df1-8294-7900b32e2886-web-config\") pod \"alertmanager-main-0\" (UID: \"137ce3c3-521e-4df1-8294-7900b32e2886\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.259933 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/137ce3c3-521e-4df1-8294-7900b32e2886-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"137ce3c3-521e-4df1-8294-7900b32e2886\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.260359 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/137ce3c3-521e-4df1-8294-7900b32e2886-config-out\") pod \"alertmanager-main-0\" (UID: \"137ce3c3-521e-4df1-8294-7900b32e2886\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.264559 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjv8c\" (UniqueName: \"kubernetes.io/projected/137ce3c3-521e-4df1-8294-7900b32e2886-kube-api-access-tjv8c\") pod \"alertmanager-main-0\" (UID: \"137ce3c3-521e-4df1-8294-7900b32e2886\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.338681 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.387606 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-f8lw4"] Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.464877 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-9c65k"] Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.935104 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-8648d6cb6d-f2ztn"] Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.937035 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-8648d6cb6d-f2ztn" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.940729 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.940746 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.940759 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.941053 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-p6r2q" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.941190 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.941313 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.941382 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-c0oo18d9evtsh" Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.959813 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-8648d6cb6d-f2ztn"] Feb 17 16:09:34 crc kubenswrapper[4874]: I0217 16:09:34.997559 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 17 16:09:35 crc kubenswrapper[4874]: W0217 16:09:35.003665 4874 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod137ce3c3_521e_4df1_8294_7900b32e2886.slice/crio-54393ff28e213388fc6e6f77f8e95295183cfa971311d3d9c9aa02a51630cb94 WatchSource:0}: Error finding container 54393ff28e213388fc6e6f77f8e95295183cfa971311d3d9c9aa02a51630cb94: Status 404 returned error can't find the container with id 54393ff28e213388fc6e6f77f8e95295183cfa971311d3d9c9aa02a51630cb94 Feb 17 16:09:35 crc kubenswrapper[4874]: I0217 16:09:35.061304 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/f960b1c3-0d01-4d4a-afe9-647ad835f4ba-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-8648d6cb6d-f2ztn\" (UID: \"f960b1c3-0d01-4d4a-afe9-647ad835f4ba\") " pod="openshift-monitoring/thanos-querier-8648d6cb6d-f2ztn" Feb 17 16:09:35 crc kubenswrapper[4874]: I0217 16:09:35.061606 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/f960b1c3-0d01-4d4a-afe9-647ad835f4ba-secret-grpc-tls\") pod \"thanos-querier-8648d6cb6d-f2ztn\" (UID: \"f960b1c3-0d01-4d4a-afe9-647ad835f4ba\") " pod="openshift-monitoring/thanos-querier-8648d6cb6d-f2ztn" Feb 17 16:09:35 crc kubenswrapper[4874]: I0217 16:09:35.061631 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/f960b1c3-0d01-4d4a-afe9-647ad835f4ba-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-8648d6cb6d-f2ztn\" (UID: \"f960b1c3-0d01-4d4a-afe9-647ad835f4ba\") " pod="openshift-monitoring/thanos-querier-8648d6cb6d-f2ztn" Feb 17 16:09:35 crc kubenswrapper[4874]: I0217 16:09:35.061667 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" 
(UniqueName: \"kubernetes.io/secret/f960b1c3-0d01-4d4a-afe9-647ad835f4ba-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-8648d6cb6d-f2ztn\" (UID: \"f960b1c3-0d01-4d4a-afe9-647ad835f4ba\") " pod="openshift-monitoring/thanos-querier-8648d6cb6d-f2ztn" Feb 17 16:09:35 crc kubenswrapper[4874]: I0217 16:09:35.061688 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/f960b1c3-0d01-4d4a-afe9-647ad835f4ba-secret-thanos-querier-tls\") pod \"thanos-querier-8648d6cb6d-f2ztn\" (UID: \"f960b1c3-0d01-4d4a-afe9-647ad835f4ba\") " pod="openshift-monitoring/thanos-querier-8648d6cb6d-f2ztn" Feb 17 16:09:35 crc kubenswrapper[4874]: I0217 16:09:35.061713 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f960b1c3-0d01-4d4a-afe9-647ad835f4ba-metrics-client-ca\") pod \"thanos-querier-8648d6cb6d-f2ztn\" (UID: \"f960b1c3-0d01-4d4a-afe9-647ad835f4ba\") " pod="openshift-monitoring/thanos-querier-8648d6cb6d-f2ztn" Feb 17 16:09:35 crc kubenswrapper[4874]: I0217 16:09:35.061754 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2527f\" (UniqueName: \"kubernetes.io/projected/f960b1c3-0d01-4d4a-afe9-647ad835f4ba-kube-api-access-2527f\") pod \"thanos-querier-8648d6cb6d-f2ztn\" (UID: \"f960b1c3-0d01-4d4a-afe9-647ad835f4ba\") " pod="openshift-monitoring/thanos-querier-8648d6cb6d-f2ztn" Feb 17 16:09:35 crc kubenswrapper[4874]: I0217 16:09:35.061770 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/f960b1c3-0d01-4d4a-afe9-647ad835f4ba-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-8648d6cb6d-f2ztn\" (UID: 
\"f960b1c3-0d01-4d4a-afe9-647ad835f4ba\") " pod="openshift-monitoring/thanos-querier-8648d6cb6d-f2ztn" Feb 17 16:09:35 crc kubenswrapper[4874]: I0217 16:09:35.163364 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/f960b1c3-0d01-4d4a-afe9-647ad835f4ba-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-8648d6cb6d-f2ztn\" (UID: \"f960b1c3-0d01-4d4a-afe9-647ad835f4ba\") " pod="openshift-monitoring/thanos-querier-8648d6cb6d-f2ztn" Feb 17 16:09:35 crc kubenswrapper[4874]: I0217 16:09:35.163416 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/f960b1c3-0d01-4d4a-afe9-647ad835f4ba-secret-thanos-querier-tls\") pod \"thanos-querier-8648d6cb6d-f2ztn\" (UID: \"f960b1c3-0d01-4d4a-afe9-647ad835f4ba\") " pod="openshift-monitoring/thanos-querier-8648d6cb6d-f2ztn" Feb 17 16:09:35 crc kubenswrapper[4874]: I0217 16:09:35.163452 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f960b1c3-0d01-4d4a-afe9-647ad835f4ba-metrics-client-ca\") pod \"thanos-querier-8648d6cb6d-f2ztn\" (UID: \"f960b1c3-0d01-4d4a-afe9-647ad835f4ba\") " pod="openshift-monitoring/thanos-querier-8648d6cb6d-f2ztn" Feb 17 16:09:35 crc kubenswrapper[4874]: I0217 16:09:35.163507 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/f960b1c3-0d01-4d4a-afe9-647ad835f4ba-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-8648d6cb6d-f2ztn\" (UID: \"f960b1c3-0d01-4d4a-afe9-647ad835f4ba\") " pod="openshift-monitoring/thanos-querier-8648d6cb6d-f2ztn" Feb 17 16:09:35 crc kubenswrapper[4874]: I0217 16:09:35.163546 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-2527f\" (UniqueName: \"kubernetes.io/projected/f960b1c3-0d01-4d4a-afe9-647ad835f4ba-kube-api-access-2527f\") pod \"thanos-querier-8648d6cb6d-f2ztn\" (UID: \"f960b1c3-0d01-4d4a-afe9-647ad835f4ba\") " pod="openshift-monitoring/thanos-querier-8648d6cb6d-f2ztn" Feb 17 16:09:35 crc kubenswrapper[4874]: I0217 16:09:35.163571 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/f960b1c3-0d01-4d4a-afe9-647ad835f4ba-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-8648d6cb6d-f2ztn\" (UID: \"f960b1c3-0d01-4d4a-afe9-647ad835f4ba\") " pod="openshift-monitoring/thanos-querier-8648d6cb6d-f2ztn" Feb 17 16:09:35 crc kubenswrapper[4874]: I0217 16:09:35.163610 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/f960b1c3-0d01-4d4a-afe9-647ad835f4ba-secret-grpc-tls\") pod \"thanos-querier-8648d6cb6d-f2ztn\" (UID: \"f960b1c3-0d01-4d4a-afe9-647ad835f4ba\") " pod="openshift-monitoring/thanos-querier-8648d6cb6d-f2ztn" Feb 17 16:09:35 crc kubenswrapper[4874]: I0217 16:09:35.163637 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/f960b1c3-0d01-4d4a-afe9-647ad835f4ba-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-8648d6cb6d-f2ztn\" (UID: \"f960b1c3-0d01-4d4a-afe9-647ad835f4ba\") " pod="openshift-monitoring/thanos-querier-8648d6cb6d-f2ztn" Feb 17 16:09:35 crc kubenswrapper[4874]: I0217 16:09:35.166989 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f960b1c3-0d01-4d4a-afe9-647ad835f4ba-metrics-client-ca\") pod \"thanos-querier-8648d6cb6d-f2ztn\" (UID: \"f960b1c3-0d01-4d4a-afe9-647ad835f4ba\") " 
pod="openshift-monitoring/thanos-querier-8648d6cb6d-f2ztn" Feb 17 16:09:35 crc kubenswrapper[4874]: I0217 16:09:35.169987 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/f960b1c3-0d01-4d4a-afe9-647ad835f4ba-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-8648d6cb6d-f2ztn\" (UID: \"f960b1c3-0d01-4d4a-afe9-647ad835f4ba\") " pod="openshift-monitoring/thanos-querier-8648d6cb6d-f2ztn" Feb 17 16:09:35 crc kubenswrapper[4874]: I0217 16:09:35.170000 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/f960b1c3-0d01-4d4a-afe9-647ad835f4ba-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-8648d6cb6d-f2ztn\" (UID: \"f960b1c3-0d01-4d4a-afe9-647ad835f4ba\") " pod="openshift-monitoring/thanos-querier-8648d6cb6d-f2ztn" Feb 17 16:09:35 crc kubenswrapper[4874]: I0217 16:09:35.170291 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/f960b1c3-0d01-4d4a-afe9-647ad835f4ba-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-8648d6cb6d-f2ztn\" (UID: \"f960b1c3-0d01-4d4a-afe9-647ad835f4ba\") " pod="openshift-monitoring/thanos-querier-8648d6cb6d-f2ztn" Feb 17 16:09:35 crc kubenswrapper[4874]: I0217 16:09:35.170590 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/f960b1c3-0d01-4d4a-afe9-647ad835f4ba-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-8648d6cb6d-f2ztn\" (UID: \"f960b1c3-0d01-4d4a-afe9-647ad835f4ba\") " pod="openshift-monitoring/thanos-querier-8648d6cb6d-f2ztn" Feb 17 16:09:35 crc kubenswrapper[4874]: I0217 16:09:35.180011 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/f960b1c3-0d01-4d4a-afe9-647ad835f4ba-secret-thanos-querier-tls\") pod \"thanos-querier-8648d6cb6d-f2ztn\" (UID: \"f960b1c3-0d01-4d4a-afe9-647ad835f4ba\") " pod="openshift-monitoring/thanos-querier-8648d6cb6d-f2ztn" Feb 17 16:09:35 crc kubenswrapper[4874]: I0217 16:09:35.180858 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/f960b1c3-0d01-4d4a-afe9-647ad835f4ba-secret-grpc-tls\") pod \"thanos-querier-8648d6cb6d-f2ztn\" (UID: \"f960b1c3-0d01-4d4a-afe9-647ad835f4ba\") " pod="openshift-monitoring/thanos-querier-8648d6cb6d-f2ztn" Feb 17 16:09:35 crc kubenswrapper[4874]: I0217 16:09:35.192906 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2527f\" (UniqueName: \"kubernetes.io/projected/f960b1c3-0d01-4d4a-afe9-647ad835f4ba-kube-api-access-2527f\") pod \"thanos-querier-8648d6cb6d-f2ztn\" (UID: \"f960b1c3-0d01-4d4a-afe9-647ad835f4ba\") " pod="openshift-monitoring/thanos-querier-8648d6cb6d-f2ztn" Feb 17 16:09:35 crc kubenswrapper[4874]: I0217 16:09:35.302860 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-8648d6cb6d-f2ztn" Feb 17 16:09:35 crc kubenswrapper[4874]: I0217 16:09:35.633305 4874 generic.go:334] "Generic (PLEG): container finished" podID="2e987646-9c60-4bae-8352-a7f4136053d7" containerID="02abfb98aa6ad9351ff75691e2a8dd2f9f633080416b8b889241046f9a036a5d" exitCode=0 Feb 17 16:09:35 crc kubenswrapper[4874]: I0217 16:09:35.633433 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-pqd66" event={"ID":"2e987646-9c60-4bae-8352-a7f4136053d7","Type":"ContainerDied","Data":"02abfb98aa6ad9351ff75691e2a8dd2f9f633080416b8b889241046f9a036a5d"} Feb 17 16:09:35 crc kubenswrapper[4874]: I0217 16:09:35.647665 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9c65k" event={"ID":"071a9734-759a-44a4-b490-5cb1ef6838f3","Type":"ContainerStarted","Data":"14bb34ea64d793d3ecbbd861b0b55351202dd2adc4901ce36efd3b1be56171ae"} Feb 17 16:09:35 crc kubenswrapper[4874]: I0217 16:09:35.655226 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-f8lw4" event={"ID":"64de85cf-0328-4726-9609-30fbd6acdf09","Type":"ContainerStarted","Data":"39d12dde90d172215423533e6a38540e3b6d542328f355453b75778e82d0e2ae"} Feb 17 16:09:35 crc kubenswrapper[4874]: I0217 16:09:35.655767 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-f8lw4" event={"ID":"64de85cf-0328-4726-9609-30fbd6acdf09","Type":"ContainerStarted","Data":"c4155ec4731230a5c10bbb4d5cf68e8b48a9b1079adaadfc2e4cee2fae32e3bc"} Feb 17 16:09:35 crc kubenswrapper[4874]: I0217 16:09:35.655896 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-f8lw4" 
event={"ID":"64de85cf-0328-4726-9609-30fbd6acdf09","Type":"ContainerStarted","Data":"1bb34e946aba9741e84321de1404c044600ae7922efae71d80bd8a5bf8de2fd9"} Feb 17 16:09:35 crc kubenswrapper[4874]: I0217 16:09:35.656918 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"137ce3c3-521e-4df1-8294-7900b32e2886","Type":"ContainerStarted","Data":"54393ff28e213388fc6e6f77f8e95295183cfa971311d3d9c9aa02a51630cb94"} Feb 17 16:09:35 crc kubenswrapper[4874]: I0217 16:09:35.997888 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-8648d6cb6d-f2ztn"] Feb 17 16:09:36 crc kubenswrapper[4874]: W0217 16:09:36.009781 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf960b1c3_0d01_4d4a_afe9_647ad835f4ba.slice/crio-8b26d3deef09e2ffc183ab1eaf352cd4ddf7f399fe3d666b4462fe4b3ae8ff63 WatchSource:0}: Error finding container 8b26d3deef09e2ffc183ab1eaf352cd4ddf7f399fe3d666b4462fe4b3ae8ff63: Status 404 returned error can't find the container with id 8b26d3deef09e2ffc183ab1eaf352cd4ddf7f399fe3d666b4462fe4b3ae8ff63 Feb 17 16:09:36 crc kubenswrapper[4874]: I0217 16:09:36.666228 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-pqd66" event={"ID":"2e987646-9c60-4bae-8352-a7f4136053d7","Type":"ContainerStarted","Data":"4c86ce23cd0deabfd925112515acabfad4abcb9e8bb39df7689a604b6ce15337"} Feb 17 16:09:36 crc kubenswrapper[4874]: I0217 16:09:36.666286 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-pqd66" event={"ID":"2e987646-9c60-4bae-8352-a7f4136053d7","Type":"ContainerStarted","Data":"c1e9fbfe086235665ec6cab5337cfcfe883ab267c35c0d590af8050f61932ff8"} Feb 17 16:09:36 crc kubenswrapper[4874]: I0217 16:09:36.667333 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8648d6cb6d-f2ztn" 
event={"ID":"f960b1c3-0d01-4d4a-afe9-647ad835f4ba","Type":"ContainerStarted","Data":"8b26d3deef09e2ffc183ab1eaf352cd4ddf7f399fe3d666b4462fe4b3ae8ff63"} Feb 17 16:09:36 crc kubenswrapper[4874]: I0217 16:09:36.681749 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-pqd66" podStartSLOduration=3.264132296 podStartE2EDuration="4.681730328s" podCreationTimestamp="2026-02-17 16:09:32 +0000 UTC" firstStartedPulling="2026-02-17 16:09:33.224191049 +0000 UTC m=+383.518579610" lastFinishedPulling="2026-02-17 16:09:34.641789071 +0000 UTC m=+384.936177642" observedRunningTime="2026-02-17 16:09:36.681276987 +0000 UTC m=+386.975665578" watchObservedRunningTime="2026-02-17 16:09:36.681730328 +0000 UTC m=+386.976118910" Feb 17 16:09:37 crc kubenswrapper[4874]: I0217 16:09:37.684641 4874 generic.go:334] "Generic (PLEG): container finished" podID="137ce3c3-521e-4df1-8294-7900b32e2886" containerID="7a8183f4dc3991d7a9ba43fca3cd51a136c02f60bc79eb0dfadaba8a4d0556e2" exitCode=0 Feb 17 16:09:37 crc kubenswrapper[4874]: I0217 16:09:37.685048 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"137ce3c3-521e-4df1-8294-7900b32e2886","Type":"ContainerDied","Data":"7a8183f4dc3991d7a9ba43fca3cd51a136c02f60bc79eb0dfadaba8a4d0556e2"} Feb 17 16:09:37 crc kubenswrapper[4874]: I0217 16:09:37.691838 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9c65k" event={"ID":"071a9734-759a-44a4-b490-5cb1ef6838f3","Type":"ContainerStarted","Data":"159d287f276b019300ca9997653912965d4df3bc7d956b7b21d1f23899bbe400"} Feb 17 16:09:37 crc kubenswrapper[4874]: I0217 16:09:37.691955 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9c65k" 
event={"ID":"071a9734-759a-44a4-b490-5cb1ef6838f3","Type":"ContainerStarted","Data":"b04683aec7a96ffcdc7f5fc7f870151886ddc086a43f318a71f5d92aa4321d5b"} Feb 17 16:09:37 crc kubenswrapper[4874]: I0217 16:09:37.695150 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-f8lw4" event={"ID":"64de85cf-0328-4726-9609-30fbd6acdf09","Type":"ContainerStarted","Data":"8aaa70e611129ee67c9c02938c0c751eeda19b6985ffc33a095b95e601f3e32e"} Feb 17 16:09:37 crc kubenswrapper[4874]: I0217 16:09:37.742217 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-f8lw4" podStartSLOduration=3.425608532 podStartE2EDuration="5.742200341s" podCreationTimestamp="2026-02-17 16:09:32 +0000 UTC" firstStartedPulling="2026-02-17 16:09:34.859807033 +0000 UTC m=+385.154195614" lastFinishedPulling="2026-02-17 16:09:37.176398862 +0000 UTC m=+387.470787423" observedRunningTime="2026-02-17 16:09:37.738593438 +0000 UTC m=+388.032981999" watchObservedRunningTime="2026-02-17 16:09:37.742200341 +0000 UTC m=+388.036588902" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.244721 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-69f8c984c7-wtsdw"] Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.245590 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-69f8c984c7-wtsdw" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.252044 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.252385 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.252434 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.252587 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-cj26m" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.252666 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-c602291l7o71q" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.252709 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.276341 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-69f8c984c7-wtsdw"] Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.281132 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/3a216c9c-8f23-46b4-b6f8-b1f24c73ed52-audit-log\") pod \"metrics-server-69f8c984c7-wtsdw\" (UID: \"3a216c9c-8f23-46b4-b6f8-b1f24c73ed52\") " pod="openshift-monitoring/metrics-server-69f8c984c7-wtsdw" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.281208 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/3a216c9c-8f23-46b4-b6f8-b1f24c73ed52-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-69f8c984c7-wtsdw\" (UID: \"3a216c9c-8f23-46b4-b6f8-b1f24c73ed52\") " pod="openshift-monitoring/metrics-server-69f8c984c7-wtsdw" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.281262 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/3a216c9c-8f23-46b4-b6f8-b1f24c73ed52-secret-metrics-client-certs\") pod \"metrics-server-69f8c984c7-wtsdw\" (UID: \"3a216c9c-8f23-46b4-b6f8-b1f24c73ed52\") " pod="openshift-monitoring/metrics-server-69f8c984c7-wtsdw" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.281299 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78n5h\" (UniqueName: \"kubernetes.io/projected/3a216c9c-8f23-46b4-b6f8-b1f24c73ed52-kube-api-access-78n5h\") pod \"metrics-server-69f8c984c7-wtsdw\" (UID: \"3a216c9c-8f23-46b4-b6f8-b1f24c73ed52\") " pod="openshift-monitoring/metrics-server-69f8c984c7-wtsdw" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.281333 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/3a216c9c-8f23-46b4-b6f8-b1f24c73ed52-metrics-server-audit-profiles\") pod \"metrics-server-69f8c984c7-wtsdw\" (UID: \"3a216c9c-8f23-46b4-b6f8-b1f24c73ed52\") " pod="openshift-monitoring/metrics-server-69f8c984c7-wtsdw" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.281435 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/3a216c9c-8f23-46b4-b6f8-b1f24c73ed52-secret-metrics-server-tls\") pod \"metrics-server-69f8c984c7-wtsdw\" (UID: \"3a216c9c-8f23-46b4-b6f8-b1f24c73ed52\") " 
pod="openshift-monitoring/metrics-server-69f8c984c7-wtsdw" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.281467 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a216c9c-8f23-46b4-b6f8-b1f24c73ed52-client-ca-bundle\") pod \"metrics-server-69f8c984c7-wtsdw\" (UID: \"3a216c9c-8f23-46b4-b6f8-b1f24c73ed52\") " pod="openshift-monitoring/metrics-server-69f8c984c7-wtsdw" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.382957 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/3a216c9c-8f23-46b4-b6f8-b1f24c73ed52-secret-metrics-server-tls\") pod \"metrics-server-69f8c984c7-wtsdw\" (UID: \"3a216c9c-8f23-46b4-b6f8-b1f24c73ed52\") " pod="openshift-monitoring/metrics-server-69f8c984c7-wtsdw" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.383020 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a216c9c-8f23-46b4-b6f8-b1f24c73ed52-client-ca-bundle\") pod \"metrics-server-69f8c984c7-wtsdw\" (UID: \"3a216c9c-8f23-46b4-b6f8-b1f24c73ed52\") " pod="openshift-monitoring/metrics-server-69f8c984c7-wtsdw" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.383068 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/3a216c9c-8f23-46b4-b6f8-b1f24c73ed52-audit-log\") pod \"metrics-server-69f8c984c7-wtsdw\" (UID: \"3a216c9c-8f23-46b4-b6f8-b1f24c73ed52\") " pod="openshift-monitoring/metrics-server-69f8c984c7-wtsdw" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.383113 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/3a216c9c-8f23-46b4-b6f8-b1f24c73ed52-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-69f8c984c7-wtsdw\" (UID: \"3a216c9c-8f23-46b4-b6f8-b1f24c73ed52\") " pod="openshift-monitoring/metrics-server-69f8c984c7-wtsdw" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.383145 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/3a216c9c-8f23-46b4-b6f8-b1f24c73ed52-secret-metrics-client-certs\") pod \"metrics-server-69f8c984c7-wtsdw\" (UID: \"3a216c9c-8f23-46b4-b6f8-b1f24c73ed52\") " pod="openshift-monitoring/metrics-server-69f8c984c7-wtsdw" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.383163 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78n5h\" (UniqueName: \"kubernetes.io/projected/3a216c9c-8f23-46b4-b6f8-b1f24c73ed52-kube-api-access-78n5h\") pod \"metrics-server-69f8c984c7-wtsdw\" (UID: \"3a216c9c-8f23-46b4-b6f8-b1f24c73ed52\") " pod="openshift-monitoring/metrics-server-69f8c984c7-wtsdw" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.383179 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/3a216c9c-8f23-46b4-b6f8-b1f24c73ed52-metrics-server-audit-profiles\") pod \"metrics-server-69f8c984c7-wtsdw\" (UID: \"3a216c9c-8f23-46b4-b6f8-b1f24c73ed52\") " pod="openshift-monitoring/metrics-server-69f8c984c7-wtsdw" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.384591 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/3a216c9c-8f23-46b4-b6f8-b1f24c73ed52-metrics-server-audit-profiles\") pod \"metrics-server-69f8c984c7-wtsdw\" (UID: \"3a216c9c-8f23-46b4-b6f8-b1f24c73ed52\") " pod="openshift-monitoring/metrics-server-69f8c984c7-wtsdw" Feb 17 16:09:38 crc 
kubenswrapper[4874]: I0217 16:09:38.384891 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/3a216c9c-8f23-46b4-b6f8-b1f24c73ed52-audit-log\") pod \"metrics-server-69f8c984c7-wtsdw\" (UID: \"3a216c9c-8f23-46b4-b6f8-b1f24c73ed52\") " pod="openshift-monitoring/metrics-server-69f8c984c7-wtsdw" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.385334 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a216c9c-8f23-46b4-b6f8-b1f24c73ed52-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-69f8c984c7-wtsdw\" (UID: \"3a216c9c-8f23-46b4-b6f8-b1f24c73ed52\") " pod="openshift-monitoring/metrics-server-69f8c984c7-wtsdw" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.388697 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/3a216c9c-8f23-46b4-b6f8-b1f24c73ed52-secret-metrics-server-tls\") pod \"metrics-server-69f8c984c7-wtsdw\" (UID: \"3a216c9c-8f23-46b4-b6f8-b1f24c73ed52\") " pod="openshift-monitoring/metrics-server-69f8c984c7-wtsdw" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.396186 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a216c9c-8f23-46b4-b6f8-b1f24c73ed52-client-ca-bundle\") pod \"metrics-server-69f8c984c7-wtsdw\" (UID: \"3a216c9c-8f23-46b4-b6f8-b1f24c73ed52\") " pod="openshift-monitoring/metrics-server-69f8c984c7-wtsdw" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.403674 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78n5h\" (UniqueName: \"kubernetes.io/projected/3a216c9c-8f23-46b4-b6f8-b1f24c73ed52-kube-api-access-78n5h\") pod \"metrics-server-69f8c984c7-wtsdw\" (UID: \"3a216c9c-8f23-46b4-b6f8-b1f24c73ed52\") " 
pod="openshift-monitoring/metrics-server-69f8c984c7-wtsdw" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.407570 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/3a216c9c-8f23-46b4-b6f8-b1f24c73ed52-secret-metrics-client-certs\") pod \"metrics-server-69f8c984c7-wtsdw\" (UID: \"3a216c9c-8f23-46b4-b6f8-b1f24c73ed52\") " pod="openshift-monitoring/metrics-server-69f8c984c7-wtsdw" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.568769 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-69f8c984c7-wtsdw" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.639571 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-694c74f7bf-dr4fv"] Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.640799 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-694c74f7bf-dr4fv" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.645354 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.645745 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.651698 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-694c74f7bf-dr4fv"] Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.686854 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/66f020dd-a67e-42d0-8f03-a8c12dee1dbd-monitoring-plugin-cert\") pod \"monitoring-plugin-694c74f7bf-dr4fv\" (UID: \"66f020dd-a67e-42d0-8f03-a8c12dee1dbd\") " 
pod="openshift-monitoring/monitoring-plugin-694c74f7bf-dr4fv" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.703864 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9c65k" event={"ID":"071a9734-759a-44a4-b490-5cb1ef6838f3","Type":"ContainerStarted","Data":"c002946850a8deeb5af57787de4e346e980a5013b1ccba0dad4e21cc7757c09c"} Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.724190 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9c65k" podStartSLOduration=4.135430035 podStartE2EDuration="6.72416672s" podCreationTimestamp="2026-02-17 16:09:32 +0000 UTC" firstStartedPulling="2026-02-17 16:09:34.593842378 +0000 UTC m=+384.888230969" lastFinishedPulling="2026-02-17 16:09:37.182579083 +0000 UTC m=+387.476967654" observedRunningTime="2026-02-17 16:09:38.717859466 +0000 UTC m=+389.012248037" watchObservedRunningTime="2026-02-17 16:09:38.72416672 +0000 UTC m=+389.018555291" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.788105 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/66f020dd-a67e-42d0-8f03-a8c12dee1dbd-monitoring-plugin-cert\") pod \"monitoring-plugin-694c74f7bf-dr4fv\" (UID: \"66f020dd-a67e-42d0-8f03-a8c12dee1dbd\") " pod="openshift-monitoring/monitoring-plugin-694c74f7bf-dr4fv" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.794271 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/66f020dd-a67e-42d0-8f03-a8c12dee1dbd-monitoring-plugin-cert\") pod \"monitoring-plugin-694c74f7bf-dr4fv\" (UID: \"66f020dd-a67e-42d0-8f03-a8c12dee1dbd\") " pod="openshift-monitoring/monitoring-plugin-694c74f7bf-dr4fv" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.948815 4874 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-console/console-8687f49f77-skmg5"] Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.953650 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-8687f49f77-skmg5" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.954463 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-8687f49f77-skmg5"] Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.990917 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-service-ca\") pod \"console-8687f49f77-skmg5\" (UID: \"fbb6b1c5-d8a4-491a-9f8d-58254190e96e\") " pod="openshift-console/console-8687f49f77-skmg5" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.991341 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-oauth-serving-cert\") pod \"console-8687f49f77-skmg5\" (UID: \"fbb6b1c5-d8a4-491a-9f8d-58254190e96e\") " pod="openshift-console/console-8687f49f77-skmg5" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.991448 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-console-oauth-config\") pod \"console-8687f49f77-skmg5\" (UID: \"fbb6b1c5-d8a4-491a-9f8d-58254190e96e\") " pod="openshift-console/console-8687f49f77-skmg5" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.991543 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-console-serving-cert\") pod \"console-8687f49f77-skmg5\" (UID: 
\"fbb6b1c5-d8a4-491a-9f8d-58254190e96e\") " pod="openshift-console/console-8687f49f77-skmg5" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.991641 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdx2j\" (UniqueName: \"kubernetes.io/projected/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-kube-api-access-fdx2j\") pod \"console-8687f49f77-skmg5\" (UID: \"fbb6b1c5-d8a4-491a-9f8d-58254190e96e\") " pod="openshift-console/console-8687f49f77-skmg5" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.991793 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-console-config\") pod \"console-8687f49f77-skmg5\" (UID: \"fbb6b1c5-d8a4-491a-9f8d-58254190e96e\") " pod="openshift-console/console-8687f49f77-skmg5" Feb 17 16:09:38 crc kubenswrapper[4874]: I0217 16:09:38.991889 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-trusted-ca-bundle\") pod \"console-8687f49f77-skmg5\" (UID: \"fbb6b1c5-d8a4-491a-9f8d-58254190e96e\") " pod="openshift-console/console-8687f49f77-skmg5" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.031555 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-694c74f7bf-dr4fv" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.072612 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-69f8c984c7-wtsdw"] Feb 17 16:09:39 crc kubenswrapper[4874]: W0217 16:09:39.083024 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a216c9c_8f23_46b4_b6f8_b1f24c73ed52.slice/crio-970522cb837c380c05e07a6831395d1b71447772c35bbe3b94af313ae4ef84df WatchSource:0}: Error finding container 970522cb837c380c05e07a6831395d1b71447772c35bbe3b94af313ae4ef84df: Status 404 returned error can't find the container with id 970522cb837c380c05e07a6831395d1b71447772c35bbe3b94af313ae4ef84df Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.093131 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-oauth-serving-cert\") pod \"console-8687f49f77-skmg5\" (UID: \"fbb6b1c5-d8a4-491a-9f8d-58254190e96e\") " pod="openshift-console/console-8687f49f77-skmg5" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.093184 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-console-oauth-config\") pod \"console-8687f49f77-skmg5\" (UID: \"fbb6b1c5-d8a4-491a-9f8d-58254190e96e\") " pod="openshift-console/console-8687f49f77-skmg5" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.093214 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-console-serving-cert\") pod \"console-8687f49f77-skmg5\" (UID: \"fbb6b1c5-d8a4-491a-9f8d-58254190e96e\") " pod="openshift-console/console-8687f49f77-skmg5" 
Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.093241 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdx2j\" (UniqueName: \"kubernetes.io/projected/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-kube-api-access-fdx2j\") pod \"console-8687f49f77-skmg5\" (UID: \"fbb6b1c5-d8a4-491a-9f8d-58254190e96e\") " pod="openshift-console/console-8687f49f77-skmg5" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.093279 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-console-config\") pod \"console-8687f49f77-skmg5\" (UID: \"fbb6b1c5-d8a4-491a-9f8d-58254190e96e\") " pod="openshift-console/console-8687f49f77-skmg5" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.093301 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-trusted-ca-bundle\") pod \"console-8687f49f77-skmg5\" (UID: \"fbb6b1c5-d8a4-491a-9f8d-58254190e96e\") " pod="openshift-console/console-8687f49f77-skmg5" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.093332 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-service-ca\") pod \"console-8687f49f77-skmg5\" (UID: \"fbb6b1c5-d8a4-491a-9f8d-58254190e96e\") " pod="openshift-console/console-8687f49f77-skmg5" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.094583 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-service-ca\") pod \"console-8687f49f77-skmg5\" (UID: \"fbb6b1c5-d8a4-491a-9f8d-58254190e96e\") " pod="openshift-console/console-8687f49f77-skmg5" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 
16:09:39.095341 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-console-config\") pod \"console-8687f49f77-skmg5\" (UID: \"fbb6b1c5-d8a4-491a-9f8d-58254190e96e\") " pod="openshift-console/console-8687f49f77-skmg5" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.096593 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-trusted-ca-bundle\") pod \"console-8687f49f77-skmg5\" (UID: \"fbb6b1c5-d8a4-491a-9f8d-58254190e96e\") " pod="openshift-console/console-8687f49f77-skmg5" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.100601 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-console-oauth-config\") pod \"console-8687f49f77-skmg5\" (UID: \"fbb6b1c5-d8a4-491a-9f8d-58254190e96e\") " pod="openshift-console/console-8687f49f77-skmg5" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.103462 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-console-serving-cert\") pod \"console-8687f49f77-skmg5\" (UID: \"fbb6b1c5-d8a4-491a-9f8d-58254190e96e\") " pod="openshift-console/console-8687f49f77-skmg5" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.103526 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-oauth-serving-cert\") pod \"console-8687f49f77-skmg5\" (UID: \"fbb6b1c5-d8a4-491a-9f8d-58254190e96e\") " pod="openshift-console/console-8687f49f77-skmg5" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.116294 4874 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fdx2j\" (UniqueName: \"kubernetes.io/projected/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-kube-api-access-fdx2j\") pod \"console-8687f49f77-skmg5\" (UID: \"fbb6b1c5-d8a4-491a-9f8d-58254190e96e\") " pod="openshift-console/console-8687f49f77-skmg5" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.126943 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-59c9d6bf67-7kcg4"] Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.127183 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-59c9d6bf67-7kcg4" podUID="6ace4fc8-8ecc-4276-b682-fbd5a087fa45" containerName="controller-manager" containerID="cri-o://31768878fd0082321369fbd898caaa4f2cb1b708482f6f70b496cdf7f57323a4" gracePeriod=30 Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.153447 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-656f6c855f-jnpkl"] Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.153992 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-jnpkl" podUID="b65f997d-9930-4d62-881d-75ff90a7b2c0" containerName="route-controller-manager" containerID="cri-o://e2a976409b4fd5757176ccdfdbed0b5d9717b18ddd506dee09fd834749515acd" gracePeriod=30 Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.285583 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-8687f49f77-skmg5" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.548710 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-694c74f7bf-dr4fv"] Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.654364 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.663347 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.673614 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.674303 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-9i23ggtcj04oi" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.674494 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.674808 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.674976 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.675447 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.675988 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-59c9d6bf67-7kcg4" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.676160 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.676648 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.677392 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.677497 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-4qwjd" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.677554 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.677628 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.678471 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.679831 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.708562 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ace4fc8-8ecc-4276-b682-fbd5a087fa45-client-ca\") pod \"6ace4fc8-8ecc-4276-b682-fbd5a087fa45\" (UID: \"6ace4fc8-8ecc-4276-b682-fbd5a087fa45\") " Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.708626 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ace4fc8-8ecc-4276-b682-fbd5a087fa45-config\") pod \"6ace4fc8-8ecc-4276-b682-fbd5a087fa45\" (UID: \"6ace4fc8-8ecc-4276-b682-fbd5a087fa45\") " Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.708672 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ace4fc8-8ecc-4276-b682-fbd5a087fa45-serving-cert\") pod \"6ace4fc8-8ecc-4276-b682-fbd5a087fa45\" (UID: \"6ace4fc8-8ecc-4276-b682-fbd5a087fa45\") " Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.708715 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5gp8\" (UniqueName: \"kubernetes.io/projected/6ace4fc8-8ecc-4276-b682-fbd5a087fa45-kube-api-access-v5gp8\") pod \"6ace4fc8-8ecc-4276-b682-fbd5a087fa45\" (UID: \"6ace4fc8-8ecc-4276-b682-fbd5a087fa45\") " Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.708781 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ace4fc8-8ecc-4276-b682-fbd5a087fa45-proxy-ca-bundles\") pod \"6ace4fc8-8ecc-4276-b682-fbd5a087fa45\" (UID: \"6ace4fc8-8ecc-4276-b682-fbd5a087fa45\") " Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.709266 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bdd0cc15-b5bd-4703-8d69-5569eba61152-config\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.709291 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/bdd0cc15-b5bd-4703-8d69-5569eba61152-web-config\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.709324 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/bdd0cc15-b5bd-4703-8d69-5569eba61152-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.709349 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/bdd0cc15-b5bd-4703-8d69-5569eba61152-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.709369 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdd0cc15-b5bd-4703-8d69-5569eba61152-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.709399 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/bdd0cc15-b5bd-4703-8d69-5569eba61152-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.709426 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/bdd0cc15-b5bd-4703-8d69-5569eba61152-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.709445 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr2gg\" (UniqueName: \"kubernetes.io/projected/bdd0cc15-b5bd-4703-8d69-5569eba61152-kube-api-access-kr2gg\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.709466 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/bdd0cc15-b5bd-4703-8d69-5569eba61152-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.709488 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/bdd0cc15-b5bd-4703-8d69-5569eba61152-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.709506 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/bdd0cc15-b5bd-4703-8d69-5569eba61152-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.709529 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdd0cc15-b5bd-4703-8d69-5569eba61152-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.709553 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/bdd0cc15-b5bd-4703-8d69-5569eba61152-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.709577 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/bdd0cc15-b5bd-4703-8d69-5569eba61152-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.709594 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdd0cc15-b5bd-4703-8d69-5569eba61152-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.709616 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bdd0cc15-b5bd-4703-8d69-5569eba61152-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.709635 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/bdd0cc15-b5bd-4703-8d69-5569eba61152-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.709657 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/bdd0cc15-b5bd-4703-8d69-5569eba61152-config-out\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.710465 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ace4fc8-8ecc-4276-b682-fbd5a087fa45-config" (OuterVolumeSpecName: "config") pod "6ace4fc8-8ecc-4276-b682-fbd5a087fa45" (UID: "6ace4fc8-8ecc-4276-b682-fbd5a087fa45"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.711301 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ace4fc8-8ecc-4276-b682-fbd5a087fa45-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6ace4fc8-8ecc-4276-b682-fbd5a087fa45" (UID: "6ace4fc8-8ecc-4276-b682-fbd5a087fa45"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.712804 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ace4fc8-8ecc-4276-b682-fbd5a087fa45-client-ca" (OuterVolumeSpecName: "client-ca") pod "6ace4fc8-8ecc-4276-b682-fbd5a087fa45" (UID: "6ace4fc8-8ecc-4276-b682-fbd5a087fa45"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.715093 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ace4fc8-8ecc-4276-b682-fbd5a087fa45-kube-api-access-v5gp8" (OuterVolumeSpecName: "kube-api-access-v5gp8") pod "6ace4fc8-8ecc-4276-b682-fbd5a087fa45" (UID: "6ace4fc8-8ecc-4276-b682-fbd5a087fa45"). InnerVolumeSpecName "kube-api-access-v5gp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.720146 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-8687f49f77-skmg5"] Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.740580 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ace4fc8-8ecc-4276-b682-fbd5a087fa45-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6ace4fc8-8ecc-4276-b682-fbd5a087fa45" (UID: "6ace4fc8-8ecc-4276-b682-fbd5a087fa45"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.749903 4874 generic.go:334] "Generic (PLEG): container finished" podID="b65f997d-9930-4d62-881d-75ff90a7b2c0" containerID="e2a976409b4fd5757176ccdfdbed0b5d9717b18ddd506dee09fd834749515acd" exitCode=0 Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.749994 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-jnpkl" event={"ID":"b65f997d-9930-4d62-881d-75ff90a7b2c0","Type":"ContainerDied","Data":"e2a976409b4fd5757176ccdfdbed0b5d9717b18ddd506dee09fd834749515acd"} Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.760116 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8648d6cb6d-f2ztn" event={"ID":"f960b1c3-0d01-4d4a-afe9-647ad835f4ba","Type":"ContainerStarted","Data":"9e8056eb624dccfb3a1ceed924717cf317fb61b4388e846236f0780a823b4a71"} Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.760160 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8648d6cb6d-f2ztn" event={"ID":"f960b1c3-0d01-4d4a-afe9-647ad835f4ba","Type":"ContainerStarted","Data":"72cc93ba63729667ce63a8a20662bfcfd725379b38851861b95903923f733c6a"} Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.760170 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8648d6cb6d-f2ztn" event={"ID":"f960b1c3-0d01-4d4a-afe9-647ad835f4ba","Type":"ContainerStarted","Data":"d35fd2e9199a2b7e817a6f323dc3c1aea6649fbc7cf192edb4f2f878f9130960"} Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.761732 4874 generic.go:334] "Generic (PLEG): container finished" podID="6ace4fc8-8ecc-4276-b682-fbd5a087fa45" containerID="31768878fd0082321369fbd898caaa4f2cb1b708482f6f70b496cdf7f57323a4" exitCode=0 Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.761796 4874 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59c9d6bf67-7kcg4" event={"ID":"6ace4fc8-8ecc-4276-b682-fbd5a087fa45","Type":"ContainerDied","Data":"31768878fd0082321369fbd898caaa4f2cb1b708482f6f70b496cdf7f57323a4"} Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.761825 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59c9d6bf67-7kcg4" event={"ID":"6ace4fc8-8ecc-4276-b682-fbd5a087fa45","Type":"ContainerDied","Data":"0f08c99c4052a54d5f944376fcd422a57412049b5a38b2fd6cc96c531f8a6d6a"} Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.761844 4874 scope.go:117] "RemoveContainer" containerID="31768878fd0082321369fbd898caaa4f2cb1b708482f6f70b496cdf7f57323a4" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.761953 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59c9d6bf67-7kcg4" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.765997 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-jnpkl" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.766267 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-694c74f7bf-dr4fv" event={"ID":"66f020dd-a67e-42d0-8f03-a8c12dee1dbd","Type":"ContainerStarted","Data":"5ba69e1a7a55d4363dc53095cbb898d02707553667e9258edb901e43e19f88b2"} Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.778181 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-69f8c984c7-wtsdw" event={"ID":"3a216c9c-8f23-46b4-b6f8-b1f24c73ed52","Type":"ContainerStarted","Data":"970522cb837c380c05e07a6831395d1b71447772c35bbe3b94af313ae4ef84df"} Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.810669 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-59c9d6bf67-7kcg4"] Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.811190 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59zjl\" (UniqueName: \"kubernetes.io/projected/b65f997d-9930-4d62-881d-75ff90a7b2c0-kube-api-access-59zjl\") pod \"b65f997d-9930-4d62-881d-75ff90a7b2c0\" (UID: \"b65f997d-9930-4d62-881d-75ff90a7b2c0\") " Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.811342 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b65f997d-9930-4d62-881d-75ff90a7b2c0-serving-cert\") pod \"b65f997d-9930-4d62-881d-75ff90a7b2c0\" (UID: \"b65f997d-9930-4d62-881d-75ff90a7b2c0\") " Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.811425 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b65f997d-9930-4d62-881d-75ff90a7b2c0-config\") pod \"b65f997d-9930-4d62-881d-75ff90a7b2c0\" (UID: 
\"b65f997d-9930-4d62-881d-75ff90a7b2c0\") " Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.811511 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b65f997d-9930-4d62-881d-75ff90a7b2c0-client-ca\") pod \"b65f997d-9930-4d62-881d-75ff90a7b2c0\" (UID: \"b65f997d-9930-4d62-881d-75ff90a7b2c0\") " Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.811656 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/bdd0cc15-b5bd-4703-8d69-5569eba61152-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.811689 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/bdd0cc15-b5bd-4703-8d69-5569eba61152-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.811861 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr2gg\" (UniqueName: \"kubernetes.io/projected/bdd0cc15-b5bd-4703-8d69-5569eba61152-kube-api-access-kr2gg\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.811881 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/bdd0cc15-b5bd-4703-8d69-5569eba61152-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc 
kubenswrapper[4874]: I0217 16:09:39.811901 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/bdd0cc15-b5bd-4703-8d69-5569eba61152-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.811918 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/bdd0cc15-b5bd-4703-8d69-5569eba61152-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.811936 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdd0cc15-b5bd-4703-8d69-5569eba61152-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.811957 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/bdd0cc15-b5bd-4703-8d69-5569eba61152-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.811988 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/bdd0cc15-b5bd-4703-8d69-5569eba61152-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 
16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.812007 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdd0cc15-b5bd-4703-8d69-5569eba61152-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.812085 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bdd0cc15-b5bd-4703-8d69-5569eba61152-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.812106 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/bdd0cc15-b5bd-4703-8d69-5569eba61152-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.812127 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/bdd0cc15-b5bd-4703-8d69-5569eba61152-config-out\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.812147 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bdd0cc15-b5bd-4703-8d69-5569eba61152-config\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.812195 4874 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/bdd0cc15-b5bd-4703-8d69-5569eba61152-web-config\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.812230 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/bdd0cc15-b5bd-4703-8d69-5569eba61152-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.812269 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/bdd0cc15-b5bd-4703-8d69-5569eba61152-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.812289 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdd0cc15-b5bd-4703-8d69-5569eba61152-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.812389 4874 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ace4fc8-8ecc-4276-b682-fbd5a087fa45-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.812400 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5gp8\" (UniqueName: 
\"kubernetes.io/projected/6ace4fc8-8ecc-4276-b682-fbd5a087fa45-kube-api-access-v5gp8\") on node \"crc\" DevicePath \"\"" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.812430 4874 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ace4fc8-8ecc-4276-b682-fbd5a087fa45-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.812439 4874 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ace4fc8-8ecc-4276-b682-fbd5a087fa45-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.812447 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ace4fc8-8ecc-4276-b682-fbd5a087fa45-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.814276 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-59c9d6bf67-7kcg4"] Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.814657 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b65f997d-9930-4d62-881d-75ff90a7b2c0-client-ca" (OuterVolumeSpecName: "client-ca") pod "b65f997d-9930-4d62-881d-75ff90a7b2c0" (UID: "b65f997d-9930-4d62-881d-75ff90a7b2c0"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.814812 4874 scope.go:117] "RemoveContainer" containerID="31768878fd0082321369fbd898caaa4f2cb1b708482f6f70b496cdf7f57323a4" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.815041 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b65f997d-9930-4d62-881d-75ff90a7b2c0-config" (OuterVolumeSpecName: "config") pod "b65f997d-9930-4d62-881d-75ff90a7b2c0" (UID: "b65f997d-9930-4d62-881d-75ff90a7b2c0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.817422 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdd0cc15-b5bd-4703-8d69-5569eba61152-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: E0217 16:09:39.817790 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31768878fd0082321369fbd898caaa4f2cb1b708482f6f70b496cdf7f57323a4\": container with ID starting with 31768878fd0082321369fbd898caaa4f2cb1b708482f6f70b496cdf7f57323a4 not found: ID does not exist" containerID="31768878fd0082321369fbd898caaa4f2cb1b708482f6f70b496cdf7f57323a4" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.817822 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31768878fd0082321369fbd898caaa4f2cb1b708482f6f70b496cdf7f57323a4"} err="failed to get container status \"31768878fd0082321369fbd898caaa4f2cb1b708482f6f70b496cdf7f57323a4\": rpc error: code = NotFound desc = could not find container \"31768878fd0082321369fbd898caaa4f2cb1b708482f6f70b496cdf7f57323a4\": container with ID 
starting with 31768878fd0082321369fbd898caaa4f2cb1b708482f6f70b496cdf7f57323a4 not found: ID does not exist" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.818221 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/bdd0cc15-b5bd-4703-8d69-5569eba61152-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.818296 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/bdd0cc15-b5bd-4703-8d69-5569eba61152-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.821032 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdd0cc15-b5bd-4703-8d69-5569eba61152-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.821145 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdd0cc15-b5bd-4703-8d69-5569eba61152-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.821173 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/bdd0cc15-b5bd-4703-8d69-5569eba61152-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") 
" pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.821204 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/bdd0cc15-b5bd-4703-8d69-5569eba61152-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.821260 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b65f997d-9930-4d62-881d-75ff90a7b2c0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b65f997d-9930-4d62-881d-75ff90a7b2c0" (UID: "b65f997d-9930-4d62-881d-75ff90a7b2c0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.821715 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/bdd0cc15-b5bd-4703-8d69-5569eba61152-web-config\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.821890 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bdd0cc15-b5bd-4703-8d69-5569eba61152-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.822095 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/bdd0cc15-b5bd-4703-8d69-5569eba61152-config-out\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.822307 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b65f997d-9930-4d62-881d-75ff90a7b2c0-kube-api-access-59zjl" (OuterVolumeSpecName: "kube-api-access-59zjl") pod "b65f997d-9930-4d62-881d-75ff90a7b2c0" (UID: "b65f997d-9930-4d62-881d-75ff90a7b2c0"). InnerVolumeSpecName "kube-api-access-59zjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.822334 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/bdd0cc15-b5bd-4703-8d69-5569eba61152-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.824769 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/bdd0cc15-b5bd-4703-8d69-5569eba61152-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.827723 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/bdd0cc15-b5bd-4703-8d69-5569eba61152-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.828350 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/bdd0cc15-b5bd-4703-8d69-5569eba61152-config\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.829917 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/bdd0cc15-b5bd-4703-8d69-5569eba61152-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.831015 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr2gg\" (UniqueName: \"kubernetes.io/projected/bdd0cc15-b5bd-4703-8d69-5569eba61152-kube-api-access-kr2gg\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.833448 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/bdd0cc15-b5bd-4703-8d69-5569eba61152-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.836700 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/bdd0cc15-b5bd-4703-8d69-5569eba61152-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"bdd0cc15-b5bd-4703-8d69-5569eba61152\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.913685 4874 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b65f997d-9930-4d62-881d-75ff90a7b2c0-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.914030 4874 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/b65f997d-9930-4d62-881d-75ff90a7b2c0-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.914043 4874 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b65f997d-9930-4d62-881d-75ff90a7b2c0-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.914054 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59zjl\" (UniqueName: \"kubernetes.io/projected/b65f997d-9930-4d62-881d-75ff90a7b2c0-kube-api-access-59zjl\") on node \"crc\" DevicePath \"\"" Feb 17 16:09:39 crc kubenswrapper[4874]: I0217 16:09:39.993349 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:40 crc kubenswrapper[4874]: I0217 16:09:40.487795 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ace4fc8-8ecc-4276-b682-fbd5a087fa45" path="/var/lib/kubelet/pods/6ace4fc8-8ecc-4276-b682-fbd5a087fa45/volumes" Feb 17 16:09:40 crc kubenswrapper[4874]: I0217 16:09:40.536003 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 17 16:09:40 crc kubenswrapper[4874]: W0217 16:09:40.673454 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbdd0cc15_b5bd_4703_8d69_5569eba61152.slice/crio-818d1363046feb3ef8474eeb30c4781710f0d83ae4bea30f325c70a585f5e3fb WatchSource:0}: Error finding container 818d1363046feb3ef8474eeb30c4781710f0d83ae4bea30f325c70a585f5e3fb: Status 404 returned error can't find the container with id 818d1363046feb3ef8474eeb30c4781710f0d83ae4bea30f325c70a585f5e3fb Feb 17 16:09:40 crc kubenswrapper[4874]: I0217 16:09:40.782773 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"bdd0cc15-b5bd-4703-8d69-5569eba61152","Type":"ContainerStarted","Data":"818d1363046feb3ef8474eeb30c4781710f0d83ae4bea30f325c70a585f5e3fb"} Feb 17 16:09:40 crc kubenswrapper[4874]: I0217 16:09:40.783847 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-8687f49f77-skmg5" event={"ID":"fbb6b1c5-d8a4-491a-9f8d-58254190e96e","Type":"ContainerStarted","Data":"32ce57523cf81fc5534dc6f326eca88c5162bd0b6cb30cb11f8d1e2dc3579a03"} Feb 17 16:09:40 crc kubenswrapper[4874]: I0217 16:09:40.783873 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-8687f49f77-skmg5" event={"ID":"fbb6b1c5-d8a4-491a-9f8d-58254190e96e","Type":"ContainerStarted","Data":"2c1eb2294c9f198005f04d89c72101175cc6afce8fe0174ecac1c0da629d5209"} Feb 17 16:09:40 crc kubenswrapper[4874]: I0217 16:09:40.785259 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-jnpkl" event={"ID":"b65f997d-9930-4d62-881d-75ff90a7b2c0","Type":"ContainerDied","Data":"46e2986521f3aced9f055c9f9b8ff85b92737620af8476fc1811b5cc57824bce"} Feb 17 16:09:40 crc kubenswrapper[4874]: I0217 16:09:40.785285 4874 scope.go:117] "RemoveContainer" containerID="e2a976409b4fd5757176ccdfdbed0b5d9717b18ddd506dee09fd834749515acd" Feb 17 16:09:40 crc kubenswrapper[4874]: I0217 16:09:40.785332 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-jnpkl" Feb 17 16:09:40 crc kubenswrapper[4874]: I0217 16:09:40.822421 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-8687f49f77-skmg5" podStartSLOduration=2.822405681 podStartE2EDuration="2.822405681s" podCreationTimestamp="2026-02-17 16:09:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:09:40.807482232 +0000 UTC m=+391.101870793" watchObservedRunningTime="2026-02-17 16:09:40.822405681 +0000 UTC m=+391.116794242" Feb 17 16:09:40 crc kubenswrapper[4874]: I0217 16:09:40.822570 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-656f6c855f-jnpkl"] Feb 17 16:09:40 crc kubenswrapper[4874]: I0217 16:09:40.827183 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-656f6c855f-jnpkl"] Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.182012 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5ff64f58b7-rgkxb"] Feb 17 16:09:41 crc kubenswrapper[4874]: E0217 16:09:41.182403 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ace4fc8-8ecc-4276-b682-fbd5a087fa45" containerName="controller-manager" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.182422 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ace4fc8-8ecc-4276-b682-fbd5a087fa45" containerName="controller-manager" Feb 17 16:09:41 crc kubenswrapper[4874]: E0217 16:09:41.182438 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b65f997d-9930-4d62-881d-75ff90a7b2c0" containerName="route-controller-manager" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.182449 4874 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="b65f997d-9930-4d62-881d-75ff90a7b2c0" containerName="route-controller-manager" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.182640 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="b65f997d-9930-4d62-881d-75ff90a7b2c0" containerName="route-controller-manager" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.182663 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ace4fc8-8ecc-4276-b682-fbd5a087fa45" containerName="controller-manager" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.183257 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5ff64f58b7-rgkxb" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.190883 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.191141 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.191311 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.191695 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.192222 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.193141 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b67fb5fbf-qlczm"] Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.194289 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b67fb5fbf-qlczm" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.197455 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.198462 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.198470 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.198556 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.198648 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.198652 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.206689 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5ff64f58b7-rgkxb"] Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.207047 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.208358 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.215348 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-7b67fb5fbf-qlczm"] Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.286155 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89d92da0-7801-4b45-ac53-b0aff0c453d7-proxy-ca-bundles\") pod \"controller-manager-5ff64f58b7-rgkxb\" (UID: \"89d92da0-7801-4b45-ac53-b0aff0c453d7\") " pod="openshift-controller-manager/controller-manager-5ff64f58b7-rgkxb" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.286219 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89d92da0-7801-4b45-ac53-b0aff0c453d7-serving-cert\") pod \"controller-manager-5ff64f58b7-rgkxb\" (UID: \"89d92da0-7801-4b45-ac53-b0aff0c453d7\") " pod="openshift-controller-manager/controller-manager-5ff64f58b7-rgkxb" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.286339 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89d92da0-7801-4b45-ac53-b0aff0c453d7-config\") pod \"controller-manager-5ff64f58b7-rgkxb\" (UID: \"89d92da0-7801-4b45-ac53-b0aff0c453d7\") " pod="openshift-controller-manager/controller-manager-5ff64f58b7-rgkxb" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.286490 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftk6j\" (UniqueName: \"kubernetes.io/projected/89d92da0-7801-4b45-ac53-b0aff0c453d7-kube-api-access-ftk6j\") pod \"controller-manager-5ff64f58b7-rgkxb\" (UID: \"89d92da0-7801-4b45-ac53-b0aff0c453d7\") " pod="openshift-controller-manager/controller-manager-5ff64f58b7-rgkxb" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.286612 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89d92da0-7801-4b45-ac53-b0aff0c453d7-client-ca\") pod \"controller-manager-5ff64f58b7-rgkxb\" (UID: \"89d92da0-7801-4b45-ac53-b0aff0c453d7\") " pod="openshift-controller-manager/controller-manager-5ff64f58b7-rgkxb" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.387718 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftk6j\" (UniqueName: \"kubernetes.io/projected/89d92da0-7801-4b45-ac53-b0aff0c453d7-kube-api-access-ftk6j\") pod \"controller-manager-5ff64f58b7-rgkxb\" (UID: \"89d92da0-7801-4b45-ac53-b0aff0c453d7\") " pod="openshift-controller-manager/controller-manager-5ff64f58b7-rgkxb" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.387777 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4dc93dad-e2f1-4fab-b461-1f592d5673ee-client-ca\") pod \"route-controller-manager-7b67fb5fbf-qlczm\" (UID: \"4dc93dad-e2f1-4fab-b461-1f592d5673ee\") " pod="openshift-route-controller-manager/route-controller-manager-7b67fb5fbf-qlczm" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.387828 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dc93dad-e2f1-4fab-b461-1f592d5673ee-config\") pod \"route-controller-manager-7b67fb5fbf-qlczm\" (UID: \"4dc93dad-e2f1-4fab-b461-1f592d5673ee\") " pod="openshift-route-controller-manager/route-controller-manager-7b67fb5fbf-qlczm" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.387859 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89d92da0-7801-4b45-ac53-b0aff0c453d7-client-ca\") pod \"controller-manager-5ff64f58b7-rgkxb\" (UID: \"89d92da0-7801-4b45-ac53-b0aff0c453d7\") " 
pod="openshift-controller-manager/controller-manager-5ff64f58b7-rgkxb" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.387921 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89d92da0-7801-4b45-ac53-b0aff0c453d7-proxy-ca-bundles\") pod \"controller-manager-5ff64f58b7-rgkxb\" (UID: \"89d92da0-7801-4b45-ac53-b0aff0c453d7\") " pod="openshift-controller-manager/controller-manager-5ff64f58b7-rgkxb" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.387947 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89d92da0-7801-4b45-ac53-b0aff0c453d7-serving-cert\") pod \"controller-manager-5ff64f58b7-rgkxb\" (UID: \"89d92da0-7801-4b45-ac53-b0aff0c453d7\") " pod="openshift-controller-manager/controller-manager-5ff64f58b7-rgkxb" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.388016 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4dc93dad-e2f1-4fab-b461-1f592d5673ee-serving-cert\") pod \"route-controller-manager-7b67fb5fbf-qlczm\" (UID: \"4dc93dad-e2f1-4fab-b461-1f592d5673ee\") " pod="openshift-route-controller-manager/route-controller-manager-7b67fb5fbf-qlczm" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.388048 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89d92da0-7801-4b45-ac53-b0aff0c453d7-config\") pod \"controller-manager-5ff64f58b7-rgkxb\" (UID: \"89d92da0-7801-4b45-ac53-b0aff0c453d7\") " pod="openshift-controller-manager/controller-manager-5ff64f58b7-rgkxb" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.388069 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqmdw\" (UniqueName: 
\"kubernetes.io/projected/4dc93dad-e2f1-4fab-b461-1f592d5673ee-kube-api-access-jqmdw\") pod \"route-controller-manager-7b67fb5fbf-qlczm\" (UID: \"4dc93dad-e2f1-4fab-b461-1f592d5673ee\") " pod="openshift-route-controller-manager/route-controller-manager-7b67fb5fbf-qlczm" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.389148 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89d92da0-7801-4b45-ac53-b0aff0c453d7-client-ca\") pod \"controller-manager-5ff64f58b7-rgkxb\" (UID: \"89d92da0-7801-4b45-ac53-b0aff0c453d7\") " pod="openshift-controller-manager/controller-manager-5ff64f58b7-rgkxb" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.390170 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89d92da0-7801-4b45-ac53-b0aff0c453d7-proxy-ca-bundles\") pod \"controller-manager-5ff64f58b7-rgkxb\" (UID: \"89d92da0-7801-4b45-ac53-b0aff0c453d7\") " pod="openshift-controller-manager/controller-manager-5ff64f58b7-rgkxb" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.393010 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89d92da0-7801-4b45-ac53-b0aff0c453d7-config\") pod \"controller-manager-5ff64f58b7-rgkxb\" (UID: \"89d92da0-7801-4b45-ac53-b0aff0c453d7\") " pod="openshift-controller-manager/controller-manager-5ff64f58b7-rgkxb" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.397796 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89d92da0-7801-4b45-ac53-b0aff0c453d7-serving-cert\") pod \"controller-manager-5ff64f58b7-rgkxb\" (UID: \"89d92da0-7801-4b45-ac53-b0aff0c453d7\") " pod="openshift-controller-manager/controller-manager-5ff64f58b7-rgkxb" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.415223 4874 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ftk6j\" (UniqueName: \"kubernetes.io/projected/89d92da0-7801-4b45-ac53-b0aff0c453d7-kube-api-access-ftk6j\") pod \"controller-manager-5ff64f58b7-rgkxb\" (UID: \"89d92da0-7801-4b45-ac53-b0aff0c453d7\") " pod="openshift-controller-manager/controller-manager-5ff64f58b7-rgkxb" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.489279 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4dc93dad-e2f1-4fab-b461-1f592d5673ee-client-ca\") pod \"route-controller-manager-7b67fb5fbf-qlczm\" (UID: \"4dc93dad-e2f1-4fab-b461-1f592d5673ee\") " pod="openshift-route-controller-manager/route-controller-manager-7b67fb5fbf-qlczm" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.489388 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dc93dad-e2f1-4fab-b461-1f592d5673ee-config\") pod \"route-controller-manager-7b67fb5fbf-qlczm\" (UID: \"4dc93dad-e2f1-4fab-b461-1f592d5673ee\") " pod="openshift-route-controller-manager/route-controller-manager-7b67fb5fbf-qlczm" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.489525 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4dc93dad-e2f1-4fab-b461-1f592d5673ee-serving-cert\") pod \"route-controller-manager-7b67fb5fbf-qlczm\" (UID: \"4dc93dad-e2f1-4fab-b461-1f592d5673ee\") " pod="openshift-route-controller-manager/route-controller-manager-7b67fb5fbf-qlczm" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.489574 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqmdw\" (UniqueName: \"kubernetes.io/projected/4dc93dad-e2f1-4fab-b461-1f592d5673ee-kube-api-access-jqmdw\") pod \"route-controller-manager-7b67fb5fbf-qlczm\" (UID: \"4dc93dad-e2f1-4fab-b461-1f592d5673ee\") " 
pod="openshift-route-controller-manager/route-controller-manager-7b67fb5fbf-qlczm" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.491177 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4dc93dad-e2f1-4fab-b461-1f592d5673ee-client-ca\") pod \"route-controller-manager-7b67fb5fbf-qlczm\" (UID: \"4dc93dad-e2f1-4fab-b461-1f592d5673ee\") " pod="openshift-route-controller-manager/route-controller-manager-7b67fb5fbf-qlczm" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.493352 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dc93dad-e2f1-4fab-b461-1f592d5673ee-config\") pod \"route-controller-manager-7b67fb5fbf-qlczm\" (UID: \"4dc93dad-e2f1-4fab-b461-1f592d5673ee\") " pod="openshift-route-controller-manager/route-controller-manager-7b67fb5fbf-qlczm" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.505468 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5ff64f58b7-rgkxb" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.506539 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4dc93dad-e2f1-4fab-b461-1f592d5673ee-serving-cert\") pod \"route-controller-manager-7b67fb5fbf-qlczm\" (UID: \"4dc93dad-e2f1-4fab-b461-1f592d5673ee\") " pod="openshift-route-controller-manager/route-controller-manager-7b67fb5fbf-qlczm" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.509830 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqmdw\" (UniqueName: \"kubernetes.io/projected/4dc93dad-e2f1-4fab-b461-1f592d5673ee-kube-api-access-jqmdw\") pod \"route-controller-manager-7b67fb5fbf-qlczm\" (UID: \"4dc93dad-e2f1-4fab-b461-1f592d5673ee\") " pod="openshift-route-controller-manager/route-controller-manager-7b67fb5fbf-qlczm" Feb 17 16:09:41 crc kubenswrapper[4874]: I0217 16:09:41.512881 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b67fb5fbf-qlczm" Feb 17 16:09:42 crc kubenswrapper[4874]: I0217 16:09:42.084414 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-mkpfd" Feb 17 16:09:42 crc kubenswrapper[4874]: I0217 16:09:42.142068 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-l5nms"] Feb 17 16:09:42 crc kubenswrapper[4874]: I0217 16:09:42.464175 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b65f997d-9930-4d62-881d-75ff90a7b2c0" path="/var/lib/kubelet/pods/b65f997d-9930-4d62-881d-75ff90a7b2c0/volumes" Feb 17 16:09:42 crc kubenswrapper[4874]: I0217 16:09:42.465540 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b67fb5fbf-qlczm"] Feb 17 16:09:42 crc kubenswrapper[4874]: I0217 16:09:42.594394 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5ff64f58b7-rgkxb"] Feb 17 16:09:42 crc kubenswrapper[4874]: W0217 16:09:42.684283 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4dc93dad_e2f1_4fab_b461_1f592d5673ee.slice/crio-d51f4dac0a30ffca71c87612c456f8b30df3aa129f58fc387e1ba4656a4dc305 WatchSource:0}: Error finding container d51f4dac0a30ffca71c87612c456f8b30df3aa129f58fc387e1ba4656a4dc305: Status 404 returned error can't find the container with id d51f4dac0a30ffca71c87612c456f8b30df3aa129f58fc387e1ba4656a4dc305 Feb 17 16:09:42 crc kubenswrapper[4874]: I0217 16:09:42.804154 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5ff64f58b7-rgkxb" 
event={"ID":"89d92da0-7801-4b45-ac53-b0aff0c453d7","Type":"ContainerStarted","Data":"702a7c82eb9735c266fc40006ed779d372b2b46a3122c840da7ba2b8dc2ffd93"} Feb 17 16:09:42 crc kubenswrapper[4874]: I0217 16:09:42.804986 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b67fb5fbf-qlczm" event={"ID":"4dc93dad-e2f1-4fab-b461-1f592d5673ee","Type":"ContainerStarted","Data":"d51f4dac0a30ffca71c87612c456f8b30df3aa129f58fc387e1ba4656a4dc305"} Feb 17 16:09:43 crc kubenswrapper[4874]: I0217 16:09:43.812982 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-69f8c984c7-wtsdw" event={"ID":"3a216c9c-8f23-46b4-b6f8-b1f24c73ed52","Type":"ContainerStarted","Data":"8e7101af2bb9227ada44456fa590c6f58f3c187003337ffce755909ab8cd25eb"} Feb 17 16:09:43 crc kubenswrapper[4874]: I0217 16:09:43.814840 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b67fb5fbf-qlczm" event={"ID":"4dc93dad-e2f1-4fab-b461-1f592d5673ee","Type":"ContainerStarted","Data":"f0ce16397cd3cb9e589aaff4c5050643435ec2035d470e9fb1a374c7736f9ec7"} Feb 17 16:09:43 crc kubenswrapper[4874]: I0217 16:09:43.815259 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7b67fb5fbf-qlczm" Feb 17 16:09:43 crc kubenswrapper[4874]: I0217 16:09:43.816740 4874 generic.go:334] "Generic (PLEG): container finished" podID="bdd0cc15-b5bd-4703-8d69-5569eba61152" containerID="1dec0e3eafbb609e5106ec8f31140da094168f08aa13f468cfbefdbda518be13" exitCode=0 Feb 17 16:09:43 crc kubenswrapper[4874]: I0217 16:09:43.816818 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"bdd0cc15-b5bd-4703-8d69-5569eba61152","Type":"ContainerDied","Data":"1dec0e3eafbb609e5106ec8f31140da094168f08aa13f468cfbefdbda518be13"} Feb 17 16:09:43 crc 
kubenswrapper[4874]: I0217 16:09:43.819731 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7b67fb5fbf-qlczm" Feb 17 16:09:43 crc kubenswrapper[4874]: I0217 16:09:43.820045 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5ff64f58b7-rgkxb" event={"ID":"89d92da0-7801-4b45-ac53-b0aff0c453d7","Type":"ContainerStarted","Data":"09348051a28a31fc06a53c66f5115d902db6537d0339293ea85fb46431c2ccb5"} Feb 17 16:09:43 crc kubenswrapper[4874]: I0217 16:09:43.820363 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5ff64f58b7-rgkxb" Feb 17 16:09:43 crc kubenswrapper[4874]: I0217 16:09:43.825237 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8648d6cb6d-f2ztn" event={"ID":"f960b1c3-0d01-4d4a-afe9-647ad835f4ba","Type":"ContainerStarted","Data":"d5b4cf19615b288bff276f78210ca92f628a9853879c525f24d1643573c83d8f"} Feb 17 16:09:43 crc kubenswrapper[4874]: I0217 16:09:43.825264 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8648d6cb6d-f2ztn" event={"ID":"f960b1c3-0d01-4d4a-afe9-647ad835f4ba","Type":"ContainerStarted","Data":"f56b235209cab1f980cc79707a364584e83f7b30a1ad21885806d1d691a5a25f"} Feb 17 16:09:43 crc kubenswrapper[4874]: I0217 16:09:43.825274 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8648d6cb6d-f2ztn" event={"ID":"f960b1c3-0d01-4d4a-afe9-647ad835f4ba","Type":"ContainerStarted","Data":"fbfaf9bf7f57dbd49bbba183cd7e8609a4b3279631a6b368ffabcc4de0dc2d53"} Feb 17 16:09:43 crc kubenswrapper[4874]: I0217 16:09:43.826979 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-8648d6cb6d-f2ztn" Feb 17 16:09:43 crc kubenswrapper[4874]: I0217 16:09:43.827041 4874 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5ff64f58b7-rgkxb" Feb 17 16:09:43 crc kubenswrapper[4874]: I0217 16:09:43.834875 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"137ce3c3-521e-4df1-8294-7900b32e2886","Type":"ContainerStarted","Data":"2f66078f7a29999db2147dd36f6bf524f4685bb144f46663583b6920186e3f5c"} Feb 17 16:09:43 crc kubenswrapper[4874]: I0217 16:09:43.834964 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"137ce3c3-521e-4df1-8294-7900b32e2886","Type":"ContainerStarted","Data":"ef264827ca2ca68b86ea904476ce4ca64f2a1180eb84c81543d9f8eb7ed695af"} Feb 17 16:09:43 crc kubenswrapper[4874]: I0217 16:09:43.834991 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"137ce3c3-521e-4df1-8294-7900b32e2886","Type":"ContainerStarted","Data":"1716a03f1851c12090a8339e27b8f3f6154e7a1377728d23ef6ce02a096217d0"} Feb 17 16:09:43 crc kubenswrapper[4874]: I0217 16:09:43.835010 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"137ce3c3-521e-4df1-8294-7900b32e2886","Type":"ContainerStarted","Data":"99c6e15d8ea7cce466e030fcddf8d1f26559f0f056f032da43acb8d5b9f9b4b2"} Feb 17 16:09:43 crc kubenswrapper[4874]: I0217 16:09:43.835025 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"137ce3c3-521e-4df1-8294-7900b32e2886","Type":"ContainerStarted","Data":"dd7723ef33e658d558476150016fec9e732a3137c73463fbe358ad54aac35fdc"} Feb 17 16:09:43 crc kubenswrapper[4874]: I0217 16:09:43.835043 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"137ce3c3-521e-4df1-8294-7900b32e2886","Type":"ContainerStarted","Data":"2ab0474b895473603c546c4795eea5aa50c8e499633a535e21a0b95728b0b6c6"} Feb 17 16:09:43 crc kubenswrapper[4874]: I0217 16:09:43.836519 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-694c74f7bf-dr4fv" event={"ID":"66f020dd-a67e-42d0-8f03-a8c12dee1dbd","Type":"ContainerStarted","Data":"d29768e26e19cb61a0b3c7fc2b7f496c7ae94549f9f2d809f8aff4cefa901660"} Feb 17 16:09:43 crc kubenswrapper[4874]: I0217 16:09:43.836930 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-694c74f7bf-dr4fv" Feb 17 16:09:43 crc kubenswrapper[4874]: I0217 16:09:43.839676 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-69f8c984c7-wtsdw" podStartSLOduration=2.244859856 podStartE2EDuration="5.839660617s" podCreationTimestamp="2026-02-17 16:09:38 +0000 UTC" firstStartedPulling="2026-02-17 16:09:39.096698168 +0000 UTC m=+389.391086729" lastFinishedPulling="2026-02-17 16:09:42.691498919 +0000 UTC m=+392.985887490" observedRunningTime="2026-02-17 16:09:43.834282201 +0000 UTC m=+394.128670762" watchObservedRunningTime="2026-02-17 16:09:43.839660617 +0000 UTC m=+394.134049188" Feb 17 16:09:43 crc kubenswrapper[4874]: I0217 16:09:43.846966 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-8648d6cb6d-f2ztn" Feb 17 16:09:43 crc kubenswrapper[4874]: I0217 16:09:43.847160 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-694c74f7bf-dr4fv" Feb 17 16:09:43 crc kubenswrapper[4874]: I0217 16:09:43.862908 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7b67fb5fbf-qlczm" podStartSLOduration=4.862887678 podStartE2EDuration="4.862887678s" 
podCreationTimestamp="2026-02-17 16:09:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:09:43.858321972 +0000 UTC m=+394.152710573" watchObservedRunningTime="2026-02-17 16:09:43.862887678 +0000 UTC m=+394.157276259" Feb 17 16:09:43 crc kubenswrapper[4874]: I0217 16:09:43.892672 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-8648d6cb6d-f2ztn" podStartSLOduration=3.717718846 podStartE2EDuration="9.892650045s" podCreationTimestamp="2026-02-17 16:09:34 +0000 UTC" firstStartedPulling="2026-02-17 16:09:36.012238631 +0000 UTC m=+386.306627192" lastFinishedPulling="2026-02-17 16:09:42.18716983 +0000 UTC m=+392.481558391" observedRunningTime="2026-02-17 16:09:43.891676801 +0000 UTC m=+394.186065452" watchObservedRunningTime="2026-02-17 16:09:43.892650045 +0000 UTC m=+394.187038636" Feb 17 16:09:43 crc kubenswrapper[4874]: I0217 16:09:43.953262 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5ff64f58b7-rgkxb" podStartSLOduration=4.953237667 podStartE2EDuration="4.953237667s" podCreationTimestamp="2026-02-17 16:09:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:09:43.949328287 +0000 UTC m=+394.243716878" watchObservedRunningTime="2026-02-17 16:09:43.953237667 +0000 UTC m=+394.247626238" Feb 17 16:09:43 crc kubenswrapper[4874]: I0217 16:09:43.975669 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-694c74f7bf-dr4fv" podStartSLOduration=2.792905443 podStartE2EDuration="5.97498156s" podCreationTimestamp="2026-02-17 16:09:38 +0000 UTC" firstStartedPulling="2026-02-17 16:09:39.576651022 +0000 UTC m=+389.871039583" lastFinishedPulling="2026-02-17 16:09:42.758727129 
+0000 UTC m=+393.053115700" observedRunningTime="2026-02-17 16:09:43.971778658 +0000 UTC m=+394.266167229" watchObservedRunningTime="2026-02-17 16:09:43.97498156 +0000 UTC m=+394.269370161" Feb 17 16:09:44 crc kubenswrapper[4874]: I0217 16:09:44.005287 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=3.810722583 podStartE2EDuration="11.00526621s" podCreationTimestamp="2026-02-17 16:09:33 +0000 UTC" firstStartedPulling="2026-02-17 16:09:35.005519301 +0000 UTC m=+385.299907862" lastFinishedPulling="2026-02-17 16:09:42.200062928 +0000 UTC m=+392.494451489" observedRunningTime="2026-02-17 16:09:43.998874038 +0000 UTC m=+394.293262599" watchObservedRunningTime="2026-02-17 16:09:44.00526621 +0000 UTC m=+394.299654791" Feb 17 16:09:47 crc kubenswrapper[4874]: I0217 16:09:47.867707 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"bdd0cc15-b5bd-4703-8d69-5569eba61152","Type":"ContainerStarted","Data":"bba6ddf56f68b7b034afa9d1cb5aa8dac5be91c231a0a0b85a93f2f25fff2972"} Feb 17 16:09:47 crc kubenswrapper[4874]: I0217 16:09:47.867957 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"bdd0cc15-b5bd-4703-8d69-5569eba61152","Type":"ContainerStarted","Data":"6053b9ea9feb0c891bb349fa6dee419b2da5ddade6b5d293aff95931dd558bcf"} Feb 17 16:09:48 crc kubenswrapper[4874]: I0217 16:09:48.882454 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"bdd0cc15-b5bd-4703-8d69-5569eba61152","Type":"ContainerStarted","Data":"5c9a0541da94f81e054b1869df6e175667ea571fb7880cd211a508d2cb05bf76"} Feb 17 16:09:48 crc kubenswrapper[4874]: I0217 16:09:48.883673 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"bdd0cc15-b5bd-4703-8d69-5569eba61152","Type":"ContainerStarted","Data":"86daa0077037885119e23c545f5fde83018f48e9d442ce8b14db12b013e1beb7"} Feb 17 16:09:48 crc kubenswrapper[4874]: I0217 16:09:48.883863 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"bdd0cc15-b5bd-4703-8d69-5569eba61152","Type":"ContainerStarted","Data":"3ce4a4add71c415dee72ca73e11158ccf4c34faee3751138b6bb27b926661a42"} Feb 17 16:09:48 crc kubenswrapper[4874]: I0217 16:09:48.884033 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"bdd0cc15-b5bd-4703-8d69-5569eba61152","Type":"ContainerStarted","Data":"aa5f4ed3f72d302c1c042487b9c04cbcb7b65dc68311ecf8c99d5bac756a0958"} Feb 17 16:09:48 crc kubenswrapper[4874]: I0217 16:09:48.943042 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=6.267030228 podStartE2EDuration="9.94301519s" podCreationTimestamp="2026-02-17 16:09:39 +0000 UTC" firstStartedPulling="2026-02-17 16:09:43.818289634 +0000 UTC m=+394.112678195" lastFinishedPulling="2026-02-17 16:09:47.494274596 +0000 UTC m=+397.788663157" observedRunningTime="2026-02-17 16:09:48.932214935 +0000 UTC m=+399.226603556" watchObservedRunningTime="2026-02-17 16:09:48.94301519 +0000 UTC m=+399.237403791" Feb 17 16:09:49 crc kubenswrapper[4874]: I0217 16:09:49.286678 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-8687f49f77-skmg5" Feb 17 16:09:49 crc kubenswrapper[4874]: I0217 16:09:49.286791 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-8687f49f77-skmg5" Feb 17 16:09:49 crc kubenswrapper[4874]: I0217 16:09:49.294950 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-8687f49f77-skmg5" Feb 17 16:09:49 crc kubenswrapper[4874]: 
I0217 16:09:49.897717 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-8687f49f77-skmg5" Feb 17 16:09:49 crc kubenswrapper[4874]: I0217 16:09:49.981104 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-6wpw5"] Feb 17 16:09:49 crc kubenswrapper[4874]: I0217 16:09:49.993829 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:09:57 crc kubenswrapper[4874]: I0217 16:09:57.725232 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:09:57 crc kubenswrapper[4874]: I0217 16:09:57.726170 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:09:58 crc kubenswrapper[4874]: I0217 16:09:58.569621 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-69f8c984c7-wtsdw" Feb 17 16:09:58 crc kubenswrapper[4874]: I0217 16:09:58.572332 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-69f8c984c7-wtsdw" Feb 17 16:10:07 crc kubenswrapper[4874]: I0217 16:10:07.182785 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" podUID="bbe005ea-f697-473a-8578-91453c7a8331" containerName="registry" containerID="cri-o://a3732f7d252e894677a57aa3aa829c337e1801d023eec01db21f17c45cae52b1" gracePeriod=30 
Feb 17 16:10:07 crc kubenswrapper[4874]: I0217 16:10:07.663731 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:10:07 crc kubenswrapper[4874]: I0217 16:10:07.710922 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bbe005ea-f697-473a-8578-91453c7a8331-registry-tls\") pod \"bbe005ea-f697-473a-8578-91453c7a8331\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " Feb 17 16:10:07 crc kubenswrapper[4874]: I0217 16:10:07.711005 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bbe005ea-f697-473a-8578-91453c7a8331-trusted-ca\") pod \"bbe005ea-f697-473a-8578-91453c7a8331\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " Feb 17 16:10:07 crc kubenswrapper[4874]: I0217 16:10:07.711046 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bbe005ea-f697-473a-8578-91453c7a8331-registry-certificates\") pod \"bbe005ea-f697-473a-8578-91453c7a8331\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " Feb 17 16:10:07 crc kubenswrapper[4874]: I0217 16:10:07.711118 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bbe005ea-f697-473a-8578-91453c7a8331-installation-pull-secrets\") pod \"bbe005ea-f697-473a-8578-91453c7a8331\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " Feb 17 16:10:07 crc kubenswrapper[4874]: I0217 16:10:07.711226 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bbe005ea-f697-473a-8578-91453c7a8331-ca-trust-extracted\") pod \"bbe005ea-f697-473a-8578-91453c7a8331\" (UID: 
\"bbe005ea-f697-473a-8578-91453c7a8331\") " Feb 17 16:10:07 crc kubenswrapper[4874]: I0217 16:10:07.711306 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bbe005ea-f697-473a-8578-91453c7a8331-bound-sa-token\") pod \"bbe005ea-f697-473a-8578-91453c7a8331\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " Feb 17 16:10:07 crc kubenswrapper[4874]: I0217 16:10:07.711338 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pknpq\" (UniqueName: \"kubernetes.io/projected/bbe005ea-f697-473a-8578-91453c7a8331-kube-api-access-pknpq\") pod \"bbe005ea-f697-473a-8578-91453c7a8331\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " Feb 17 16:10:07 crc kubenswrapper[4874]: I0217 16:10:07.711530 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"bbe005ea-f697-473a-8578-91453c7a8331\" (UID: \"bbe005ea-f697-473a-8578-91453c7a8331\") " Feb 17 16:10:07 crc kubenswrapper[4874]: I0217 16:10:07.714579 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbe005ea-f697-473a-8578-91453c7a8331-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bbe005ea-f697-473a-8578-91453c7a8331" (UID: "bbe005ea-f697-473a-8578-91453c7a8331"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:10:07 crc kubenswrapper[4874]: I0217 16:10:07.715274 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbe005ea-f697-473a-8578-91453c7a8331-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "bbe005ea-f697-473a-8578-91453c7a8331" (UID: "bbe005ea-f697-473a-8578-91453c7a8331"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:10:07 crc kubenswrapper[4874]: I0217 16:10:07.719968 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbe005ea-f697-473a-8578-91453c7a8331-kube-api-access-pknpq" (OuterVolumeSpecName: "kube-api-access-pknpq") pod "bbe005ea-f697-473a-8578-91453c7a8331" (UID: "bbe005ea-f697-473a-8578-91453c7a8331"). InnerVolumeSpecName "kube-api-access-pknpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:10:07 crc kubenswrapper[4874]: I0217 16:10:07.723160 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbe005ea-f697-473a-8578-91453c7a8331-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "bbe005ea-f697-473a-8578-91453c7a8331" (UID: "bbe005ea-f697-473a-8578-91453c7a8331"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:10:07 crc kubenswrapper[4874]: I0217 16:10:07.727760 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbe005ea-f697-473a-8578-91453c7a8331-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bbe005ea-f697-473a-8578-91453c7a8331" (UID: "bbe005ea-f697-473a-8578-91453c7a8331"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:10:07 crc kubenswrapper[4874]: I0217 16:10:07.728638 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbe005ea-f697-473a-8578-91453c7a8331-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "bbe005ea-f697-473a-8578-91453c7a8331" (UID: "bbe005ea-f697-473a-8578-91453c7a8331"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:10:07 crc kubenswrapper[4874]: I0217 16:10:07.732532 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "bbe005ea-f697-473a-8578-91453c7a8331" (UID: "bbe005ea-f697-473a-8578-91453c7a8331"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:10:07 crc kubenswrapper[4874]: I0217 16:10:07.748513 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbe005ea-f697-473a-8578-91453c7a8331-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "bbe005ea-f697-473a-8578-91453c7a8331" (UID: "bbe005ea-f697-473a-8578-91453c7a8331"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:10:07 crc kubenswrapper[4874]: I0217 16:10:07.813932 4874 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bbe005ea-f697-473a-8578-91453c7a8331-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:07 crc kubenswrapper[4874]: I0217 16:10:07.813978 4874 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bbe005ea-f697-473a-8578-91453c7a8331-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:07 crc kubenswrapper[4874]: I0217 16:10:07.813997 4874 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bbe005ea-f697-473a-8578-91453c7a8331-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:07 crc kubenswrapper[4874]: I0217 16:10:07.814017 4874 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/bbe005ea-f697-473a-8578-91453c7a8331-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:07 crc kubenswrapper[4874]: I0217 16:10:07.814034 4874 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bbe005ea-f697-473a-8578-91453c7a8331-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:07 crc kubenswrapper[4874]: I0217 16:10:07.814052 4874 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bbe005ea-f697-473a-8578-91453c7a8331-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:07 crc kubenswrapper[4874]: I0217 16:10:07.814067 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pknpq\" (UniqueName: \"kubernetes.io/projected/bbe005ea-f697-473a-8578-91453c7a8331-kube-api-access-pknpq\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:08 crc kubenswrapper[4874]: I0217 16:10:08.031576 4874 generic.go:334] "Generic (PLEG): container finished" podID="bbe005ea-f697-473a-8578-91453c7a8331" containerID="a3732f7d252e894677a57aa3aa829c337e1801d023eec01db21f17c45cae52b1" exitCode=0 Feb 17 16:10:08 crc kubenswrapper[4874]: I0217 16:10:08.031664 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" event={"ID":"bbe005ea-f697-473a-8578-91453c7a8331","Type":"ContainerDied","Data":"a3732f7d252e894677a57aa3aa829c337e1801d023eec01db21f17c45cae52b1"} Feb 17 16:10:08 crc kubenswrapper[4874]: I0217 16:10:08.031719 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" event={"ID":"bbe005ea-f697-473a-8578-91453c7a8331","Type":"ContainerDied","Data":"16e8c070d81d29de4a961b9aecf071ac8819bbb9b53739a682a65f15828295b4"} Feb 17 16:10:08 crc kubenswrapper[4874]: I0217 16:10:08.031757 4874 scope.go:117] "RemoveContainer" 
containerID="a3732f7d252e894677a57aa3aa829c337e1801d023eec01db21f17c45cae52b1" Feb 17 16:10:08 crc kubenswrapper[4874]: I0217 16:10:08.032169 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-l5nms" Feb 17 16:10:08 crc kubenswrapper[4874]: I0217 16:10:08.059870 4874 scope.go:117] "RemoveContainer" containerID="a3732f7d252e894677a57aa3aa829c337e1801d023eec01db21f17c45cae52b1" Feb 17 16:10:08 crc kubenswrapper[4874]: E0217 16:10:08.063072 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3732f7d252e894677a57aa3aa829c337e1801d023eec01db21f17c45cae52b1\": container with ID starting with a3732f7d252e894677a57aa3aa829c337e1801d023eec01db21f17c45cae52b1 not found: ID does not exist" containerID="a3732f7d252e894677a57aa3aa829c337e1801d023eec01db21f17c45cae52b1" Feb 17 16:10:08 crc kubenswrapper[4874]: I0217 16:10:08.063207 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3732f7d252e894677a57aa3aa829c337e1801d023eec01db21f17c45cae52b1"} err="failed to get container status \"a3732f7d252e894677a57aa3aa829c337e1801d023eec01db21f17c45cae52b1\": rpc error: code = NotFound desc = could not find container \"a3732f7d252e894677a57aa3aa829c337e1801d023eec01db21f17c45cae52b1\": container with ID starting with a3732f7d252e894677a57aa3aa829c337e1801d023eec01db21f17c45cae52b1 not found: ID does not exist" Feb 17 16:10:08 crc kubenswrapper[4874]: I0217 16:10:08.074769 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-l5nms"] Feb 17 16:10:08 crc kubenswrapper[4874]: I0217 16:10:08.083616 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-l5nms"] Feb 17 16:10:08 crc kubenswrapper[4874]: I0217 16:10:08.471250 4874 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="bbe005ea-f697-473a-8578-91453c7a8331" path="/var/lib/kubelet/pods/bbe005ea-f697-473a-8578-91453c7a8331/volumes" Feb 17 16:10:15 crc kubenswrapper[4874]: I0217 16:10:15.047668 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-6wpw5" podUID="cfccd2a3-037d-4b17-a269-952847ad533a" containerName="console" containerID="cri-o://0f00c3811a42ffc5d137518f5a56fd28c6406f9b90b8fa3156b611ace4ca9400" gracePeriod=15 Feb 17 16:10:15 crc kubenswrapper[4874]: I0217 16:10:15.600369 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-6wpw5_cfccd2a3-037d-4b17-a269-952847ad533a/console/0.log" Feb 17 16:10:15 crc kubenswrapper[4874]: I0217 16:10:15.600717 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-6wpw5" Feb 17 16:10:15 crc kubenswrapper[4874]: I0217 16:10:15.631290 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cfccd2a3-037d-4b17-a269-952847ad533a-oauth-serving-cert\") pod \"cfccd2a3-037d-4b17-a269-952847ad533a\" (UID: \"cfccd2a3-037d-4b17-a269-952847ad533a\") " Feb 17 16:10:15 crc kubenswrapper[4874]: I0217 16:10:15.631409 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cfccd2a3-037d-4b17-a269-952847ad533a-service-ca\") pod \"cfccd2a3-037d-4b17-a269-952847ad533a\" (UID: \"cfccd2a3-037d-4b17-a269-952847ad533a\") " Feb 17 16:10:15 crc kubenswrapper[4874]: I0217 16:10:15.631481 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cghk\" (UniqueName: \"kubernetes.io/projected/cfccd2a3-037d-4b17-a269-952847ad533a-kube-api-access-7cghk\") pod \"cfccd2a3-037d-4b17-a269-952847ad533a\" (UID: \"cfccd2a3-037d-4b17-a269-952847ad533a\") " Feb 
17 16:10:15 crc kubenswrapper[4874]: I0217 16:10:15.631514 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cfccd2a3-037d-4b17-a269-952847ad533a-console-config\") pod \"cfccd2a3-037d-4b17-a269-952847ad533a\" (UID: \"cfccd2a3-037d-4b17-a269-952847ad533a\") " Feb 17 16:10:15 crc kubenswrapper[4874]: I0217 16:10:15.631534 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cfccd2a3-037d-4b17-a269-952847ad533a-console-serving-cert\") pod \"cfccd2a3-037d-4b17-a269-952847ad533a\" (UID: \"cfccd2a3-037d-4b17-a269-952847ad533a\") " Feb 17 16:10:15 crc kubenswrapper[4874]: I0217 16:10:15.631571 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cfccd2a3-037d-4b17-a269-952847ad533a-trusted-ca-bundle\") pod \"cfccd2a3-037d-4b17-a269-952847ad533a\" (UID: \"cfccd2a3-037d-4b17-a269-952847ad533a\") " Feb 17 16:10:15 crc kubenswrapper[4874]: I0217 16:10:15.631606 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cfccd2a3-037d-4b17-a269-952847ad533a-console-oauth-config\") pod \"cfccd2a3-037d-4b17-a269-952847ad533a\" (UID: \"cfccd2a3-037d-4b17-a269-952847ad533a\") " Feb 17 16:10:15 crc kubenswrapper[4874]: I0217 16:10:15.633309 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfccd2a3-037d-4b17-a269-952847ad533a-service-ca" (OuterVolumeSpecName: "service-ca") pod "cfccd2a3-037d-4b17-a269-952847ad533a" (UID: "cfccd2a3-037d-4b17-a269-952847ad533a"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:10:15 crc kubenswrapper[4874]: I0217 16:10:15.633794 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfccd2a3-037d-4b17-a269-952847ad533a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "cfccd2a3-037d-4b17-a269-952847ad533a" (UID: "cfccd2a3-037d-4b17-a269-952847ad533a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:10:15 crc kubenswrapper[4874]: I0217 16:10:15.634337 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfccd2a3-037d-4b17-a269-952847ad533a-console-config" (OuterVolumeSpecName: "console-config") pod "cfccd2a3-037d-4b17-a269-952847ad533a" (UID: "cfccd2a3-037d-4b17-a269-952847ad533a"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:10:15 crc kubenswrapper[4874]: I0217 16:10:15.634802 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfccd2a3-037d-4b17-a269-952847ad533a-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "cfccd2a3-037d-4b17-a269-952847ad533a" (UID: "cfccd2a3-037d-4b17-a269-952847ad533a"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:10:15 crc kubenswrapper[4874]: I0217 16:10:15.698500 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfccd2a3-037d-4b17-a269-952847ad533a-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "cfccd2a3-037d-4b17-a269-952847ad533a" (UID: "cfccd2a3-037d-4b17-a269-952847ad533a"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:10:15 crc kubenswrapper[4874]: I0217 16:10:15.701653 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfccd2a3-037d-4b17-a269-952847ad533a-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "cfccd2a3-037d-4b17-a269-952847ad533a" (UID: "cfccd2a3-037d-4b17-a269-952847ad533a"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:10:15 crc kubenswrapper[4874]: I0217 16:10:15.701896 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfccd2a3-037d-4b17-a269-952847ad533a-kube-api-access-7cghk" (OuterVolumeSpecName: "kube-api-access-7cghk") pod "cfccd2a3-037d-4b17-a269-952847ad533a" (UID: "cfccd2a3-037d-4b17-a269-952847ad533a"). InnerVolumeSpecName "kube-api-access-7cghk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:10:15 crc kubenswrapper[4874]: I0217 16:10:15.732733 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cghk\" (UniqueName: \"kubernetes.io/projected/cfccd2a3-037d-4b17-a269-952847ad533a-kube-api-access-7cghk\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:15 crc kubenswrapper[4874]: I0217 16:10:15.732769 4874 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cfccd2a3-037d-4b17-a269-952847ad533a-console-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:15 crc kubenswrapper[4874]: I0217 16:10:15.732780 4874 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cfccd2a3-037d-4b17-a269-952847ad533a-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:15 crc kubenswrapper[4874]: I0217 16:10:15.732789 4874 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/cfccd2a3-037d-4b17-a269-952847ad533a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:15 crc kubenswrapper[4874]: I0217 16:10:15.732799 4874 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cfccd2a3-037d-4b17-a269-952847ad533a-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:15 crc kubenswrapper[4874]: I0217 16:10:15.732808 4874 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cfccd2a3-037d-4b17-a269-952847ad533a-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:15 crc kubenswrapper[4874]: I0217 16:10:15.732820 4874 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cfccd2a3-037d-4b17-a269-952847ad533a-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:16 crc kubenswrapper[4874]: I0217 16:10:16.094399 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-6wpw5_cfccd2a3-037d-4b17-a269-952847ad533a/console/0.log" Feb 17 16:10:16 crc kubenswrapper[4874]: I0217 16:10:16.094474 4874 generic.go:334] "Generic (PLEG): container finished" podID="cfccd2a3-037d-4b17-a269-952847ad533a" containerID="0f00c3811a42ffc5d137518f5a56fd28c6406f9b90b8fa3156b611ace4ca9400" exitCode=2 Feb 17 16:10:16 crc kubenswrapper[4874]: I0217 16:10:16.094513 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6wpw5" event={"ID":"cfccd2a3-037d-4b17-a269-952847ad533a","Type":"ContainerDied","Data":"0f00c3811a42ffc5d137518f5a56fd28c6406f9b90b8fa3156b611ace4ca9400"} Feb 17 16:10:16 crc kubenswrapper[4874]: I0217 16:10:16.094561 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6wpw5" 
event={"ID":"cfccd2a3-037d-4b17-a269-952847ad533a","Type":"ContainerDied","Data":"b712a4af892e6dc88bb6d9dcc810f0834863fb50a9db5bb02a5dc2cb196b5096"} Feb 17 16:10:16 crc kubenswrapper[4874]: I0217 16:10:16.094560 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-6wpw5" Feb 17 16:10:16 crc kubenswrapper[4874]: I0217 16:10:16.094582 4874 scope.go:117] "RemoveContainer" containerID="0f00c3811a42ffc5d137518f5a56fd28c6406f9b90b8fa3156b611ace4ca9400" Feb 17 16:10:16 crc kubenswrapper[4874]: I0217 16:10:16.110923 4874 scope.go:117] "RemoveContainer" containerID="0f00c3811a42ffc5d137518f5a56fd28c6406f9b90b8fa3156b611ace4ca9400" Feb 17 16:10:16 crc kubenswrapper[4874]: E0217 16:10:16.111376 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f00c3811a42ffc5d137518f5a56fd28c6406f9b90b8fa3156b611ace4ca9400\": container with ID starting with 0f00c3811a42ffc5d137518f5a56fd28c6406f9b90b8fa3156b611ace4ca9400 not found: ID does not exist" containerID="0f00c3811a42ffc5d137518f5a56fd28c6406f9b90b8fa3156b611ace4ca9400" Feb 17 16:10:16 crc kubenswrapper[4874]: I0217 16:10:16.111417 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f00c3811a42ffc5d137518f5a56fd28c6406f9b90b8fa3156b611ace4ca9400"} err="failed to get container status \"0f00c3811a42ffc5d137518f5a56fd28c6406f9b90b8fa3156b611ace4ca9400\": rpc error: code = NotFound desc = could not find container \"0f00c3811a42ffc5d137518f5a56fd28c6406f9b90b8fa3156b611ace4ca9400\": container with ID starting with 0f00c3811a42ffc5d137518f5a56fd28c6406f9b90b8fa3156b611ace4ca9400 not found: ID does not exist" Feb 17 16:10:16 crc kubenswrapper[4874]: I0217 16:10:16.132662 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-6wpw5"] Feb 17 16:10:16 crc kubenswrapper[4874]: I0217 16:10:16.136241 4874 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-6wpw5"] Feb 17 16:10:16 crc kubenswrapper[4874]: I0217 16:10:16.473240 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfccd2a3-037d-4b17-a269-952847ad533a" path="/var/lib/kubelet/pods/cfccd2a3-037d-4b17-a269-952847ad533a/volumes" Feb 17 16:10:18 crc kubenswrapper[4874]: I0217 16:10:18.574745 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-69f8c984c7-wtsdw" Feb 17 16:10:18 crc kubenswrapper[4874]: I0217 16:10:18.578382 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-69f8c984c7-wtsdw" Feb 17 16:10:27 crc kubenswrapper[4874]: I0217 16:10:27.727626 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:10:27 crc kubenswrapper[4874]: I0217 16:10:27.728434 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:10:27 crc kubenswrapper[4874]: I0217 16:10:27.728501 4874 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" Feb 17 16:10:27 crc kubenswrapper[4874]: I0217 16:10:27.729397 4874 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bcc2c8959b8db77cff7d7da089aecb1677ca5a83553a4d055beb1eed3bde2fdd"} 
pod="openshift-machine-config-operator/machine-config-daemon-cccdg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:10:27 crc kubenswrapper[4874]: I0217 16:10:27.729498 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" containerID="cri-o://bcc2c8959b8db77cff7d7da089aecb1677ca5a83553a4d055beb1eed3bde2fdd" gracePeriod=600 Feb 17 16:10:28 crc kubenswrapper[4874]: I0217 16:10:28.547382 4874 generic.go:334] "Generic (PLEG): container finished" podID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerID="bcc2c8959b8db77cff7d7da089aecb1677ca5a83553a4d055beb1eed3bde2fdd" exitCode=0 Feb 17 16:10:28 crc kubenswrapper[4874]: I0217 16:10:28.547492 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerDied","Data":"bcc2c8959b8db77cff7d7da089aecb1677ca5a83553a4d055beb1eed3bde2fdd"} Feb 17 16:10:28 crc kubenswrapper[4874]: I0217 16:10:28.548175 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerStarted","Data":"5c051c0004b244e4c0ce127d058c5599bf72e06e8786ebe01293d1051eeff494"} Feb 17 16:10:28 crc kubenswrapper[4874]: I0217 16:10:28.548265 4874 scope.go:117] "RemoveContainer" containerID="5a56172a01d8a118c7b8ed0bfcef586738d47583c8b6127b67e6f4aaedeba141" Feb 17 16:10:39 crc kubenswrapper[4874]: I0217 16:10:39.994383 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:10:40 crc kubenswrapper[4874]: I0217 16:10:40.033802 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:10:40 crc kubenswrapper[4874]: I0217 16:10:40.667486 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:10:56 crc kubenswrapper[4874]: I0217 16:10:56.939708 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-65c4f977c4-rpvsb"] Feb 17 16:10:56 crc kubenswrapper[4874]: E0217 16:10:56.940574 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfccd2a3-037d-4b17-a269-952847ad533a" containerName="console" Feb 17 16:10:56 crc kubenswrapper[4874]: I0217 16:10:56.940595 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfccd2a3-037d-4b17-a269-952847ad533a" containerName="console" Feb 17 16:10:56 crc kubenswrapper[4874]: E0217 16:10:56.940628 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbe005ea-f697-473a-8578-91453c7a8331" containerName="registry" Feb 17 16:10:56 crc kubenswrapper[4874]: I0217 16:10:56.940640 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbe005ea-f697-473a-8578-91453c7a8331" containerName="registry" Feb 17 16:10:56 crc kubenswrapper[4874]: I0217 16:10:56.940881 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbe005ea-f697-473a-8578-91453c7a8331" containerName="registry" Feb 17 16:10:56 crc kubenswrapper[4874]: I0217 16:10:56.940910 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfccd2a3-037d-4b17-a269-952847ad533a" containerName="console" Feb 17 16:10:56 crc kubenswrapper[4874]: I0217 16:10:56.941764 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-65c4f977c4-rpvsb" Feb 17 16:10:56 crc kubenswrapper[4874]: I0217 16:10:56.953114 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-65c4f977c4-rpvsb"] Feb 17 16:10:57 crc kubenswrapper[4874]: I0217 16:10:57.118540 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07fc5262-d078-4ff8-aa96-460615fbd47d-trusted-ca-bundle\") pod \"console-65c4f977c4-rpvsb\" (UID: \"07fc5262-d078-4ff8-aa96-460615fbd47d\") " pod="openshift-console/console-65c4f977c4-rpvsb" Feb 17 16:10:57 crc kubenswrapper[4874]: I0217 16:10:57.118865 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/07fc5262-d078-4ff8-aa96-460615fbd47d-console-oauth-config\") pod \"console-65c4f977c4-rpvsb\" (UID: \"07fc5262-d078-4ff8-aa96-460615fbd47d\") " pod="openshift-console/console-65c4f977c4-rpvsb" Feb 17 16:10:57 crc kubenswrapper[4874]: I0217 16:10:57.118900 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/07fc5262-d078-4ff8-aa96-460615fbd47d-console-serving-cert\") pod \"console-65c4f977c4-rpvsb\" (UID: \"07fc5262-d078-4ff8-aa96-460615fbd47d\") " pod="openshift-console/console-65c4f977c4-rpvsb" Feb 17 16:10:57 crc kubenswrapper[4874]: I0217 16:10:57.118927 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/07fc5262-d078-4ff8-aa96-460615fbd47d-console-config\") pod \"console-65c4f977c4-rpvsb\" (UID: \"07fc5262-d078-4ff8-aa96-460615fbd47d\") " pod="openshift-console/console-65c4f977c4-rpvsb" Feb 17 16:10:57 crc kubenswrapper[4874]: I0217 16:10:57.118961 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2gbs\" (UniqueName: \"kubernetes.io/projected/07fc5262-d078-4ff8-aa96-460615fbd47d-kube-api-access-z2gbs\") pod \"console-65c4f977c4-rpvsb\" (UID: \"07fc5262-d078-4ff8-aa96-460615fbd47d\") " pod="openshift-console/console-65c4f977c4-rpvsb" Feb 17 16:10:57 crc kubenswrapper[4874]: I0217 16:10:57.119157 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/07fc5262-d078-4ff8-aa96-460615fbd47d-service-ca\") pod \"console-65c4f977c4-rpvsb\" (UID: \"07fc5262-d078-4ff8-aa96-460615fbd47d\") " pod="openshift-console/console-65c4f977c4-rpvsb" Feb 17 16:10:57 crc kubenswrapper[4874]: I0217 16:10:57.119238 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/07fc5262-d078-4ff8-aa96-460615fbd47d-oauth-serving-cert\") pod \"console-65c4f977c4-rpvsb\" (UID: \"07fc5262-d078-4ff8-aa96-460615fbd47d\") " pod="openshift-console/console-65c4f977c4-rpvsb" Feb 17 16:10:57 crc kubenswrapper[4874]: I0217 16:10:57.220519 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07fc5262-d078-4ff8-aa96-460615fbd47d-trusted-ca-bundle\") pod \"console-65c4f977c4-rpvsb\" (UID: \"07fc5262-d078-4ff8-aa96-460615fbd47d\") " pod="openshift-console/console-65c4f977c4-rpvsb" Feb 17 16:10:57 crc kubenswrapper[4874]: I0217 16:10:57.220635 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/07fc5262-d078-4ff8-aa96-460615fbd47d-console-oauth-config\") pod \"console-65c4f977c4-rpvsb\" (UID: \"07fc5262-d078-4ff8-aa96-460615fbd47d\") " pod="openshift-console/console-65c4f977c4-rpvsb" Feb 17 16:10:57 crc kubenswrapper[4874]: I0217 16:10:57.220703 
4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/07fc5262-d078-4ff8-aa96-460615fbd47d-console-serving-cert\") pod \"console-65c4f977c4-rpvsb\" (UID: \"07fc5262-d078-4ff8-aa96-460615fbd47d\") " pod="openshift-console/console-65c4f977c4-rpvsb" Feb 17 16:10:57 crc kubenswrapper[4874]: I0217 16:10:57.220755 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/07fc5262-d078-4ff8-aa96-460615fbd47d-console-config\") pod \"console-65c4f977c4-rpvsb\" (UID: \"07fc5262-d078-4ff8-aa96-460615fbd47d\") " pod="openshift-console/console-65c4f977c4-rpvsb" Feb 17 16:10:57 crc kubenswrapper[4874]: I0217 16:10:57.220808 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2gbs\" (UniqueName: \"kubernetes.io/projected/07fc5262-d078-4ff8-aa96-460615fbd47d-kube-api-access-z2gbs\") pod \"console-65c4f977c4-rpvsb\" (UID: \"07fc5262-d078-4ff8-aa96-460615fbd47d\") " pod="openshift-console/console-65c4f977c4-rpvsb" Feb 17 16:10:57 crc kubenswrapper[4874]: I0217 16:10:57.220957 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/07fc5262-d078-4ff8-aa96-460615fbd47d-service-ca\") pod \"console-65c4f977c4-rpvsb\" (UID: \"07fc5262-d078-4ff8-aa96-460615fbd47d\") " pod="openshift-console/console-65c4f977c4-rpvsb" Feb 17 16:10:57 crc kubenswrapper[4874]: I0217 16:10:57.221573 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/07fc5262-d078-4ff8-aa96-460615fbd47d-oauth-serving-cert\") pod \"console-65c4f977c4-rpvsb\" (UID: \"07fc5262-d078-4ff8-aa96-460615fbd47d\") " pod="openshift-console/console-65c4f977c4-rpvsb" Feb 17 16:10:57 crc kubenswrapper[4874]: I0217 16:10:57.222043 4874 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07fc5262-d078-4ff8-aa96-460615fbd47d-trusted-ca-bundle\") pod \"console-65c4f977c4-rpvsb\" (UID: \"07fc5262-d078-4ff8-aa96-460615fbd47d\") " pod="openshift-console/console-65c4f977c4-rpvsb" Feb 17 16:10:57 crc kubenswrapper[4874]: I0217 16:10:57.222494 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/07fc5262-d078-4ff8-aa96-460615fbd47d-service-ca\") pod \"console-65c4f977c4-rpvsb\" (UID: \"07fc5262-d078-4ff8-aa96-460615fbd47d\") " pod="openshift-console/console-65c4f977c4-rpvsb" Feb 17 16:10:57 crc kubenswrapper[4874]: I0217 16:10:57.222849 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/07fc5262-d078-4ff8-aa96-460615fbd47d-oauth-serving-cert\") pod \"console-65c4f977c4-rpvsb\" (UID: \"07fc5262-d078-4ff8-aa96-460615fbd47d\") " pod="openshift-console/console-65c4f977c4-rpvsb" Feb 17 16:10:57 crc kubenswrapper[4874]: I0217 16:10:57.224202 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/07fc5262-d078-4ff8-aa96-460615fbd47d-console-config\") pod \"console-65c4f977c4-rpvsb\" (UID: \"07fc5262-d078-4ff8-aa96-460615fbd47d\") " pod="openshift-console/console-65c4f977c4-rpvsb" Feb 17 16:10:57 crc kubenswrapper[4874]: I0217 16:10:57.228071 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/07fc5262-d078-4ff8-aa96-460615fbd47d-console-oauth-config\") pod \"console-65c4f977c4-rpvsb\" (UID: \"07fc5262-d078-4ff8-aa96-460615fbd47d\") " pod="openshift-console/console-65c4f977c4-rpvsb" Feb 17 16:10:57 crc kubenswrapper[4874]: I0217 16:10:57.235595 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/07fc5262-d078-4ff8-aa96-460615fbd47d-console-serving-cert\") pod \"console-65c4f977c4-rpvsb\" (UID: \"07fc5262-d078-4ff8-aa96-460615fbd47d\") " pod="openshift-console/console-65c4f977c4-rpvsb" Feb 17 16:10:57 crc kubenswrapper[4874]: I0217 16:10:57.239196 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2gbs\" (UniqueName: \"kubernetes.io/projected/07fc5262-d078-4ff8-aa96-460615fbd47d-kube-api-access-z2gbs\") pod \"console-65c4f977c4-rpvsb\" (UID: \"07fc5262-d078-4ff8-aa96-460615fbd47d\") " pod="openshift-console/console-65c4f977c4-rpvsb" Feb 17 16:10:57 crc kubenswrapper[4874]: I0217 16:10:57.299794 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-65c4f977c4-rpvsb" Feb 17 16:10:57 crc kubenswrapper[4874]: I0217 16:10:57.736423 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-65c4f977c4-rpvsb"] Feb 17 16:10:57 crc kubenswrapper[4874]: W0217 16:10:57.775823 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07fc5262_d078_4ff8_aa96_460615fbd47d.slice/crio-540cc54c2a74349e3a5570035ff79d42f582720788f03aea43295f89eb6d03a2 WatchSource:0}: Error finding container 540cc54c2a74349e3a5570035ff79d42f582720788f03aea43295f89eb6d03a2: Status 404 returned error can't find the container with id 540cc54c2a74349e3a5570035ff79d42f582720788f03aea43295f89eb6d03a2 Feb 17 16:10:58 crc kubenswrapper[4874]: I0217 16:10:58.773508 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-65c4f977c4-rpvsb" event={"ID":"07fc5262-d078-4ff8-aa96-460615fbd47d","Type":"ContainerStarted","Data":"eaeb4006cf5dbd34f13e3d518abca994f130b48a5bae2789ca84822437be86ec"} Feb 17 16:10:58 crc kubenswrapper[4874]: I0217 16:10:58.773946 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-65c4f977c4-rpvsb" event={"ID":"07fc5262-d078-4ff8-aa96-460615fbd47d","Type":"ContainerStarted","Data":"540cc54c2a74349e3a5570035ff79d42f582720788f03aea43295f89eb6d03a2"} Feb 17 16:10:58 crc kubenswrapper[4874]: I0217 16:10:58.802599 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-65c4f977c4-rpvsb" podStartSLOduration=2.802571642 podStartE2EDuration="2.802571642s" podCreationTimestamp="2026-02-17 16:10:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:10:58.798378358 +0000 UTC m=+469.092766969" watchObservedRunningTime="2026-02-17 16:10:58.802571642 +0000 UTC m=+469.096960243" Feb 17 16:11:07 crc kubenswrapper[4874]: I0217 16:11:07.300336 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-65c4f977c4-rpvsb" Feb 17 16:11:07 crc kubenswrapper[4874]: I0217 16:11:07.300996 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-65c4f977c4-rpvsb" Feb 17 16:11:07 crc kubenswrapper[4874]: I0217 16:11:07.308395 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-65c4f977c4-rpvsb" Feb 17 16:11:07 crc kubenswrapper[4874]: I0217 16:11:07.852684 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-65c4f977c4-rpvsb" Feb 17 16:11:07 crc kubenswrapper[4874]: I0217 16:11:07.941664 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-8687f49f77-skmg5"] Feb 17 16:11:33 crc kubenswrapper[4874]: I0217 16:11:33.000401 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-8687f49f77-skmg5" podUID="fbb6b1c5-d8a4-491a-9f8d-58254190e96e" containerName="console" 
containerID="cri-o://32ce57523cf81fc5534dc6f326eca88c5162bd0b6cb30cb11f8d1e2dc3579a03" gracePeriod=15 Feb 17 16:11:33 crc kubenswrapper[4874]: I0217 16:11:33.692848 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-8687f49f77-skmg5_fbb6b1c5-d8a4-491a-9f8d-58254190e96e/console/0.log" Feb 17 16:11:33 crc kubenswrapper[4874]: I0217 16:11:33.693160 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-8687f49f77-skmg5" Feb 17 16:11:33 crc kubenswrapper[4874]: I0217 16:11:33.808387 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-console-oauth-config\") pod \"fbb6b1c5-d8a4-491a-9f8d-58254190e96e\" (UID: \"fbb6b1c5-d8a4-491a-9f8d-58254190e96e\") " Feb 17 16:11:33 crc kubenswrapper[4874]: I0217 16:11:33.808452 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdx2j\" (UniqueName: \"kubernetes.io/projected/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-kube-api-access-fdx2j\") pod \"fbb6b1c5-d8a4-491a-9f8d-58254190e96e\" (UID: \"fbb6b1c5-d8a4-491a-9f8d-58254190e96e\") " Feb 17 16:11:33 crc kubenswrapper[4874]: I0217 16:11:33.808490 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-service-ca\") pod \"fbb6b1c5-d8a4-491a-9f8d-58254190e96e\" (UID: \"fbb6b1c5-d8a4-491a-9f8d-58254190e96e\") " Feb 17 16:11:33 crc kubenswrapper[4874]: I0217 16:11:33.808546 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-trusted-ca-bundle\") pod \"fbb6b1c5-d8a4-491a-9f8d-58254190e96e\" (UID: \"fbb6b1c5-d8a4-491a-9f8d-58254190e96e\") " Feb 17 16:11:33 crc kubenswrapper[4874]: 
I0217 16:11:33.808605 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-console-config\") pod \"fbb6b1c5-d8a4-491a-9f8d-58254190e96e\" (UID: \"fbb6b1c5-d8a4-491a-9f8d-58254190e96e\") " Feb 17 16:11:33 crc kubenswrapper[4874]: I0217 16:11:33.808624 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-console-serving-cert\") pod \"fbb6b1c5-d8a4-491a-9f8d-58254190e96e\" (UID: \"fbb6b1c5-d8a4-491a-9f8d-58254190e96e\") " Feb 17 16:11:33 crc kubenswrapper[4874]: I0217 16:11:33.808649 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-oauth-serving-cert\") pod \"fbb6b1c5-d8a4-491a-9f8d-58254190e96e\" (UID: \"fbb6b1c5-d8a4-491a-9f8d-58254190e96e\") " Feb 17 16:11:33 crc kubenswrapper[4874]: I0217 16:11:33.809514 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-service-ca" (OuterVolumeSpecName: "service-ca") pod "fbb6b1c5-d8a4-491a-9f8d-58254190e96e" (UID: "fbb6b1c5-d8a4-491a-9f8d-58254190e96e"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:11:33 crc kubenswrapper[4874]: I0217 16:11:33.809772 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-console-config" (OuterVolumeSpecName: "console-config") pod "fbb6b1c5-d8a4-491a-9f8d-58254190e96e" (UID: "fbb6b1c5-d8a4-491a-9f8d-58254190e96e"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:11:33 crc kubenswrapper[4874]: I0217 16:11:33.809806 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "fbb6b1c5-d8a4-491a-9f8d-58254190e96e" (UID: "fbb6b1c5-d8a4-491a-9f8d-58254190e96e"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:11:33 crc kubenswrapper[4874]: I0217 16:11:33.809855 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "fbb6b1c5-d8a4-491a-9f8d-58254190e96e" (UID: "fbb6b1c5-d8a4-491a-9f8d-58254190e96e"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:11:33 crc kubenswrapper[4874]: I0217 16:11:33.822097 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "fbb6b1c5-d8a4-491a-9f8d-58254190e96e" (UID: "fbb6b1c5-d8a4-491a-9f8d-58254190e96e"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:11:33 crc kubenswrapper[4874]: I0217 16:11:33.822247 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "fbb6b1c5-d8a4-491a-9f8d-58254190e96e" (UID: "fbb6b1c5-d8a4-491a-9f8d-58254190e96e"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:11:33 crc kubenswrapper[4874]: I0217 16:11:33.822932 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-kube-api-access-fdx2j" (OuterVolumeSpecName: "kube-api-access-fdx2j") pod "fbb6b1c5-d8a4-491a-9f8d-58254190e96e" (UID: "fbb6b1c5-d8a4-491a-9f8d-58254190e96e"). InnerVolumeSpecName "kube-api-access-fdx2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:11:33 crc kubenswrapper[4874]: I0217 16:11:33.910031 4874 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:11:33 crc kubenswrapper[4874]: I0217 16:11:33.910103 4874 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:11:33 crc kubenswrapper[4874]: I0217 16:11:33.910125 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdx2j\" (UniqueName: \"kubernetes.io/projected/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-kube-api-access-fdx2j\") on node \"crc\" DevicePath \"\"" Feb 17 16:11:33 crc kubenswrapper[4874]: I0217 16:11:33.910145 4874 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:11:33 crc kubenswrapper[4874]: I0217 16:11:33.910164 4874 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:11:33 crc kubenswrapper[4874]: I0217 16:11:33.910180 4874 reconciler_common.go:293] "Volume detached for 
volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-console-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:11:33 crc kubenswrapper[4874]: I0217 16:11:33.910197 4874 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fbb6b1c5-d8a4-491a-9f8d-58254190e96e-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:11:34 crc kubenswrapper[4874]: I0217 16:11:34.018647 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-8687f49f77-skmg5_fbb6b1c5-d8a4-491a-9f8d-58254190e96e/console/0.log" Feb 17 16:11:34 crc kubenswrapper[4874]: I0217 16:11:34.018739 4874 generic.go:334] "Generic (PLEG): container finished" podID="fbb6b1c5-d8a4-491a-9f8d-58254190e96e" containerID="32ce57523cf81fc5534dc6f326eca88c5162bd0b6cb30cb11f8d1e2dc3579a03" exitCode=2 Feb 17 16:11:34 crc kubenswrapper[4874]: I0217 16:11:34.018792 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-8687f49f77-skmg5" event={"ID":"fbb6b1c5-d8a4-491a-9f8d-58254190e96e","Type":"ContainerDied","Data":"32ce57523cf81fc5534dc6f326eca88c5162bd0b6cb30cb11f8d1e2dc3579a03"} Feb 17 16:11:34 crc kubenswrapper[4874]: I0217 16:11:34.018851 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-8687f49f77-skmg5" event={"ID":"fbb6b1c5-d8a4-491a-9f8d-58254190e96e","Type":"ContainerDied","Data":"2c1eb2294c9f198005f04d89c72101175cc6afce8fe0174ecac1c0da629d5209"} Feb 17 16:11:34 crc kubenswrapper[4874]: I0217 16:11:34.018877 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-8687f49f77-skmg5" Feb 17 16:11:34 crc kubenswrapper[4874]: I0217 16:11:34.018975 4874 scope.go:117] "RemoveContainer" containerID="32ce57523cf81fc5534dc6f326eca88c5162bd0b6cb30cb11f8d1e2dc3579a03" Feb 17 16:11:34 crc kubenswrapper[4874]: I0217 16:11:34.053361 4874 scope.go:117] "RemoveContainer" containerID="32ce57523cf81fc5534dc6f326eca88c5162bd0b6cb30cb11f8d1e2dc3579a03" Feb 17 16:11:34 crc kubenswrapper[4874]: E0217 16:11:34.053951 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32ce57523cf81fc5534dc6f326eca88c5162bd0b6cb30cb11f8d1e2dc3579a03\": container with ID starting with 32ce57523cf81fc5534dc6f326eca88c5162bd0b6cb30cb11f8d1e2dc3579a03 not found: ID does not exist" containerID="32ce57523cf81fc5534dc6f326eca88c5162bd0b6cb30cb11f8d1e2dc3579a03" Feb 17 16:11:34 crc kubenswrapper[4874]: I0217 16:11:34.054022 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32ce57523cf81fc5534dc6f326eca88c5162bd0b6cb30cb11f8d1e2dc3579a03"} err="failed to get container status \"32ce57523cf81fc5534dc6f326eca88c5162bd0b6cb30cb11f8d1e2dc3579a03\": rpc error: code = NotFound desc = could not find container \"32ce57523cf81fc5534dc6f326eca88c5162bd0b6cb30cb11f8d1e2dc3579a03\": container with ID starting with 32ce57523cf81fc5534dc6f326eca88c5162bd0b6cb30cb11f8d1e2dc3579a03 not found: ID does not exist" Feb 17 16:11:34 crc kubenswrapper[4874]: I0217 16:11:34.072530 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-8687f49f77-skmg5"] Feb 17 16:11:34 crc kubenswrapper[4874]: I0217 16:11:34.078597 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-8687f49f77-skmg5"] Feb 17 16:11:34 crc kubenswrapper[4874]: I0217 16:11:34.469617 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="fbb6b1c5-d8a4-491a-9f8d-58254190e96e" path="/var/lib/kubelet/pods/fbb6b1c5-d8a4-491a-9f8d-58254190e96e/volumes" Feb 17 16:12:27 crc kubenswrapper[4874]: I0217 16:12:27.724272 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:12:27 crc kubenswrapper[4874]: I0217 16:12:27.725054 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:12:40 crc kubenswrapper[4874]: I0217 16:12:40.494860 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dfls5"] Feb 17 16:12:40 crc kubenswrapper[4874]: E0217 16:12:40.495593 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbb6b1c5-d8a4-491a-9f8d-58254190e96e" containerName="console" Feb 17 16:12:40 crc kubenswrapper[4874]: I0217 16:12:40.495604 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbb6b1c5-d8a4-491a-9f8d-58254190e96e" containerName="console" Feb 17 16:12:40 crc kubenswrapper[4874]: I0217 16:12:40.495703 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbb6b1c5-d8a4-491a-9f8d-58254190e96e" containerName="console" Feb 17 16:12:40 crc kubenswrapper[4874]: I0217 16:12:40.496430 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dfls5" Feb 17 16:12:40 crc kubenswrapper[4874]: I0217 16:12:40.497898 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 17 16:12:40 crc kubenswrapper[4874]: I0217 16:12:40.505576 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dfls5"] Feb 17 16:12:40 crc kubenswrapper[4874]: I0217 16:12:40.672442 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvbt8\" (UniqueName: \"kubernetes.io/projected/da9d156e-7c39-4ea0-80a3-3046c65ec615-kube-api-access-gvbt8\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dfls5\" (UID: \"da9d156e-7c39-4ea0-80a3-3046c65ec615\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dfls5" Feb 17 16:12:40 crc kubenswrapper[4874]: I0217 16:12:40.672709 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da9d156e-7c39-4ea0-80a3-3046c65ec615-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dfls5\" (UID: \"da9d156e-7c39-4ea0-80a3-3046c65ec615\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dfls5" Feb 17 16:12:40 crc kubenswrapper[4874]: I0217 16:12:40.672826 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da9d156e-7c39-4ea0-80a3-3046c65ec615-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dfls5\" (UID: \"da9d156e-7c39-4ea0-80a3-3046c65ec615\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dfls5" Feb 17 16:12:40 crc kubenswrapper[4874]: 
I0217 16:12:40.774893 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da9d156e-7c39-4ea0-80a3-3046c65ec615-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dfls5\" (UID: \"da9d156e-7c39-4ea0-80a3-3046c65ec615\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dfls5" Feb 17 16:12:40 crc kubenswrapper[4874]: I0217 16:12:40.775266 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvbt8\" (UniqueName: \"kubernetes.io/projected/da9d156e-7c39-4ea0-80a3-3046c65ec615-kube-api-access-gvbt8\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dfls5\" (UID: \"da9d156e-7c39-4ea0-80a3-3046c65ec615\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dfls5" Feb 17 16:12:40 crc kubenswrapper[4874]: I0217 16:12:40.775381 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da9d156e-7c39-4ea0-80a3-3046c65ec615-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dfls5\" (UID: \"da9d156e-7c39-4ea0-80a3-3046c65ec615\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dfls5" Feb 17 16:12:40 crc kubenswrapper[4874]: I0217 16:12:40.775634 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da9d156e-7c39-4ea0-80a3-3046c65ec615-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dfls5\" (UID: \"da9d156e-7c39-4ea0-80a3-3046c65ec615\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dfls5" Feb 17 16:12:40 crc kubenswrapper[4874]: I0217 16:12:40.776018 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/da9d156e-7c39-4ea0-80a3-3046c65ec615-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dfls5\" (UID: \"da9d156e-7c39-4ea0-80a3-3046c65ec615\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dfls5" Feb 17 16:12:40 crc kubenswrapper[4874]: I0217 16:12:40.806495 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvbt8\" (UniqueName: \"kubernetes.io/projected/da9d156e-7c39-4ea0-80a3-3046c65ec615-kube-api-access-gvbt8\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dfls5\" (UID: \"da9d156e-7c39-4ea0-80a3-3046c65ec615\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dfls5" Feb 17 16:12:40 crc kubenswrapper[4874]: I0217 16:12:40.816269 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dfls5" Feb 17 16:12:41 crc kubenswrapper[4874]: I0217 16:12:41.252827 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dfls5"] Feb 17 16:12:41 crc kubenswrapper[4874]: I0217 16:12:41.481934 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dfls5" event={"ID":"da9d156e-7c39-4ea0-80a3-3046c65ec615","Type":"ContainerStarted","Data":"2eeb45397aa1a8519f108fdece182d5c1d0045ee1e4f64c88099060d6c2e2763"} Feb 17 16:12:42 crc kubenswrapper[4874]: I0217 16:12:42.491665 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dfls5" event={"ID":"da9d156e-7c39-4ea0-80a3-3046c65ec615","Type":"ContainerStarted","Data":"492c8f6bc5dbec3bd4bf3239d5fd118365813a659ea34dca9b0b7d55907d09bd"} Feb 17 16:12:43 crc kubenswrapper[4874]: I0217 16:12:43.497156 4874 
generic.go:334] "Generic (PLEG): container finished" podID="da9d156e-7c39-4ea0-80a3-3046c65ec615" containerID="492c8f6bc5dbec3bd4bf3239d5fd118365813a659ea34dca9b0b7d55907d09bd" exitCode=0 Feb 17 16:12:43 crc kubenswrapper[4874]: I0217 16:12:43.497206 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dfls5" event={"ID":"da9d156e-7c39-4ea0-80a3-3046c65ec615","Type":"ContainerDied","Data":"492c8f6bc5dbec3bd4bf3239d5fd118365813a659ea34dca9b0b7d55907d09bd"} Feb 17 16:12:43 crc kubenswrapper[4874]: I0217 16:12:43.500032 4874 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:12:49 crc kubenswrapper[4874]: I0217 16:12:49.550821 4874 generic.go:334] "Generic (PLEG): container finished" podID="da9d156e-7c39-4ea0-80a3-3046c65ec615" containerID="ee22bf346a75eff16d58d02dfda6b7d1029fe9577635c3fb7f39876adc12a36d" exitCode=0 Feb 17 16:12:49 crc kubenswrapper[4874]: I0217 16:12:49.550947 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dfls5" event={"ID":"da9d156e-7c39-4ea0-80a3-3046c65ec615","Type":"ContainerDied","Data":"ee22bf346a75eff16d58d02dfda6b7d1029fe9577635c3fb7f39876adc12a36d"} Feb 17 16:12:50 crc kubenswrapper[4874]: I0217 16:12:50.562842 4874 generic.go:334] "Generic (PLEG): container finished" podID="da9d156e-7c39-4ea0-80a3-3046c65ec615" containerID="395efdea59f4593cb3bdd26cbbbbcfece401ea09481df591366c395bf447a904" exitCode=0 Feb 17 16:12:50 crc kubenswrapper[4874]: I0217 16:12:50.562926 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dfls5" event={"ID":"da9d156e-7c39-4ea0-80a3-3046c65ec615","Type":"ContainerDied","Data":"395efdea59f4593cb3bdd26cbbbbcfece401ea09481df591366c395bf447a904"} Feb 17 16:12:51 crc kubenswrapper[4874]: I0217 
16:12:51.786116 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dfls5" Feb 17 16:12:51 crc kubenswrapper[4874]: I0217 16:12:51.822302 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-65qcw"] Feb 17 16:12:51 crc kubenswrapper[4874]: I0217 16:12:51.822733 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="ovn-controller" containerID="cri-o://cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0" gracePeriod=30 Feb 17 16:12:51 crc kubenswrapper[4874]: I0217 16:12:51.822849 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131" gracePeriod=30 Feb 17 16:12:51 crc kubenswrapper[4874]: I0217 16:12:51.822921 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="northd" containerID="cri-o://3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f" gracePeriod=30 Feb 17 16:12:51 crc kubenswrapper[4874]: I0217 16:12:51.822911 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="sbdb" containerID="cri-o://47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433" gracePeriod=30 Feb 17 16:12:51 crc kubenswrapper[4874]: I0217 16:12:51.822906 4874 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="kube-rbac-proxy-node" containerID="cri-o://126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242" gracePeriod=30 Feb 17 16:12:51 crc kubenswrapper[4874]: I0217 16:12:51.822866 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="ovn-acl-logging" containerID="cri-o://f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1" gracePeriod=30 Feb 17 16:12:51 crc kubenswrapper[4874]: I0217 16:12:51.823335 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="nbdb" containerID="cri-o://ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07" gracePeriod=30 Feb 17 16:12:51 crc kubenswrapper[4874]: I0217 16:12:51.856052 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="ovnkube-controller" containerID="cri-o://7d87d8f21c5db57994f64639bb98ecc9b69b1dd0a306578e07c2628d6c636e3b" gracePeriod=30 Feb 17 16:12:51 crc kubenswrapper[4874]: I0217 16:12:51.950204 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da9d156e-7c39-4ea0-80a3-3046c65ec615-bundle\") pod \"da9d156e-7c39-4ea0-80a3-3046c65ec615\" (UID: \"da9d156e-7c39-4ea0-80a3-3046c65ec615\") " Feb 17 16:12:51 crc kubenswrapper[4874]: I0217 16:12:51.950335 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da9d156e-7c39-4ea0-80a3-3046c65ec615-util\") pod \"da9d156e-7c39-4ea0-80a3-3046c65ec615\" (UID: 
\"da9d156e-7c39-4ea0-80a3-3046c65ec615\") " Feb 17 16:12:51 crc kubenswrapper[4874]: I0217 16:12:51.950424 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvbt8\" (UniqueName: \"kubernetes.io/projected/da9d156e-7c39-4ea0-80a3-3046c65ec615-kube-api-access-gvbt8\") pod \"da9d156e-7c39-4ea0-80a3-3046c65ec615\" (UID: \"da9d156e-7c39-4ea0-80a3-3046c65ec615\") " Feb 17 16:12:51 crc kubenswrapper[4874]: I0217 16:12:51.952251 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da9d156e-7c39-4ea0-80a3-3046c65ec615-bundle" (OuterVolumeSpecName: "bundle") pod "da9d156e-7c39-4ea0-80a3-3046c65ec615" (UID: "da9d156e-7c39-4ea0-80a3-3046c65ec615"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:12:51 crc kubenswrapper[4874]: I0217 16:12:51.958705 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da9d156e-7c39-4ea0-80a3-3046c65ec615-kube-api-access-gvbt8" (OuterVolumeSpecName: "kube-api-access-gvbt8") pod "da9d156e-7c39-4ea0-80a3-3046c65ec615" (UID: "da9d156e-7c39-4ea0-80a3-3046c65ec615"). InnerVolumeSpecName "kube-api-access-gvbt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:12:51 crc kubenswrapper[4874]: I0217 16:12:51.962631 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da9d156e-7c39-4ea0-80a3-3046c65ec615-util" (OuterVolumeSpecName: "util") pod "da9d156e-7c39-4ea0-80a3-3046c65ec615" (UID: "da9d156e-7c39-4ea0-80a3-3046c65ec615"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:12:52 crc kubenswrapper[4874]: I0217 16:12:52.052014 4874 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da9d156e-7c39-4ea0-80a3-3046c65ec615-util\") on node \"crc\" DevicePath \"\"" Feb 17 16:12:52 crc kubenswrapper[4874]: I0217 16:12:52.052052 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvbt8\" (UniqueName: \"kubernetes.io/projected/da9d156e-7c39-4ea0-80a3-3046c65ec615-kube-api-access-gvbt8\") on node \"crc\" DevicePath \"\"" Feb 17 16:12:52 crc kubenswrapper[4874]: I0217 16:12:52.052063 4874 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da9d156e-7c39-4ea0-80a3-3046c65ec615-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:12:52 crc kubenswrapper[4874]: I0217 16:12:52.593692 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-65qcw_10a4777a-2390-401b-86b0-87d298e9f883/ovnkube-controller/3.log" Feb 17 16:12:52 crc kubenswrapper[4874]: I0217 16:12:52.596795 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-65qcw_10a4777a-2390-401b-86b0-87d298e9f883/ovn-acl-logging/0.log" Feb 17 16:12:52 crc kubenswrapper[4874]: I0217 16:12:52.597569 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-65qcw_10a4777a-2390-401b-86b0-87d298e9f883/ovn-controller/0.log" Feb 17 16:12:52 crc kubenswrapper[4874]: I0217 16:12:52.598030 4874 generic.go:334] "Generic (PLEG): container finished" podID="10a4777a-2390-401b-86b0-87d298e9f883" containerID="7d87d8f21c5db57994f64639bb98ecc9b69b1dd0a306578e07c2628d6c636e3b" exitCode=0 Feb 17 16:12:52 crc kubenswrapper[4874]: I0217 16:12:52.598058 4874 generic.go:334] "Generic (PLEG): container finished" podID="10a4777a-2390-401b-86b0-87d298e9f883" 
containerID="47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433" exitCode=0 Feb 17 16:12:52 crc kubenswrapper[4874]: I0217 16:12:52.598067 4874 generic.go:334] "Generic (PLEG): container finished" podID="10a4777a-2390-401b-86b0-87d298e9f883" containerID="ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07" exitCode=0 Feb 17 16:12:52 crc kubenswrapper[4874]: I0217 16:12:52.598079 4874 generic.go:334] "Generic (PLEG): container finished" podID="10a4777a-2390-401b-86b0-87d298e9f883" containerID="3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f" exitCode=0 Feb 17 16:12:52 crc kubenswrapper[4874]: I0217 16:12:52.598103 4874 generic.go:334] "Generic (PLEG): container finished" podID="10a4777a-2390-401b-86b0-87d298e9f883" containerID="f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1" exitCode=143 Feb 17 16:12:52 crc kubenswrapper[4874]: I0217 16:12:52.598113 4874 generic.go:334] "Generic (PLEG): container finished" podID="10a4777a-2390-401b-86b0-87d298e9f883" containerID="cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0" exitCode=143 Feb 17 16:12:52 crc kubenswrapper[4874]: I0217 16:12:52.598127 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" event={"ID":"10a4777a-2390-401b-86b0-87d298e9f883","Type":"ContainerDied","Data":"7d87d8f21c5db57994f64639bb98ecc9b69b1dd0a306578e07c2628d6c636e3b"} Feb 17 16:12:52 crc kubenswrapper[4874]: I0217 16:12:52.598178 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" event={"ID":"10a4777a-2390-401b-86b0-87d298e9f883","Type":"ContainerDied","Data":"47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433"} Feb 17 16:12:52 crc kubenswrapper[4874]: I0217 16:12:52.598199 4874 scope.go:117] "RemoveContainer" containerID="82e707b09de72a6c64c252760bab0973083f616379f68e889e9c2084a91c83eb" Feb 17 16:12:52 crc kubenswrapper[4874]: I0217 16:12:52.598206 4874 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" event={"ID":"10a4777a-2390-401b-86b0-87d298e9f883","Type":"ContainerDied","Data":"ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07"} Feb 17 16:12:52 crc kubenswrapper[4874]: I0217 16:12:52.598305 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" event={"ID":"10a4777a-2390-401b-86b0-87d298e9f883","Type":"ContainerDied","Data":"3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f"} Feb 17 16:12:52 crc kubenswrapper[4874]: I0217 16:12:52.598322 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" event={"ID":"10a4777a-2390-401b-86b0-87d298e9f883","Type":"ContainerDied","Data":"f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1"} Feb 17 16:12:52 crc kubenswrapper[4874]: I0217 16:12:52.598333 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" event={"ID":"10a4777a-2390-401b-86b0-87d298e9f883","Type":"ContainerDied","Data":"cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0"} Feb 17 16:12:52 crc kubenswrapper[4874]: I0217 16:12:52.601983 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dfls5" Feb 17 16:12:52 crc kubenswrapper[4874]: I0217 16:12:52.602030 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dfls5" event={"ID":"da9d156e-7c39-4ea0-80a3-3046c65ec615","Type":"ContainerDied","Data":"2eeb45397aa1a8519f108fdece182d5c1d0045ee1e4f64c88099060d6c2e2763"} Feb 17 16:12:52 crc kubenswrapper[4874]: I0217 16:12:52.602128 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2eeb45397aa1a8519f108fdece182d5c1d0045ee1e4f64c88099060d6c2e2763" Feb 17 16:12:52 crc kubenswrapper[4874]: I0217 16:12:52.605192 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2vkxj_8aedd049-0029-44f7-869f-4a3ccdce8413/kube-multus/2.log" Feb 17 16:12:52 crc kubenswrapper[4874]: I0217 16:12:52.606037 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2vkxj_8aedd049-0029-44f7-869f-4a3ccdce8413/kube-multus/1.log" Feb 17 16:12:52 crc kubenswrapper[4874]: I0217 16:12:52.606139 4874 generic.go:334] "Generic (PLEG): container finished" podID="8aedd049-0029-44f7-869f-4a3ccdce8413" containerID="c3c53358c743c2d2894de0ef19b65fc8e4216837246a7006087759154d090046" exitCode=2 Feb 17 16:12:52 crc kubenswrapper[4874]: I0217 16:12:52.606177 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2vkxj" event={"ID":"8aedd049-0029-44f7-869f-4a3ccdce8413","Type":"ContainerDied","Data":"c3c53358c743c2d2894de0ef19b65fc8e4216837246a7006087759154d090046"} Feb 17 16:12:52 crc kubenswrapper[4874]: I0217 16:12:52.607150 4874 scope.go:117] "RemoveContainer" containerID="c3c53358c743c2d2894de0ef19b65fc8e4216837246a7006087759154d090046" Feb 17 16:12:52 crc kubenswrapper[4874]: E0217 16:12:52.607790 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-2vkxj_openshift-multus(8aedd049-0029-44f7-869f-4a3ccdce8413)\"" pod="openshift-multus/multus-2vkxj" podUID="8aedd049-0029-44f7-869f-4a3ccdce8413" Feb 17 16:12:52 crc kubenswrapper[4874]: I0217 16:12:52.638802 4874 scope.go:117] "RemoveContainer" containerID="00c27238b34dfe8b2f47d4716f4fe8df93d63cc4f915794920d20d1d3cd2c245" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.100830 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-65qcw_10a4777a-2390-401b-86b0-87d298e9f883/ovn-acl-logging/0.log" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.101926 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-65qcw_10a4777a-2390-401b-86b0-87d298e9f883/ovn-controller/0.log" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.102583 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.168024 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-run-systemd\") pod \"10a4777a-2390-401b-86b0-87d298e9f883\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.168123 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-cni-bin\") pod \"10a4777a-2390-401b-86b0-87d298e9f883\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.168184 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/10a4777a-2390-401b-86b0-87d298e9f883-ovnkube-script-lib\") pod \"10a4777a-2390-401b-86b0-87d298e9f883\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.168226 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/10a4777a-2390-401b-86b0-87d298e9f883-ovn-node-metrics-cert\") pod \"10a4777a-2390-401b-86b0-87d298e9f883\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.168284 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-var-lib-openvswitch\") pod \"10a4777a-2390-401b-86b0-87d298e9f883\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.168329 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-slash\") pod \"10a4777a-2390-401b-86b0-87d298e9f883\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.168358 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-run-ovn-kubernetes\") pod \"10a4777a-2390-401b-86b0-87d298e9f883\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.168392 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "10a4777a-2390-401b-86b0-87d298e9f883" (UID: "10a4777a-2390-401b-86b0-87d298e9f883"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.168378 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "10a4777a-2390-401b-86b0-87d298e9f883" (UID: "10a4777a-2390-401b-86b0-87d298e9f883"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.168435 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-etc-openvswitch\") pod \"10a4777a-2390-401b-86b0-87d298e9f883\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.168434 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-slash" (OuterVolumeSpecName: "host-slash") pod "10a4777a-2390-401b-86b0-87d298e9f883" (UID: "10a4777a-2390-401b-86b0-87d298e9f883"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.168497 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "10a4777a-2390-401b-86b0-87d298e9f883" (UID: "10a4777a-2390-401b-86b0-87d298e9f883"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.168509 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-node-log\") pod \"10a4777a-2390-401b-86b0-87d298e9f883\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.168538 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "10a4777a-2390-401b-86b0-87d298e9f883" (UID: "10a4777a-2390-401b-86b0-87d298e9f883"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.168550 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-node-log" (OuterVolumeSpecName: "node-log") pod "10a4777a-2390-401b-86b0-87d298e9f883" (UID: "10a4777a-2390-401b-86b0-87d298e9f883"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.168683 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xrf7\" (UniqueName: \"kubernetes.io/projected/10a4777a-2390-401b-86b0-87d298e9f883-kube-api-access-7xrf7\") pod \"10a4777a-2390-401b-86b0-87d298e9f883\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.168746 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-run-openvswitch\") pod \"10a4777a-2390-401b-86b0-87d298e9f883\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.168827 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-kubelet\") pod \"10a4777a-2390-401b-86b0-87d298e9f883\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.168879 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-run-ovn\") pod \"10a4777a-2390-401b-86b0-87d298e9f883\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.168750 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10a4777a-2390-401b-86b0-87d298e9f883-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "10a4777a-2390-401b-86b0-87d298e9f883" (UID: "10a4777a-2390-401b-86b0-87d298e9f883"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.168946 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "10a4777a-2390-401b-86b0-87d298e9f883" (UID: "10a4777a-2390-401b-86b0-87d298e9f883"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.168909 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "10a4777a-2390-401b-86b0-87d298e9f883" (UID: "10a4777a-2390-401b-86b0-87d298e9f883"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.168987 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-systemd-units\") pod \"10a4777a-2390-401b-86b0-87d298e9f883\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.168990 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "10a4777a-2390-401b-86b0-87d298e9f883" (UID: "10a4777a-2390-401b-86b0-87d298e9f883"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.169055 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-var-lib-cni-networks-ovn-kubernetes\") pod \"10a4777a-2390-401b-86b0-87d298e9f883\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.169178 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/10a4777a-2390-401b-86b0-87d298e9f883-ovnkube-config\") pod \"10a4777a-2390-401b-86b0-87d298e9f883\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.169015 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "10a4777a-2390-401b-86b0-87d298e9f883" (UID: "10a4777a-2390-401b-86b0-87d298e9f883"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.169238 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-cni-netd\") pod \"10a4777a-2390-401b-86b0-87d298e9f883\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.169117 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "10a4777a-2390-401b-86b0-87d298e9f883" (UID: "10a4777a-2390-401b-86b0-87d298e9f883"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.169296 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/10a4777a-2390-401b-86b0-87d298e9f883-env-overrides\") pod \"10a4777a-2390-401b-86b0-87d298e9f883\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.169345 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "10a4777a-2390-401b-86b0-87d298e9f883" (UID: "10a4777a-2390-401b-86b0-87d298e9f883"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.169351 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-log-socket\") pod \"10a4777a-2390-401b-86b0-87d298e9f883\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.169409 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-log-socket" (OuterVolumeSpecName: "log-socket") pod "10a4777a-2390-401b-86b0-87d298e9f883" (UID: "10a4777a-2390-401b-86b0-87d298e9f883"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.169429 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-run-netns\") pod \"10a4777a-2390-401b-86b0-87d298e9f883\" (UID: \"10a4777a-2390-401b-86b0-87d298e9f883\") " Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.169605 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "10a4777a-2390-401b-86b0-87d298e9f883" (UID: "10a4777a-2390-401b-86b0-87d298e9f883"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.169715 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10a4777a-2390-401b-86b0-87d298e9f883-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "10a4777a-2390-401b-86b0-87d298e9f883" (UID: "10a4777a-2390-401b-86b0-87d298e9f883"). 
InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.169776 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10a4777a-2390-401b-86b0-87d298e9f883-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "10a4777a-2390-401b-86b0-87d298e9f883" (UID: "10a4777a-2390-401b-86b0-87d298e9f883"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.169948 4874 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.169965 4874 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.169974 4874 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/10a4777a-2390-401b-86b0-87d298e9f883-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.169983 4874 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.169991 4874 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-slash\") on node \"crc\" DevicePath \"\"" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.169999 4874 reconciler_common.go:293] "Volume detached for 
volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.170008 4874 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.170016 4874 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-node-log\") on node \"crc\" DevicePath \"\"" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.170024 4874 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.170032 4874 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.170042 4874 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.170051 4874 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.170059 4874 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.170068 4874 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/10a4777a-2390-401b-86b0-87d298e9f883-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.170094 4874 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.170102 4874 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/10a4777a-2390-401b-86b0-87d298e9f883-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.170110 4874 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-log-socket\") on node \"crc\" DevicePath \"\"" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.178103 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vwlcm"] Feb 17 16:12:53 crc kubenswrapper[4874]: E0217 16:12:53.178420 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="ovnkube-controller" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.178449 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="ovnkube-controller" Feb 17 16:12:53 crc kubenswrapper[4874]: E0217 16:12:53.178466 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="ovn-controller" Feb 17 16:12:53 crc 
kubenswrapper[4874]: I0217 16:12:53.178481 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="ovn-controller" Feb 17 16:12:53 crc kubenswrapper[4874]: E0217 16:12:53.178497 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="kube-rbac-proxy-ovn-metrics" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.178507 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="kube-rbac-proxy-ovn-metrics" Feb 17 16:12:53 crc kubenswrapper[4874]: E0217 16:12:53.178528 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="ovnkube-controller" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.178539 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="ovnkube-controller" Feb 17 16:12:53 crc kubenswrapper[4874]: E0217 16:12:53.178553 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="ovnkube-controller" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.178563 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="ovnkube-controller" Feb 17 16:12:53 crc kubenswrapper[4874]: E0217 16:12:53.178575 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da9d156e-7c39-4ea0-80a3-3046c65ec615" containerName="util" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.178585 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="da9d156e-7c39-4ea0-80a3-3046c65ec615" containerName="util" Feb 17 16:12:53 crc kubenswrapper[4874]: E0217 16:12:53.178604 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="kubecfg-setup" Feb 17 16:12:53 crc 
kubenswrapper[4874]: I0217 16:12:53.178616 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="kubecfg-setup" Feb 17 16:12:53 crc kubenswrapper[4874]: E0217 16:12:53.178632 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="northd" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.178642 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="northd" Feb 17 16:12:53 crc kubenswrapper[4874]: E0217 16:12:53.178654 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da9d156e-7c39-4ea0-80a3-3046c65ec615" containerName="pull" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.178663 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="da9d156e-7c39-4ea0-80a3-3046c65ec615" containerName="pull" Feb 17 16:12:53 crc kubenswrapper[4874]: E0217 16:12:53.178676 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da9d156e-7c39-4ea0-80a3-3046c65ec615" containerName="extract" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.178686 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="da9d156e-7c39-4ea0-80a3-3046c65ec615" containerName="extract" Feb 17 16:12:53 crc kubenswrapper[4874]: E0217 16:12:53.178701 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="ovn-acl-logging" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.178711 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="ovn-acl-logging" Feb 17 16:12:53 crc kubenswrapper[4874]: E0217 16:12:53.178727 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="sbdb" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.178737 4874 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="sbdb" Feb 17 16:12:53 crc kubenswrapper[4874]: E0217 16:12:53.178753 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="kube-rbac-proxy-node" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.178764 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="kube-rbac-proxy-node" Feb 17 16:12:53 crc kubenswrapper[4874]: E0217 16:12:53.178778 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="nbdb" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.178788 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="nbdb" Feb 17 16:12:53 crc kubenswrapper[4874]: E0217 16:12:53.178808 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="ovnkube-controller" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.178818 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="ovnkube-controller" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.179018 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="ovn-acl-logging" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.179039 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="kube-rbac-proxy-ovn-metrics" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.179058 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="ovnkube-controller" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.179069 4874 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="ovnkube-controller" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.179105 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="nbdb" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.179123 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="da9d156e-7c39-4ea0-80a3-3046c65ec615" containerName="extract" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.179139 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="northd" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.179154 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="kube-rbac-proxy-node" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.179168 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="ovnkube-controller" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.179180 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="sbdb" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.179190 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="ovn-controller" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.179203 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="ovnkube-controller" Feb 17 16:12:53 crc kubenswrapper[4874]: E0217 16:12:53.179382 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="ovnkube-controller" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.179395 4874 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="ovnkube-controller" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.179581 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="10a4777a-2390-401b-86b0-87d298e9f883" containerName="ovnkube-controller" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.191045 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10a4777a-2390-401b-86b0-87d298e9f883-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "10a4777a-2390-401b-86b0-87d298e9f883" (UID: "10a4777a-2390-401b-86b0-87d298e9f883"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.192793 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.193983 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10a4777a-2390-401b-86b0-87d298e9f883-kube-api-access-7xrf7" (OuterVolumeSpecName: "kube-api-access-7xrf7") pod "10a4777a-2390-401b-86b0-87d298e9f883" (UID: "10a4777a-2390-401b-86b0-87d298e9f883"). InnerVolumeSpecName "kube-api-access-7xrf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.210119 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "10a4777a-2390-401b-86b0-87d298e9f883" (UID: "10a4777a-2390-401b-86b0-87d298e9f883"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.271797 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/248f2524-d072-4a9c-8521-17721d2c02a7-ovnkube-script-lib\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.271854 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/248f2524-d072-4a9c-8521-17721d2c02a7-ovn-node-metrics-cert\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.271882 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.271931 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-etc-openvswitch\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.271956 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-systemd-units\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.271977 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-run-systemd\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.272010 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/248f2524-d072-4a9c-8521-17721d2c02a7-ovnkube-config\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.272257 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-host-cni-bin\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.272298 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-host-run-ovn-kubernetes\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.272328 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"log-socket\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-log-socket\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.272352 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-run-openvswitch\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.272398 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-run-ovn\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.272421 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-host-slash\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.272680 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xdqj\" (UniqueName: \"kubernetes.io/projected/248f2524-d072-4a9c-8521-17721d2c02a7-kube-api-access-6xdqj\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.272738 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-host-kubelet\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.272829 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-var-lib-openvswitch\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.272871 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-host-run-netns\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.272892 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-host-cni-netd\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.272935 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/248f2524-d072-4a9c-8521-17721d2c02a7-env-overrides\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.272973 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-node-log\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.273046 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xrf7\" (UniqueName: \"kubernetes.io/projected/10a4777a-2390-401b-86b0-87d298e9f883-kube-api-access-7xrf7\") on node \"crc\" DevicePath \"\"" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.273058 4874 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/10a4777a-2390-401b-86b0-87d298e9f883-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.273069 4874 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/10a4777a-2390-401b-86b0-87d298e9f883-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.374490 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.374568 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-etc-openvswitch\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 
16:12:53.374610 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-systemd-units\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.374644 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-run-systemd\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.374639 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.374694 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/248f2524-d072-4a9c-8521-17721d2c02a7-ovnkube-config\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.374756 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-etc-openvswitch\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.374773 4874 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-run-systemd\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.374963 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-host-cni-bin\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.374924 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-systemd-units\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.375039 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-host-cni-bin\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.375111 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-host-run-ovn-kubernetes\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.375016 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-host-run-ovn-kubernetes\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.375259 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-log-socket\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.375299 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-run-openvswitch\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.375349 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-host-slash\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.375369 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-run-ovn\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.375426 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-run-openvswitch\") pod 
\"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.375452 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xdqj\" (UniqueName: \"kubernetes.io/projected/248f2524-d072-4a9c-8521-17721d2c02a7-kube-api-access-6xdqj\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.375485 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-run-ovn\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.375487 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-host-slash\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.375501 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-host-kubelet\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.375640 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-var-lib-openvswitch\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.375708 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-var-lib-openvswitch\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.375711 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-host-run-netns\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.375742 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-host-run-netns\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.375760 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-host-kubelet\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.375777 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-host-cni-netd\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 
16:12:53.375823 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-host-cni-netd\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.375851 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/248f2524-d072-4a9c-8521-17721d2c02a7-env-overrides\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.375370 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-log-socket\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.375903 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-node-log\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.375946 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/248f2524-d072-4a9c-8521-17721d2c02a7-ovnkube-script-lib\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.375977 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" 
(UniqueName: \"kubernetes.io/host-path/248f2524-d072-4a9c-8521-17721d2c02a7-node-log\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.375983 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/248f2524-d072-4a9c-8521-17721d2c02a7-ovn-node-metrics-cert\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.376058 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/248f2524-d072-4a9c-8521-17721d2c02a7-ovnkube-config\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.376772 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/248f2524-d072-4a9c-8521-17721d2c02a7-env-overrides\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.377035 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/248f2524-d072-4a9c-8521-17721d2c02a7-ovnkube-script-lib\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.379818 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/248f2524-d072-4a9c-8521-17721d2c02a7-ovn-node-metrics-cert\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.397325 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xdqj\" (UniqueName: \"kubernetes.io/projected/248f2524-d072-4a9c-8521-17721d2c02a7-kube-api-access-6xdqj\") pod \"ovnkube-node-vwlcm\" (UID: \"248f2524-d072-4a9c-8521-17721d2c02a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.541557 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:12:53 crc kubenswrapper[4874]: W0217 16:12:53.575112 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod248f2524_d072_4a9c_8521_17721d2c02a7.slice/crio-4f85a9f6ab41887467b2a00249f80b998fac2810654ebbc085aee7e6b668c215 WatchSource:0}: Error finding container 4f85a9f6ab41887467b2a00249f80b998fac2810654ebbc085aee7e6b668c215: Status 404 returned error can't find the container with id 4f85a9f6ab41887467b2a00249f80b998fac2810654ebbc085aee7e6b668c215 Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.623781 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-65qcw_10a4777a-2390-401b-86b0-87d298e9f883/ovn-acl-logging/0.log" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.624916 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-65qcw_10a4777a-2390-401b-86b0-87d298e9f883/ovn-controller/0.log" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.625718 4874 generic.go:334] "Generic (PLEG): container finished" podID="10a4777a-2390-401b-86b0-87d298e9f883" 
containerID="4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131" exitCode=0 Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.625761 4874 generic.go:334] "Generic (PLEG): container finished" podID="10a4777a-2390-401b-86b0-87d298e9f883" containerID="126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242" exitCode=0 Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.625870 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" event={"ID":"10a4777a-2390-401b-86b0-87d298e9f883","Type":"ContainerDied","Data":"4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131"} Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.625938 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.625965 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" event={"ID":"10a4777a-2390-401b-86b0-87d298e9f883","Type":"ContainerDied","Data":"126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242"} Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.626006 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-65qcw" event={"ID":"10a4777a-2390-401b-86b0-87d298e9f883","Type":"ContainerDied","Data":"e8a5805694369d8a201e12c8c17cc4f11b2d8cbcb971525d54ebf2a7332be74f"} Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.626030 4874 scope.go:117] "RemoveContainer" containerID="7d87d8f21c5db57994f64639bb98ecc9b69b1dd0a306578e07c2628d6c636e3b" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.630390 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2vkxj_8aedd049-0029-44f7-869f-4a3ccdce8413/kube-multus/2.log" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.632850 4874 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" event={"ID":"248f2524-d072-4a9c-8521-17721d2c02a7","Type":"ContainerStarted","Data":"4f85a9f6ab41887467b2a00249f80b998fac2810654ebbc085aee7e6b668c215"} Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.659080 4874 scope.go:117] "RemoveContainer" containerID="47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.677426 4874 scope.go:117] "RemoveContainer" containerID="ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.691429 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-65qcw"] Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.703348 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-65qcw"] Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.709592 4874 scope.go:117] "RemoveContainer" containerID="3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.721897 4874 scope.go:117] "RemoveContainer" containerID="4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.760957 4874 scope.go:117] "RemoveContainer" containerID="126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.784160 4874 scope.go:117] "RemoveContainer" containerID="f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.856149 4874 scope.go:117] "RemoveContainer" containerID="cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.875433 4874 scope.go:117] "RemoveContainer" containerID="5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e" Feb 17 16:12:53 crc 
kubenswrapper[4874]: I0217 16:12:53.891472 4874 scope.go:117] "RemoveContainer" containerID="7d87d8f21c5db57994f64639bb98ecc9b69b1dd0a306578e07c2628d6c636e3b" Feb 17 16:12:53 crc kubenswrapper[4874]: E0217 16:12:53.891879 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d87d8f21c5db57994f64639bb98ecc9b69b1dd0a306578e07c2628d6c636e3b\": container with ID starting with 7d87d8f21c5db57994f64639bb98ecc9b69b1dd0a306578e07c2628d6c636e3b not found: ID does not exist" containerID="7d87d8f21c5db57994f64639bb98ecc9b69b1dd0a306578e07c2628d6c636e3b" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.891912 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d87d8f21c5db57994f64639bb98ecc9b69b1dd0a306578e07c2628d6c636e3b"} err="failed to get container status \"7d87d8f21c5db57994f64639bb98ecc9b69b1dd0a306578e07c2628d6c636e3b\": rpc error: code = NotFound desc = could not find container \"7d87d8f21c5db57994f64639bb98ecc9b69b1dd0a306578e07c2628d6c636e3b\": container with ID starting with 7d87d8f21c5db57994f64639bb98ecc9b69b1dd0a306578e07c2628d6c636e3b not found: ID does not exist" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.891947 4874 scope.go:117] "RemoveContainer" containerID="47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433" Feb 17 16:12:53 crc kubenswrapper[4874]: E0217 16:12:53.892250 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433\": container with ID starting with 47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433 not found: ID does not exist" containerID="47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.892272 4874 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433"} err="failed to get container status \"47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433\": rpc error: code = NotFound desc = could not find container \"47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433\": container with ID starting with 47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433 not found: ID does not exist" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.892284 4874 scope.go:117] "RemoveContainer" containerID="ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07" Feb 17 16:12:53 crc kubenswrapper[4874]: E0217 16:12:53.892749 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07\": container with ID starting with ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07 not found: ID does not exist" containerID="ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.892770 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07"} err="failed to get container status \"ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07\": rpc error: code = NotFound desc = could not find container \"ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07\": container with ID starting with ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07 not found: ID does not exist" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.892781 4874 scope.go:117] "RemoveContainer" containerID="3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f" Feb 17 16:12:53 crc kubenswrapper[4874]: E0217 16:12:53.892998 4874 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f\": container with ID starting with 3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f not found: ID does not exist" containerID="3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.893046 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f"} err="failed to get container status \"3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f\": rpc error: code = NotFound desc = could not find container \"3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f\": container with ID starting with 3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f not found: ID does not exist" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.893099 4874 scope.go:117] "RemoveContainer" containerID="4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131" Feb 17 16:12:53 crc kubenswrapper[4874]: E0217 16:12:53.893406 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131\": container with ID starting with 4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131 not found: ID does not exist" containerID="4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.893451 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131"} err="failed to get container status \"4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131\": rpc error: code = NotFound desc = could not find container 
\"4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131\": container with ID starting with 4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131 not found: ID does not exist" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.893487 4874 scope.go:117] "RemoveContainer" containerID="126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242" Feb 17 16:12:53 crc kubenswrapper[4874]: E0217 16:12:53.893867 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242\": container with ID starting with 126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242 not found: ID does not exist" containerID="126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.893893 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242"} err="failed to get container status \"126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242\": rpc error: code = NotFound desc = could not find container \"126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242\": container with ID starting with 126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242 not found: ID does not exist" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.893907 4874 scope.go:117] "RemoveContainer" containerID="f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1" Feb 17 16:12:53 crc kubenswrapper[4874]: E0217 16:12:53.894182 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1\": container with ID starting with f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1 not found: ID does not exist" 
containerID="f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.894223 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1"} err="failed to get container status \"f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1\": rpc error: code = NotFound desc = could not find container \"f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1\": container with ID starting with f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1 not found: ID does not exist" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.894250 4874 scope.go:117] "RemoveContainer" containerID="cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0" Feb 17 16:12:53 crc kubenswrapper[4874]: E0217 16:12:53.894567 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0\": container with ID starting with cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0 not found: ID does not exist" containerID="cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.894595 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0"} err="failed to get container status \"cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0\": rpc error: code = NotFound desc = could not find container \"cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0\": container with ID starting with cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0 not found: ID does not exist" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.894612 4874 scope.go:117] 
"RemoveContainer" containerID="5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e" Feb 17 16:12:53 crc kubenswrapper[4874]: E0217 16:12:53.894911 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\": container with ID starting with 5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e not found: ID does not exist" containerID="5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.894936 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e"} err="failed to get container status \"5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\": rpc error: code = NotFound desc = could not find container \"5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\": container with ID starting with 5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e not found: ID does not exist" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.894954 4874 scope.go:117] "RemoveContainer" containerID="7d87d8f21c5db57994f64639bb98ecc9b69b1dd0a306578e07c2628d6c636e3b" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.895300 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d87d8f21c5db57994f64639bb98ecc9b69b1dd0a306578e07c2628d6c636e3b"} err="failed to get container status \"7d87d8f21c5db57994f64639bb98ecc9b69b1dd0a306578e07c2628d6c636e3b\": rpc error: code = NotFound desc = could not find container \"7d87d8f21c5db57994f64639bb98ecc9b69b1dd0a306578e07c2628d6c636e3b\": container with ID starting with 7d87d8f21c5db57994f64639bb98ecc9b69b1dd0a306578e07c2628d6c636e3b not found: ID does not exist" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.895331 4874 
scope.go:117] "RemoveContainer" containerID="47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.895684 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433"} err="failed to get container status \"47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433\": rpc error: code = NotFound desc = could not find container \"47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433\": container with ID starting with 47ff1585af1697702c2ea7cd78735e68623259853dccf8940cfbd3f628c3a433 not found: ID does not exist" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.895738 4874 scope.go:117] "RemoveContainer" containerID="ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.896045 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07"} err="failed to get container status \"ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07\": rpc error: code = NotFound desc = could not find container \"ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07\": container with ID starting with ce9fc1209198f6b6b7328028fa4841fd16a3bd9cb46589a0ee5764c981673a07 not found: ID does not exist" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.896073 4874 scope.go:117] "RemoveContainer" containerID="3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.896380 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f"} err="failed to get container status \"3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f\": rpc 
error: code = NotFound desc = could not find container \"3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f\": container with ID starting with 3bac252843880ff674392583830944670a072eda2998096d0053729a3dc6bd9f not found: ID does not exist" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.896407 4874 scope.go:117] "RemoveContainer" containerID="4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.896640 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131"} err="failed to get container status \"4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131\": rpc error: code = NotFound desc = could not find container \"4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131\": container with ID starting with 4baf8b6ccaee847124b5372e40980ebfcbf8ee974448926ea31174254088d131 not found: ID does not exist" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.896667 4874 scope.go:117] "RemoveContainer" containerID="126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.897089 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242"} err="failed to get container status \"126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242\": rpc error: code = NotFound desc = could not find container \"126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242\": container with ID starting with 126ace5a74683bb4f94e2fd1bc14df8ffdf4de791f2f88bbc277d475aecca242 not found: ID does not exist" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.897112 4874 scope.go:117] "RemoveContainer" containerID="f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1" Feb 17 16:12:53 crc 
kubenswrapper[4874]: I0217 16:12:53.897362 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1"} err="failed to get container status \"f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1\": rpc error: code = NotFound desc = could not find container \"f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1\": container with ID starting with f65e3e4fae93dfec1824f4502047b4dab21d4105f67afae0c5a39d912d48bad1 not found: ID does not exist" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.897390 4874 scope.go:117] "RemoveContainer" containerID="cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.897642 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0"} err="failed to get container status \"cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0\": rpc error: code = NotFound desc = could not find container \"cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0\": container with ID starting with cb3ff94db8c49b2e47acef5ca6006987810c5db5217ed4694ede574000e94ea0 not found: ID does not exist" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.897681 4874 scope.go:117] "RemoveContainer" containerID="5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e" Feb 17 16:12:53 crc kubenswrapper[4874]: I0217 16:12:53.898324 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e"} err="failed to get container status \"5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\": rpc error: code = NotFound desc = could not find container \"5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e\": container 
with ID starting with 5ffcf98e7fd9311657906ee016a26d26e0467ec54aa363b309ef0c113a5edd3e not found: ID does not exist" Feb 17 16:12:54 crc kubenswrapper[4874]: I0217 16:12:54.470903 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10a4777a-2390-401b-86b0-87d298e9f883" path="/var/lib/kubelet/pods/10a4777a-2390-401b-86b0-87d298e9f883/volumes" Feb 17 16:12:54 crc kubenswrapper[4874]: I0217 16:12:54.642422 4874 generic.go:334] "Generic (PLEG): container finished" podID="248f2524-d072-4a9c-8521-17721d2c02a7" containerID="84aaa75acc92acfffa04b986662a8fa4c491ee8e548bb7247f2351dfeef04d2c" exitCode=0 Feb 17 16:12:54 crc kubenswrapper[4874]: I0217 16:12:54.642535 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" event={"ID":"248f2524-d072-4a9c-8521-17721d2c02a7","Type":"ContainerDied","Data":"84aaa75acc92acfffa04b986662a8fa4c491ee8e548bb7247f2351dfeef04d2c"} Feb 17 16:12:55 crc kubenswrapper[4874]: I0217 16:12:55.656658 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" event={"ID":"248f2524-d072-4a9c-8521-17721d2c02a7","Type":"ContainerStarted","Data":"b58df6ece47cb015036c3297c2574019403841d85eb7ebe54815846521c280bd"} Feb 17 16:12:55 crc kubenswrapper[4874]: I0217 16:12:55.656713 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" event={"ID":"248f2524-d072-4a9c-8521-17721d2c02a7","Type":"ContainerStarted","Data":"c8d54d1658e711d196c6633f68f1697c126d5cf9b35e86614341b5f0a025cb25"} Feb 17 16:12:55 crc kubenswrapper[4874]: I0217 16:12:55.656727 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" event={"ID":"248f2524-d072-4a9c-8521-17721d2c02a7","Type":"ContainerStarted","Data":"00ab9c2cd6b582a90e6b6459c1bba59167593759d53b425fc62e6990bbf3e9ea"} Feb 17 16:12:55 crc kubenswrapper[4874]: I0217 16:12:55.656739 4874 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" event={"ID":"248f2524-d072-4a9c-8521-17721d2c02a7","Type":"ContainerStarted","Data":"f22330ce76970fd3ff78f25fed770ba90a03e285c6a0e505ba8fa1b89f7d8d78"} Feb 17 16:12:55 crc kubenswrapper[4874]: I0217 16:12:55.656755 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" event={"ID":"248f2524-d072-4a9c-8521-17721d2c02a7","Type":"ContainerStarted","Data":"a1b1906cb14d41464fcdb81779871fe8feb520b4f457537b177b72d4e14c165a"} Feb 17 16:12:55 crc kubenswrapper[4874]: I0217 16:12:55.656766 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" event={"ID":"248f2524-d072-4a9c-8521-17721d2c02a7","Type":"ContainerStarted","Data":"9eae2c3d96fc95e2794f9c5e60615d7443e81fb4198033dd20b45331bee74cd5"} Feb 17 16:12:57 crc kubenswrapper[4874]: I0217 16:12:57.725151 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:12:57 crc kubenswrapper[4874]: I0217 16:12:57.725703 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:12:58 crc kubenswrapper[4874]: I0217 16:12:58.676566 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" event={"ID":"248f2524-d072-4a9c-8521-17721d2c02a7","Type":"ContainerStarted","Data":"8ab389f38e038edfad937b06b2eeba8e524cb76dc6585e660036775c1c9721cc"} Feb 17 16:13:00 crc kubenswrapper[4874]: I0217 16:13:00.690339 4874 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" event={"ID":"248f2524-d072-4a9c-8521-17721d2c02a7","Type":"ContainerStarted","Data":"b05a93d8d8f0a1a0bbc0776446148a9bc5261a75f0b1aa10013cddff5a05ebce"} Feb 17 16:13:00 crc kubenswrapper[4874]: I0217 16:13:00.691018 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:13:00 crc kubenswrapper[4874]: I0217 16:13:00.691035 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:13:00 crc kubenswrapper[4874]: I0217 16:13:00.713812 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:13:00 crc kubenswrapper[4874]: I0217 16:13:00.724059 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" podStartSLOduration=7.72403978 podStartE2EDuration="7.72403978s" podCreationTimestamp="2026-02-17 16:12:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:13:00.72240496 +0000 UTC m=+591.016793531" watchObservedRunningTime="2026-02-17 16:13:00.72403978 +0000 UTC m=+591.018428351" Feb 17 16:13:01 crc kubenswrapper[4874]: I0217 16:13:01.695909 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:13:01 crc kubenswrapper[4874]: I0217 16:13:01.725761 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.355486 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-fjkwc"] Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.357310 4874 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fjkwc" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.359791 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.359812 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-stz4v" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.360193 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.402411 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42jdc\" (UniqueName: \"kubernetes.io/projected/cf7f0be2-b792-4603-a97c-53a2f335acee-kube-api-access-42jdc\") pod \"obo-prometheus-operator-68bc856cb9-fjkwc\" (UID: \"cf7f0be2-b792-4603-a97c-53a2f335acee\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fjkwc" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.410217 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-fjkwc"] Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.415289 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg"] Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.416104 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.417741 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-hjlcf" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.417761 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.423182 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-jld5z"] Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.423964 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-jld5z" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.433200 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg"] Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.448182 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-jld5z"] Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.503743 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a893bee-81e5-480e-8414-43a823e768fd-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-658d76db8d-jld5z\" (UID: \"7a893bee-81e5-480e-8414-43a823e768fd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-jld5z" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.503812 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5178b00a-11f3-48c6-96be-459a7b26be82-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg\" (UID: \"5178b00a-11f3-48c6-96be-459a7b26be82\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.503984 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42jdc\" (UniqueName: \"kubernetes.io/projected/cf7f0be2-b792-4603-a97c-53a2f335acee-kube-api-access-42jdc\") pod \"obo-prometheus-operator-68bc856cb9-fjkwc\" (UID: \"cf7f0be2-b792-4603-a97c-53a2f335acee\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fjkwc" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.504145 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7a893bee-81e5-480e-8414-43a823e768fd-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-658d76db8d-jld5z\" (UID: \"7a893bee-81e5-480e-8414-43a823e768fd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-jld5z" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.504292 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5178b00a-11f3-48c6-96be-459a7b26be82-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg\" (UID: \"5178b00a-11f3-48c6-96be-459a7b26be82\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.521962 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42jdc\" (UniqueName: \"kubernetes.io/projected/cf7f0be2-b792-4603-a97c-53a2f335acee-kube-api-access-42jdc\") pod 
\"obo-prometheus-operator-68bc856cb9-fjkwc\" (UID: \"cf7f0be2-b792-4603-a97c-53a2f335acee\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fjkwc" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.591660 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-2b8tl"] Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.592584 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-2b8tl" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.594504 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-4s84d" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.595949 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.605374 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7a893bee-81e5-480e-8414-43a823e768fd-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-658d76db8d-jld5z\" (UID: \"7a893bee-81e5-480e-8414-43a823e768fd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-jld5z" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.605783 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/660c5439-82eb-4696-9df3-7968e680b5a9-observability-operator-tls\") pod \"observability-operator-59bdc8b94-2b8tl\" (UID: \"660c5439-82eb-4696-9df3-7968e680b5a9\") " pod="openshift-operators/observability-operator-59bdc8b94-2b8tl" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.605839 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5178b00a-11f3-48c6-96be-459a7b26be82-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg\" (UID: \"5178b00a-11f3-48c6-96be-459a7b26be82\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.605891 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a893bee-81e5-480e-8414-43a823e768fd-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-658d76db8d-jld5z\" (UID: \"7a893bee-81e5-480e-8414-43a823e768fd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-jld5z" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.605948 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5178b00a-11f3-48c6-96be-459a7b26be82-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg\" (UID: \"5178b00a-11f3-48c6-96be-459a7b26be82\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.605977 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv6tm\" (UniqueName: \"kubernetes.io/projected/660c5439-82eb-4696-9df3-7968e680b5a9-kube-api-access-gv6tm\") pod \"observability-operator-59bdc8b94-2b8tl\" (UID: \"660c5439-82eb-4696-9df3-7968e680b5a9\") " pod="openshift-operators/observability-operator-59bdc8b94-2b8tl" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.609781 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5178b00a-11f3-48c6-96be-459a7b26be82-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg\" (UID: 
\"5178b00a-11f3-48c6-96be-459a7b26be82\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.609814 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5178b00a-11f3-48c6-96be-459a7b26be82-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg\" (UID: \"5178b00a-11f3-48c6-96be-459a7b26be82\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.614610 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a893bee-81e5-480e-8414-43a823e768fd-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-658d76db8d-jld5z\" (UID: \"7a893bee-81e5-480e-8414-43a823e768fd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-jld5z" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.614754 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7a893bee-81e5-480e-8414-43a823e768fd-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-658d76db8d-jld5z\" (UID: \"7a893bee-81e5-480e-8414-43a823e768fd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-jld5z" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.650092 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-2b8tl"] Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.674856 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fjkwc" Feb 17 16:13:03 crc kubenswrapper[4874]: E0217 16:13:03.703542 4874 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fjkwc_openshift-operators_cf7f0be2-b792-4603-a97c-53a2f335acee_0(3357f5c09a74207797dc57260bd8418f455f7c0f47e4ef07c4de90792348e842): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:13:03 crc kubenswrapper[4874]: E0217 16:13:03.703612 4874 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fjkwc_openshift-operators_cf7f0be2-b792-4603-a97c-53a2f335acee_0(3357f5c09a74207797dc57260bd8418f455f7c0f47e4ef07c4de90792348e842): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fjkwc" Feb 17 16:13:03 crc kubenswrapper[4874]: E0217 16:13:03.703647 4874 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fjkwc_openshift-operators_cf7f0be2-b792-4603-a97c-53a2f335acee_0(3357f5c09a74207797dc57260bd8418f455f7c0f47e4ef07c4de90792348e842): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fjkwc" Feb 17 16:13:03 crc kubenswrapper[4874]: E0217 16:13:03.703698 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-fjkwc_openshift-operators(cf7f0be2-b792-4603-a97c-53a2f335acee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-fjkwc_openshift-operators(cf7f0be2-b792-4603-a97c-53a2f335acee)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fjkwc_openshift-operators_cf7f0be2-b792-4603-a97c-53a2f335acee_0(3357f5c09a74207797dc57260bd8418f455f7c0f47e4ef07c4de90792348e842): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fjkwc" podUID="cf7f0be2-b792-4603-a97c-53a2f335acee" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.706852 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/660c5439-82eb-4696-9df3-7968e680b5a9-observability-operator-tls\") pod \"observability-operator-59bdc8b94-2b8tl\" (UID: \"660c5439-82eb-4696-9df3-7968e680b5a9\") " pod="openshift-operators/observability-operator-59bdc8b94-2b8tl" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.706937 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv6tm\" (UniqueName: \"kubernetes.io/projected/660c5439-82eb-4696-9df3-7968e680b5a9-kube-api-access-gv6tm\") pod \"observability-operator-59bdc8b94-2b8tl\" (UID: \"660c5439-82eb-4696-9df3-7968e680b5a9\") " pod="openshift-operators/observability-operator-59bdc8b94-2b8tl" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.712716 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/660c5439-82eb-4696-9df3-7968e680b5a9-observability-operator-tls\") pod \"observability-operator-59bdc8b94-2b8tl\" (UID: \"660c5439-82eb-4696-9df3-7968e680b5a9\") " pod="openshift-operators/observability-operator-59bdc8b94-2b8tl" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.728649 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv6tm\" (UniqueName: \"kubernetes.io/projected/660c5439-82eb-4696-9df3-7968e680b5a9-kube-api-access-gv6tm\") pod \"observability-operator-59bdc8b94-2b8tl\" (UID: \"660c5439-82eb-4696-9df3-7968e680b5a9\") " pod="openshift-operators/observability-operator-59bdc8b94-2b8tl" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.729465 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.749557 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-jld5z" Feb 17 16:13:03 crc kubenswrapper[4874]: E0217 16:13:03.783321 4874 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg_openshift-operators_5178b00a-11f3-48c6-96be-459a7b26be82_0(a4fccd5b1180cef2c1057d09ba94770fc782b1d05d2323203ae6ea0de8144b09): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 17 16:13:03 crc kubenswrapper[4874]: E0217 16:13:03.783418 4874 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg_openshift-operators_5178b00a-11f3-48c6-96be-459a7b26be82_0(a4fccd5b1180cef2c1057d09ba94770fc782b1d05d2323203ae6ea0de8144b09): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg" Feb 17 16:13:03 crc kubenswrapper[4874]: E0217 16:13:03.783452 4874 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg_openshift-operators_5178b00a-11f3-48c6-96be-459a7b26be82_0(a4fccd5b1180cef2c1057d09ba94770fc782b1d05d2323203ae6ea0de8144b09): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg" Feb 17 16:13:03 crc kubenswrapper[4874]: E0217 16:13:03.783520 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg_openshift-operators(5178b00a-11f3-48c6-96be-459a7b26be82)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg_openshift-operators(5178b00a-11f3-48c6-96be-459a7b26be82)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg_openshift-operators_5178b00a-11f3-48c6-96be-459a7b26be82_0(a4fccd5b1180cef2c1057d09ba94770fc782b1d05d2323203ae6ea0de8144b09): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg" podUID="5178b00a-11f3-48c6-96be-459a7b26be82" Feb 17 16:13:03 crc kubenswrapper[4874]: E0217 16:13:03.786156 4874 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-658d76db8d-jld5z_openshift-operators_7a893bee-81e5-480e-8414-43a823e768fd_0(a2821bd2613d9203e4f1f928e68f8cce88f7f4eb306f932cb08a5861908df9e5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:13:03 crc kubenswrapper[4874]: E0217 16:13:03.786216 4874 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-658d76db8d-jld5z_openshift-operators_7a893bee-81e5-480e-8414-43a823e768fd_0(a2821bd2613d9203e4f1f928e68f8cce88f7f4eb306f932cb08a5861908df9e5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-jld5z" Feb 17 16:13:03 crc kubenswrapper[4874]: E0217 16:13:03.786247 4874 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-658d76db8d-jld5z_openshift-operators_7a893bee-81e5-480e-8414-43a823e768fd_0(a2821bd2613d9203e4f1f928e68f8cce88f7f4eb306f932cb08a5861908df9e5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-jld5z" Feb 17 16:13:03 crc kubenswrapper[4874]: E0217 16:13:03.786302 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-658d76db8d-jld5z_openshift-operators(7a893bee-81e5-480e-8414-43a823e768fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-658d76db8d-jld5z_openshift-operators(7a893bee-81e5-480e-8414-43a823e768fd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-658d76db8d-jld5z_openshift-operators_7a893bee-81e5-480e-8414-43a823e768fd_0(a2821bd2613d9203e4f1f928e68f8cce88f7f4eb306f932cb08a5861908df9e5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-jld5z" podUID="7a893bee-81e5-480e-8414-43a823e768fd" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.797369 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-b988z"] Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.798428 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-b988z" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.800811 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-dmrk4" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.808196 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5cwv\" (UniqueName: \"kubernetes.io/projected/a3d284b8-a322-4ce7-9a33-c82f3adafeb1-kube-api-access-d5cwv\") pod \"perses-operator-5bf474d74f-b988z\" (UID: \"a3d284b8-a322-4ce7-9a33-c82f3adafeb1\") " pod="openshift-operators/perses-operator-5bf474d74f-b988z" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.808302 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a3d284b8-a322-4ce7-9a33-c82f3adafeb1-openshift-service-ca\") pod \"perses-operator-5bf474d74f-b988z\" (UID: \"a3d284b8-a322-4ce7-9a33-c82f3adafeb1\") " pod="openshift-operators/perses-operator-5bf474d74f-b988z" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.818568 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-b988z"] Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.908159 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-2b8tl" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.909479 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a3d284b8-a322-4ce7-9a33-c82f3adafeb1-openshift-service-ca\") pod \"perses-operator-5bf474d74f-b988z\" (UID: \"a3d284b8-a322-4ce7-9a33-c82f3adafeb1\") " pod="openshift-operators/perses-operator-5bf474d74f-b988z" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.909589 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5cwv\" (UniqueName: \"kubernetes.io/projected/a3d284b8-a322-4ce7-9a33-c82f3adafeb1-kube-api-access-d5cwv\") pod \"perses-operator-5bf474d74f-b988z\" (UID: \"a3d284b8-a322-4ce7-9a33-c82f3adafeb1\") " pod="openshift-operators/perses-operator-5bf474d74f-b988z" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.910316 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a3d284b8-a322-4ce7-9a33-c82f3adafeb1-openshift-service-ca\") pod \"perses-operator-5bf474d74f-b988z\" (UID: \"a3d284b8-a322-4ce7-9a33-c82f3adafeb1\") " pod="openshift-operators/perses-operator-5bf474d74f-b988z" Feb 17 16:13:03 crc kubenswrapper[4874]: I0217 16:13:03.924896 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5cwv\" (UniqueName: \"kubernetes.io/projected/a3d284b8-a322-4ce7-9a33-c82f3adafeb1-kube-api-access-d5cwv\") pod \"perses-operator-5bf474d74f-b988z\" (UID: \"a3d284b8-a322-4ce7-9a33-c82f3adafeb1\") " pod="openshift-operators/perses-operator-5bf474d74f-b988z" Feb 17 16:13:03 crc kubenswrapper[4874]: E0217 16:13:03.930946 4874 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_observability-operator-59bdc8b94-2b8tl_openshift-operators_660c5439-82eb-4696-9df3-7968e680b5a9_0(603b8ee98e415792a3e37755328bbd0a0c18e28c99f878fe76d5f8334003b89a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:13:03 crc kubenswrapper[4874]: E0217 16:13:03.931030 4874 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-2b8tl_openshift-operators_660c5439-82eb-4696-9df3-7968e680b5a9_0(603b8ee98e415792a3e37755328bbd0a0c18e28c99f878fe76d5f8334003b89a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-2b8tl" Feb 17 16:13:03 crc kubenswrapper[4874]: E0217 16:13:03.931072 4874 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-2b8tl_openshift-operators_660c5439-82eb-4696-9df3-7968e680b5a9_0(603b8ee98e415792a3e37755328bbd0a0c18e28c99f878fe76d5f8334003b89a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-2b8tl" Feb 17 16:13:03 crc kubenswrapper[4874]: E0217 16:13:03.931170 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-2b8tl_openshift-operators(660c5439-82eb-4696-9df3-7968e680b5a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-2b8tl_openshift-operators(660c5439-82eb-4696-9df3-7968e680b5a9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-2b8tl_openshift-operators_660c5439-82eb-4696-9df3-7968e680b5a9_0(603b8ee98e415792a3e37755328bbd0a0c18e28c99f878fe76d5f8334003b89a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-2b8tl" podUID="660c5439-82eb-4696-9df3-7968e680b5a9" Feb 17 16:13:04 crc kubenswrapper[4874]: I0217 16:13:04.113145 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-b988z" Feb 17 16:13:04 crc kubenswrapper[4874]: E0217 16:13:04.138849 4874 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-b988z_openshift-operators_a3d284b8-a322-4ce7-9a33-c82f3adafeb1_0(215fd8a94653654d011734c7b2dadf9685ec840e98e09b2af3a3f6e509f09c67): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 17 16:13:04 crc kubenswrapper[4874]: E0217 16:13:04.138955 4874 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-b988z_openshift-operators_a3d284b8-a322-4ce7-9a33-c82f3adafeb1_0(215fd8a94653654d011734c7b2dadf9685ec840e98e09b2af3a3f6e509f09c67): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-b988z" Feb 17 16:13:04 crc kubenswrapper[4874]: E0217 16:13:04.138993 4874 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-b988z_openshift-operators_a3d284b8-a322-4ce7-9a33-c82f3adafeb1_0(215fd8a94653654d011734c7b2dadf9685ec840e98e09b2af3a3f6e509f09c67): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-b988z" Feb 17 16:13:04 crc kubenswrapper[4874]: E0217 16:13:04.139068 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-b988z_openshift-operators(a3d284b8-a322-4ce7-9a33-c82f3adafeb1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-b988z_openshift-operators(a3d284b8-a322-4ce7-9a33-c82f3adafeb1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-b988z_openshift-operators_a3d284b8-a322-4ce7-9a33-c82f3adafeb1_0(215fd8a94653654d011734c7b2dadf9685ec840e98e09b2af3a3f6e509f09c67): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-b988z" podUID="a3d284b8-a322-4ce7-9a33-c82f3adafeb1" Feb 17 16:13:04 crc kubenswrapper[4874]: I0217 16:13:04.456869 4874 scope.go:117] "RemoveContainer" containerID="c3c53358c743c2d2894de0ef19b65fc8e4216837246a7006087759154d090046" Feb 17 16:13:04 crc kubenswrapper[4874]: E0217 16:13:04.457152 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-2vkxj_openshift-multus(8aedd049-0029-44f7-869f-4a3ccdce8413)\"" pod="openshift-multus/multus-2vkxj" podUID="8aedd049-0029-44f7-869f-4a3ccdce8413" Feb 17 16:13:04 crc kubenswrapper[4874]: I0217 16:13:04.714333 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-jld5z" Feb 17 16:13:04 crc kubenswrapper[4874]: I0217 16:13:04.714384 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fjkwc" Feb 17 16:13:04 crc kubenswrapper[4874]: I0217 16:13:04.714395 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-2b8tl" Feb 17 16:13:04 crc kubenswrapper[4874]: I0217 16:13:04.714508 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-b988z" Feb 17 16:13:04 crc kubenswrapper[4874]: I0217 16:13:04.714611 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg" Feb 17 16:13:04 crc kubenswrapper[4874]: I0217 16:13:04.715591 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-jld5z" Feb 17 16:13:04 crc kubenswrapper[4874]: I0217 16:13:04.716193 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fjkwc" Feb 17 16:13:04 crc kubenswrapper[4874]: I0217 16:13:04.716303 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg" Feb 17 16:13:04 crc kubenswrapper[4874]: I0217 16:13:04.716585 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-2b8tl" Feb 17 16:13:04 crc kubenswrapper[4874]: I0217 16:13:04.716603 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-b988z" Feb 17 16:13:04 crc kubenswrapper[4874]: E0217 16:13:04.801530 4874 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-658d76db8d-jld5z_openshift-operators_7a893bee-81e5-480e-8414-43a823e768fd_0(5cfa52044cf776a3139460a2580e4febf21eb7617ddb67bcf3202ac3e42f3890): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:13:04 crc kubenswrapper[4874]: E0217 16:13:04.801910 4874 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-658d76db8d-jld5z_openshift-operators_7a893bee-81e5-480e-8414-43a823e768fd_0(5cfa52044cf776a3139460a2580e4febf21eb7617ddb67bcf3202ac3e42f3890): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-jld5z" Feb 17 16:13:04 crc kubenswrapper[4874]: E0217 16:13:04.801940 4874 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-658d76db8d-jld5z_openshift-operators_7a893bee-81e5-480e-8414-43a823e768fd_0(5cfa52044cf776a3139460a2580e4febf21eb7617ddb67bcf3202ac3e42f3890): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-jld5z" Feb 17 16:13:04 crc kubenswrapper[4874]: E0217 16:13:04.802007 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-658d76db8d-jld5z_openshift-operators(7a893bee-81e5-480e-8414-43a823e768fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-658d76db8d-jld5z_openshift-operators(7a893bee-81e5-480e-8414-43a823e768fd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-658d76db8d-jld5z_openshift-operators_7a893bee-81e5-480e-8414-43a823e768fd_0(5cfa52044cf776a3139460a2580e4febf21eb7617ddb67bcf3202ac3e42f3890): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-jld5z" podUID="7a893bee-81e5-480e-8414-43a823e768fd" Feb 17 16:13:04 crc kubenswrapper[4874]: E0217 16:13:04.824315 4874 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fjkwc_openshift-operators_cf7f0be2-b792-4603-a97c-53a2f335acee_0(2f2d02b4a52713febad06a0c7d160444cfa90fe8bdbd097f06a975d0df00a863): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:13:04 crc kubenswrapper[4874]: E0217 16:13:04.824390 4874 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fjkwc_openshift-operators_cf7f0be2-b792-4603-a97c-53a2f335acee_0(2f2d02b4a52713febad06a0c7d160444cfa90fe8bdbd097f06a975d0df00a863): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fjkwc" Feb 17 16:13:04 crc kubenswrapper[4874]: E0217 16:13:04.824422 4874 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fjkwc_openshift-operators_cf7f0be2-b792-4603-a97c-53a2f335acee_0(2f2d02b4a52713febad06a0c7d160444cfa90fe8bdbd097f06a975d0df00a863): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fjkwc" Feb 17 16:13:04 crc kubenswrapper[4874]: E0217 16:13:04.824469 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-fjkwc_openshift-operators(cf7f0be2-b792-4603-a97c-53a2f335acee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-fjkwc_openshift-operators(cf7f0be2-b792-4603-a97c-53a2f335acee)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fjkwc_openshift-operators_cf7f0be2-b792-4603-a97c-53a2f335acee_0(2f2d02b4a52713febad06a0c7d160444cfa90fe8bdbd097f06a975d0df00a863): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fjkwc" podUID="cf7f0be2-b792-4603-a97c-53a2f335acee" Feb 17 16:13:04 crc kubenswrapper[4874]: E0217 16:13:04.838233 4874 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg_openshift-operators_5178b00a-11f3-48c6-96be-459a7b26be82_0(8ffd8184211a1d536ea8d1bf1e8fb9c1836c0f2f40fd7e7b23c60a664268807a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:13:04 crc kubenswrapper[4874]: E0217 16:13:04.838294 4874 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg_openshift-operators_5178b00a-11f3-48c6-96be-459a7b26be82_0(8ffd8184211a1d536ea8d1bf1e8fb9c1836c0f2f40fd7e7b23c60a664268807a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg" Feb 17 16:13:04 crc kubenswrapper[4874]: E0217 16:13:04.838326 4874 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg_openshift-operators_5178b00a-11f3-48c6-96be-459a7b26be82_0(8ffd8184211a1d536ea8d1bf1e8fb9c1836c0f2f40fd7e7b23c60a664268807a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg" Feb 17 16:13:04 crc kubenswrapper[4874]: E0217 16:13:04.838383 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg_openshift-operators(5178b00a-11f3-48c6-96be-459a7b26be82)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg_openshift-operators(5178b00a-11f3-48c6-96be-459a7b26be82)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg_openshift-operators_5178b00a-11f3-48c6-96be-459a7b26be82_0(8ffd8184211a1d536ea8d1bf1e8fb9c1836c0f2f40fd7e7b23c60a664268807a): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg" podUID="5178b00a-11f3-48c6-96be-459a7b26be82" Feb 17 16:13:04 crc kubenswrapper[4874]: E0217 16:13:04.849159 4874 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-2b8tl_openshift-operators_660c5439-82eb-4696-9df3-7968e680b5a9_0(03af46d7a97c8c2c5ea81d4609aca3ce453f1156e02bb43445997728aa087ee0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:13:04 crc kubenswrapper[4874]: E0217 16:13:04.849234 4874 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-2b8tl_openshift-operators_660c5439-82eb-4696-9df3-7968e680b5a9_0(03af46d7a97c8c2c5ea81d4609aca3ce453f1156e02bb43445997728aa087ee0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-2b8tl" Feb 17 16:13:04 crc kubenswrapper[4874]: E0217 16:13:04.849258 4874 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-2b8tl_openshift-operators_660c5439-82eb-4696-9df3-7968e680b5a9_0(03af46d7a97c8c2c5ea81d4609aca3ce453f1156e02bb43445997728aa087ee0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-2b8tl" Feb 17 16:13:04 crc kubenswrapper[4874]: E0217 16:13:04.849300 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-2b8tl_openshift-operators(660c5439-82eb-4696-9df3-7968e680b5a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-2b8tl_openshift-operators(660c5439-82eb-4696-9df3-7968e680b5a9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-2b8tl_openshift-operators_660c5439-82eb-4696-9df3-7968e680b5a9_0(03af46d7a97c8c2c5ea81d4609aca3ce453f1156e02bb43445997728aa087ee0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-2b8tl" podUID="660c5439-82eb-4696-9df3-7968e680b5a9" Feb 17 16:13:04 crc kubenswrapper[4874]: E0217 16:13:04.853133 4874 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-b988z_openshift-operators_a3d284b8-a322-4ce7-9a33-c82f3adafeb1_0(8c9c1e619e3a7477955d1bf090e826d61c4916f3091b7f2a975f4b8d45f2ffdb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:13:04 crc kubenswrapper[4874]: E0217 16:13:04.853169 4874 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-b988z_openshift-operators_a3d284b8-a322-4ce7-9a33-c82f3adafeb1_0(8c9c1e619e3a7477955d1bf090e826d61c4916f3091b7f2a975f4b8d45f2ffdb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-b988z" Feb 17 16:13:04 crc kubenswrapper[4874]: E0217 16:13:04.853187 4874 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-b988z_openshift-operators_a3d284b8-a322-4ce7-9a33-c82f3adafeb1_0(8c9c1e619e3a7477955d1bf090e826d61c4916f3091b7f2a975f4b8d45f2ffdb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-b988z" Feb 17 16:13:04 crc kubenswrapper[4874]: E0217 16:13:04.853216 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-b988z_openshift-operators(a3d284b8-a322-4ce7-9a33-c82f3adafeb1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-b988z_openshift-operators(a3d284b8-a322-4ce7-9a33-c82f3adafeb1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-b988z_openshift-operators_a3d284b8-a322-4ce7-9a33-c82f3adafeb1_0(8c9c1e619e3a7477955d1bf090e826d61c4916f3091b7f2a975f4b8d45f2ffdb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-b988z" podUID="a3d284b8-a322-4ce7-9a33-c82f3adafeb1" Feb 17 16:13:15 crc kubenswrapper[4874]: I0217 16:13:15.456517 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-b988z" Feb 17 16:13:15 crc kubenswrapper[4874]: I0217 16:13:15.457566 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-b988z" Feb 17 16:13:15 crc kubenswrapper[4874]: E0217 16:13:15.480391 4874 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-b988z_openshift-operators_a3d284b8-a322-4ce7-9a33-c82f3adafeb1_0(e1c2bb86024d605c2a5aa38fa3959f0d900c7719304288236508c848f41b2741): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:13:15 crc kubenswrapper[4874]: E0217 16:13:15.480469 4874 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-b988z_openshift-operators_a3d284b8-a322-4ce7-9a33-c82f3adafeb1_0(e1c2bb86024d605c2a5aa38fa3959f0d900c7719304288236508c848f41b2741): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-b988z" Feb 17 16:13:15 crc kubenswrapper[4874]: E0217 16:13:15.480497 4874 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-b988z_openshift-operators_a3d284b8-a322-4ce7-9a33-c82f3adafeb1_0(e1c2bb86024d605c2a5aa38fa3959f0d900c7719304288236508c848f41b2741): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-b988z" Feb 17 16:13:15 crc kubenswrapper[4874]: E0217 16:13:15.480552 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-b988z_openshift-operators(a3d284b8-a322-4ce7-9a33-c82f3adafeb1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-b988z_openshift-operators(a3d284b8-a322-4ce7-9a33-c82f3adafeb1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-b988z_openshift-operators_a3d284b8-a322-4ce7-9a33-c82f3adafeb1_0(e1c2bb86024d605c2a5aa38fa3959f0d900c7719304288236508c848f41b2741): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-b988z" podUID="a3d284b8-a322-4ce7-9a33-c82f3adafeb1" Feb 17 16:13:17 crc kubenswrapper[4874]: I0217 16:13:17.456712 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg" Feb 17 16:13:17 crc kubenswrapper[4874]: I0217 16:13:17.457631 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg" Feb 17 16:13:17 crc kubenswrapper[4874]: I0217 16:13:17.456814 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fjkwc" Feb 17 16:13:17 crc kubenswrapper[4874]: I0217 16:13:17.458457 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fjkwc" Feb 17 16:13:17 crc kubenswrapper[4874]: E0217 16:13:17.512790 4874 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg_openshift-operators_5178b00a-11f3-48c6-96be-459a7b26be82_0(07e7b281786f52c49ad26c17e3d95ed44b430c90ccc46d82d55c1b1539ed9104): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:13:17 crc kubenswrapper[4874]: E0217 16:13:17.512874 4874 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg_openshift-operators_5178b00a-11f3-48c6-96be-459a7b26be82_0(07e7b281786f52c49ad26c17e3d95ed44b430c90ccc46d82d55c1b1539ed9104): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg" Feb 17 16:13:17 crc kubenswrapper[4874]: E0217 16:13:17.512904 4874 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg_openshift-operators_5178b00a-11f3-48c6-96be-459a7b26be82_0(07e7b281786f52c49ad26c17e3d95ed44b430c90ccc46d82d55c1b1539ed9104): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg" Feb 17 16:13:17 crc kubenswrapper[4874]: E0217 16:13:17.512965 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg_openshift-operators(5178b00a-11f3-48c6-96be-459a7b26be82)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg_openshift-operators(5178b00a-11f3-48c6-96be-459a7b26be82)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg_openshift-operators_5178b00a-11f3-48c6-96be-459a7b26be82_0(07e7b281786f52c49ad26c17e3d95ed44b430c90ccc46d82d55c1b1539ed9104): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg" podUID="5178b00a-11f3-48c6-96be-459a7b26be82" Feb 17 16:13:17 crc kubenswrapper[4874]: E0217 16:13:17.520874 4874 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fjkwc_openshift-operators_cf7f0be2-b792-4603-a97c-53a2f335acee_0(4debb616a369a86d822b9706dee9bea8a508922283782f14423325b431207a5c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:13:17 crc kubenswrapper[4874]: E0217 16:13:17.521005 4874 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fjkwc_openshift-operators_cf7f0be2-b792-4603-a97c-53a2f335acee_0(4debb616a369a86d822b9706dee9bea8a508922283782f14423325b431207a5c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fjkwc" Feb 17 16:13:17 crc kubenswrapper[4874]: E0217 16:13:17.521113 4874 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fjkwc_openshift-operators_cf7f0be2-b792-4603-a97c-53a2f335acee_0(4debb616a369a86d822b9706dee9bea8a508922283782f14423325b431207a5c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fjkwc" Feb 17 16:13:17 crc kubenswrapper[4874]: E0217 16:13:17.521226 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-fjkwc_openshift-operators(cf7f0be2-b792-4603-a97c-53a2f335acee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-fjkwc_openshift-operators(cf7f0be2-b792-4603-a97c-53a2f335acee)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fjkwc_openshift-operators_cf7f0be2-b792-4603-a97c-53a2f335acee_0(4debb616a369a86d822b9706dee9bea8a508922283782f14423325b431207a5c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fjkwc" podUID="cf7f0be2-b792-4603-a97c-53a2f335acee" Feb 17 16:13:18 crc kubenswrapper[4874]: I0217 16:13:18.457158 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-2b8tl" Feb 17 16:13:18 crc kubenswrapper[4874]: I0217 16:13:18.457317 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-jld5z" Feb 17 16:13:18 crc kubenswrapper[4874]: I0217 16:13:18.457982 4874 scope.go:117] "RemoveContainer" containerID="c3c53358c743c2d2894de0ef19b65fc8e4216837246a7006087759154d090046" Feb 17 16:13:18 crc kubenswrapper[4874]: I0217 16:13:18.458146 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-jld5z" Feb 17 16:13:18 crc kubenswrapper[4874]: I0217 16:13:18.458171 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-2b8tl" Feb 17 16:13:18 crc kubenswrapper[4874]: E0217 16:13:18.544749 4874 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-2b8tl_openshift-operators_660c5439-82eb-4696-9df3-7968e680b5a9_0(293873f85e255a8646dd60aa80b65a659efecb7ed85fb14f390d53f8a96ba530): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:13:18 crc kubenswrapper[4874]: E0217 16:13:18.544831 4874 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-2b8tl_openshift-operators_660c5439-82eb-4696-9df3-7968e680b5a9_0(293873f85e255a8646dd60aa80b65a659efecb7ed85fb14f390d53f8a96ba530): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-2b8tl" Feb 17 16:13:18 crc kubenswrapper[4874]: E0217 16:13:18.544867 4874 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-2b8tl_openshift-operators_660c5439-82eb-4696-9df3-7968e680b5a9_0(293873f85e255a8646dd60aa80b65a659efecb7ed85fb14f390d53f8a96ba530): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-2b8tl" Feb 17 16:13:18 crc kubenswrapper[4874]: E0217 16:13:18.544935 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-2b8tl_openshift-operators(660c5439-82eb-4696-9df3-7968e680b5a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-2b8tl_openshift-operators(660c5439-82eb-4696-9df3-7968e680b5a9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-2b8tl_openshift-operators_660c5439-82eb-4696-9df3-7968e680b5a9_0(293873f85e255a8646dd60aa80b65a659efecb7ed85fb14f390d53f8a96ba530): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-2b8tl" podUID="660c5439-82eb-4696-9df3-7968e680b5a9" Feb 17 16:13:18 crc kubenswrapper[4874]: E0217 16:13:18.574400 4874 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-658d76db8d-jld5z_openshift-operators_7a893bee-81e5-480e-8414-43a823e768fd_0(aea0f37538a2a0a61c660c6162b0ba35f8b90146489ffddf7f40fd2c3fe6b089): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 17 16:13:18 crc kubenswrapper[4874]: E0217 16:13:18.574455 4874 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-658d76db8d-jld5z_openshift-operators_7a893bee-81e5-480e-8414-43a823e768fd_0(aea0f37538a2a0a61c660c6162b0ba35f8b90146489ffddf7f40fd2c3fe6b089): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-jld5z" Feb 17 16:13:18 crc kubenswrapper[4874]: E0217 16:13:18.574474 4874 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-658d76db8d-jld5z_openshift-operators_7a893bee-81e5-480e-8414-43a823e768fd_0(aea0f37538a2a0a61c660c6162b0ba35f8b90146489ffddf7f40fd2c3fe6b089): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-jld5z" Feb 17 16:13:18 crc kubenswrapper[4874]: E0217 16:13:18.574511 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-658d76db8d-jld5z_openshift-operators(7a893bee-81e5-480e-8414-43a823e768fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-658d76db8d-jld5z_openshift-operators(7a893bee-81e5-480e-8414-43a823e768fd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-658d76db8d-jld5z_openshift-operators_7a893bee-81e5-480e-8414-43a823e768fd_0(aea0f37538a2a0a61c660c6162b0ba35f8b90146489ffddf7f40fd2c3fe6b089): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-jld5z" podUID="7a893bee-81e5-480e-8414-43a823e768fd" Feb 17 16:13:19 crc kubenswrapper[4874]: I0217 16:13:19.793906 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2vkxj_8aedd049-0029-44f7-869f-4a3ccdce8413/kube-multus/2.log" Feb 17 16:13:19 crc kubenswrapper[4874]: I0217 16:13:19.794310 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2vkxj" event={"ID":"8aedd049-0029-44f7-869f-4a3ccdce8413","Type":"ContainerStarted","Data":"628d6f9c9e0fdcaaf7ed47f5e938558840d56b4fddf2d15ff359644bae1ecf23"} Feb 17 16:13:23 crc kubenswrapper[4874]: I0217 16:13:23.573764 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" Feb 17 16:13:27 crc kubenswrapper[4874]: I0217 16:13:27.456172 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-b988z" Feb 17 16:13:27 crc kubenswrapper[4874]: I0217 16:13:27.457357 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-b988z" Feb 17 16:13:27 crc kubenswrapper[4874]: I0217 16:13:27.731330 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:13:27 crc kubenswrapper[4874]: I0217 16:13:27.731803 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:13:27 crc kubenswrapper[4874]: I0217 16:13:27.731854 4874 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" Feb 17 16:13:27 crc kubenswrapper[4874]: I0217 16:13:27.732376 4874 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5c051c0004b244e4c0ce127d058c5599bf72e06e8786ebe01293d1051eeff494"} pod="openshift-machine-config-operator/machine-config-daemon-cccdg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:13:27 crc kubenswrapper[4874]: I0217 16:13:27.732438 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" containerID="cri-o://5c051c0004b244e4c0ce127d058c5599bf72e06e8786ebe01293d1051eeff494" gracePeriod=600 Feb 17 16:13:28 crc kubenswrapper[4874]: I0217 16:13:28.056669 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operators/perses-operator-5bf474d74f-b988z"] Feb 17 16:13:28 crc kubenswrapper[4874]: W0217 16:13:28.063475 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3d284b8_a322_4ce7_9a33_c82f3adafeb1.slice/crio-53d1055d7e0d40c545e4993d22660884d942a0e0daa5c7dbf5936d84b5588364 WatchSource:0}: Error finding container 53d1055d7e0d40c545e4993d22660884d942a0e0daa5c7dbf5936d84b5588364: Status 404 returned error can't find the container with id 53d1055d7e0d40c545e4993d22660884d942a0e0daa5c7dbf5936d84b5588364 Feb 17 16:13:28 crc kubenswrapper[4874]: I0217 16:13:28.857463 4874 generic.go:334] "Generic (PLEG): container finished" podID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerID="5c051c0004b244e4c0ce127d058c5599bf72e06e8786ebe01293d1051eeff494" exitCode=0 Feb 17 16:13:28 crc kubenswrapper[4874]: I0217 16:13:28.857612 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerDied","Data":"5c051c0004b244e4c0ce127d058c5599bf72e06e8786ebe01293d1051eeff494"} Feb 17 16:13:28 crc kubenswrapper[4874]: I0217 16:13:28.857694 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerStarted","Data":"5054786a168a12e52b8d968ed3ece839ff7c3185d6be0ffc79a31e785b1ebbdf"} Feb 17 16:13:28 crc kubenswrapper[4874]: I0217 16:13:28.857724 4874 scope.go:117] "RemoveContainer" containerID="bcc2c8959b8db77cff7d7da089aecb1677ca5a83553a4d055beb1eed3bde2fdd" Feb 17 16:13:28 crc kubenswrapper[4874]: I0217 16:13:28.860440 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-b988z" 
event={"ID":"a3d284b8-a322-4ce7-9a33-c82f3adafeb1","Type":"ContainerStarted","Data":"53d1055d7e0d40c545e4993d22660884d942a0e0daa5c7dbf5936d84b5588364"} Feb 17 16:13:31 crc kubenswrapper[4874]: I0217 16:13:31.456610 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg" Feb 17 16:13:31 crc kubenswrapper[4874]: I0217 16:13:31.457908 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg" Feb 17 16:13:32 crc kubenswrapper[4874]: I0217 16:13:32.457334 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fjkwc" Feb 17 16:13:32 crc kubenswrapper[4874]: I0217 16:13:32.458127 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fjkwc" Feb 17 16:13:32 crc kubenswrapper[4874]: I0217 16:13:32.458715 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-jld5z" Feb 17 16:13:32 crc kubenswrapper[4874]: I0217 16:13:32.458982 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-jld5z" Feb 17 16:13:33 crc kubenswrapper[4874]: I0217 16:13:33.242337 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-fjkwc"] Feb 17 16:13:33 crc kubenswrapper[4874]: I0217 16:13:33.456297 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-2b8tl" Feb 17 16:13:33 crc kubenswrapper[4874]: I0217 16:13:33.456818 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-2b8tl" Feb 17 16:13:33 crc kubenswrapper[4874]: I0217 16:13:33.501266 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-jld5z"] Feb 17 16:13:33 crc kubenswrapper[4874]: I0217 16:13:33.507052 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg"] Feb 17 16:13:33 crc kubenswrapper[4874]: W0217 16:13:33.520783 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a893bee_81e5_480e_8414_43a823e768fd.slice/crio-597c057a0996d0e2bd17879656f22fb3dd6d69d6f60ee56c1643cd3c75a3a80f WatchSource:0}: Error finding container 597c057a0996d0e2bd17879656f22fb3dd6d69d6f60ee56c1643cd3c75a3a80f: Status 404 returned error can't find the container with id 597c057a0996d0e2bd17879656f22fb3dd6d69d6f60ee56c1643cd3c75a3a80f Feb 17 16:13:33 crc kubenswrapper[4874]: I0217 16:13:33.648948 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-2b8tl"] Feb 17 16:13:33 crc kubenswrapper[4874]: W0217 16:13:33.657912 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod660c5439_82eb_4696_9df3_7968e680b5a9.slice/crio-0992b3e27c543e1f264808a919992e62ab38151bc4650bc36cbd2679b6778389 WatchSource:0}: Error finding container 0992b3e27c543e1f264808a919992e62ab38151bc4650bc36cbd2679b6778389: Status 404 returned error can't find the container with id 0992b3e27c543e1f264808a919992e62ab38151bc4650bc36cbd2679b6778389 Feb 17 16:13:33 crc kubenswrapper[4874]: I0217 16:13:33.895518 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fjkwc" 
event={"ID":"cf7f0be2-b792-4603-a97c-53a2f335acee","Type":"ContainerStarted","Data":"b1645e0f7d666354cf927f267a1be589a9334b6cb2cd7c738b0e715da76d5ec0"} Feb 17 16:13:33 crc kubenswrapper[4874]: I0217 16:13:33.896625 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg" event={"ID":"5178b00a-11f3-48c6-96be-459a7b26be82","Type":"ContainerStarted","Data":"6e030ffe9dcb8939cfbafa5121885e271f1f6be63a65b3d8abf10db3c8094c6e"} Feb 17 16:13:33 crc kubenswrapper[4874]: I0217 16:13:33.897774 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-2b8tl" event={"ID":"660c5439-82eb-4696-9df3-7968e680b5a9","Type":"ContainerStarted","Data":"0992b3e27c543e1f264808a919992e62ab38151bc4650bc36cbd2679b6778389"} Feb 17 16:13:33 crc kubenswrapper[4874]: I0217 16:13:33.898815 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-jld5z" event={"ID":"7a893bee-81e5-480e-8414-43a823e768fd","Type":"ContainerStarted","Data":"597c057a0996d0e2bd17879656f22fb3dd6d69d6f60ee56c1643cd3c75a3a80f"} Feb 17 16:13:34 crc kubenswrapper[4874]: I0217 16:13:34.907494 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-b988z" event={"ID":"a3d284b8-a322-4ce7-9a33-c82f3adafeb1","Type":"ContainerStarted","Data":"3293ea39c7637d6e9d2743ae6cab6554c886fd6e5403689f1968da3813127826"} Feb 17 16:13:34 crc kubenswrapper[4874]: I0217 16:13:34.908123 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-b988z" Feb 17 16:13:34 crc kubenswrapper[4874]: I0217 16:13:34.931069 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-b988z" podStartSLOduration=25.307579334 podStartE2EDuration="31.931050967s" 
podCreationTimestamp="2026-02-17 16:13:03 +0000 UTC" firstStartedPulling="2026-02-17 16:13:28.066425764 +0000 UTC m=+618.360814335" lastFinishedPulling="2026-02-17 16:13:34.689897377 +0000 UTC m=+624.984285968" observedRunningTime="2026-02-17 16:13:34.929740045 +0000 UTC m=+625.224128616" watchObservedRunningTime="2026-02-17 16:13:34.931050967 +0000 UTC m=+625.225439528" Feb 17 16:13:39 crc kubenswrapper[4874]: I0217 16:13:39.979447 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fjkwc" event={"ID":"cf7f0be2-b792-4603-a97c-53a2f335acee","Type":"ContainerStarted","Data":"3eec73cb11be4efad69329cff9253aa56c1bd26478f0a6fe85990c879ea5ff44"} Feb 17 16:13:39 crc kubenswrapper[4874]: I0217 16:13:39.982221 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg" event={"ID":"5178b00a-11f3-48c6-96be-459a7b26be82","Type":"ContainerStarted","Data":"3a3ea96f5804bb340e0d5410544b2096b97e72d64fbe53c0973a4b692da22760"} Feb 17 16:13:39 crc kubenswrapper[4874]: I0217 16:13:39.983809 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-2b8tl" event={"ID":"660c5439-82eb-4696-9df3-7968e680b5a9","Type":"ContainerStarted","Data":"fd53cd6f861c14cc0db7c4ad972d2ef6fe48046af1eca9da3796855400c2776d"} Feb 17 16:13:39 crc kubenswrapper[4874]: I0217 16:13:39.984065 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-2b8tl" Feb 17 16:13:39 crc kubenswrapper[4874]: I0217 16:13:39.985351 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-jld5z" event={"ID":"7a893bee-81e5-480e-8414-43a823e768fd","Type":"ContainerStarted","Data":"c9e3859161d01d28c875a8879d5fd65bd38dc42c9dcf5621ebb2eb6f85403ab1"} Feb 17 16:13:39 crc kubenswrapper[4874]: I0217 
16:13:39.985707 4874 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-2b8tl container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.20:8081/healthz\": dial tcp 10.217.0.20:8081: connect: connection refused" start-of-body= Feb 17 16:13:39 crc kubenswrapper[4874]: I0217 16:13:39.985789 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-2b8tl" podUID="660c5439-82eb-4696-9df3-7968e680b5a9" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.20:8081/healthz\": dial tcp 10.217.0.20:8081: connect: connection refused" Feb 17 16:13:40 crc kubenswrapper[4874]: I0217 16:13:40.013469 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fjkwc" podStartSLOduration=30.904798344 podStartE2EDuration="37.013431908s" podCreationTimestamp="2026-02-17 16:13:03 +0000 UTC" firstStartedPulling="2026-02-17 16:13:33.292617817 +0000 UTC m=+623.587006378" lastFinishedPulling="2026-02-17 16:13:39.401251341 +0000 UTC m=+629.695639942" observedRunningTime="2026-02-17 16:13:40.002330972 +0000 UTC m=+630.296719543" watchObservedRunningTime="2026-02-17 16:13:40.013431908 +0000 UTC m=+630.307820519" Feb 17 16:13:40 crc kubenswrapper[4874]: I0217 16:13:40.111751 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg" podStartSLOduration=31.283572533 podStartE2EDuration="37.11173405s" podCreationTimestamp="2026-02-17 16:13:03 +0000 UTC" firstStartedPulling="2026-02-17 16:13:33.538407022 +0000 UTC m=+623.832795583" lastFinishedPulling="2026-02-17 16:13:39.366568509 +0000 UTC m=+629.660957100" observedRunningTime="2026-02-17 16:13:40.082016652 +0000 UTC m=+630.376405223" watchObservedRunningTime="2026-02-17 16:13:40.11173405 +0000 UTC m=+630.406122621" Feb 17 
16:13:40 crc kubenswrapper[4874]: I0217 16:13:40.115105 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-658d76db8d-jld5z" podStartSLOduration=31.275318998 podStartE2EDuration="37.115067473s" podCreationTimestamp="2026-02-17 16:13:03 +0000 UTC" firstStartedPulling="2026-02-17 16:13:33.526486226 +0000 UTC m=+623.820874797" lastFinishedPulling="2026-02-17 16:13:39.366234671 +0000 UTC m=+629.660623272" observedRunningTime="2026-02-17 16:13:40.110833277 +0000 UTC m=+630.405221838" watchObservedRunningTime="2026-02-17 16:13:40.115067473 +0000 UTC m=+630.409456044" Feb 17 16:13:40 crc kubenswrapper[4874]: I0217 16:13:40.133040 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-2b8tl" podStartSLOduration=31.379583369 podStartE2EDuration="37.133023959s" podCreationTimestamp="2026-02-17 16:13:03 +0000 UTC" firstStartedPulling="2026-02-17 16:13:33.661853249 +0000 UTC m=+623.956241820" lastFinishedPulling="2026-02-17 16:13:39.415293819 +0000 UTC m=+629.709682410" observedRunningTime="2026-02-17 16:13:40.131294926 +0000 UTC m=+630.425683497" watchObservedRunningTime="2026-02-17 16:13:40.133023959 +0000 UTC m=+630.427412530" Feb 17 16:13:40 crc kubenswrapper[4874]: I0217 16:13:40.992976 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-2b8tl" Feb 17 16:13:44 crc kubenswrapper[4874]: I0217 16:13:44.117219 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-b988z" Feb 17 16:13:49 crc kubenswrapper[4874]: I0217 16:13:49.684054 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-b2f79"] Feb 17 16:13:49 crc kubenswrapper[4874]: I0217 16:13:49.690470 4874 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["cert-manager/cert-manager-858654f9db-6cb67"] Feb 17 16:13:49 crc kubenswrapper[4874]: I0217 16:13:49.691141 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-6cb67" Feb 17 16:13:49 crc kubenswrapper[4874]: I0217 16:13:49.691626 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-b2f79" Feb 17 16:13:49 crc kubenswrapper[4874]: I0217 16:13:49.700586 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 17 16:13:49 crc kubenswrapper[4874]: I0217 16:13:49.700697 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 17 16:13:49 crc kubenswrapper[4874]: I0217 16:13:49.700800 4874 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-zbl2q" Feb 17 16:13:49 crc kubenswrapper[4874]: I0217 16:13:49.701021 4874 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-jk24s" Feb 17 16:13:49 crc kubenswrapper[4874]: I0217 16:13:49.714413 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j59xm\" (UniqueName: \"kubernetes.io/projected/94ae53c8-1b30-492d-945b-e194492623fd-kube-api-access-j59xm\") pod \"cert-manager-858654f9db-6cb67\" (UID: \"94ae53c8-1b30-492d-945b-e194492623fd\") " pod="cert-manager/cert-manager-858654f9db-6cb67" Feb 17 16:13:49 crc kubenswrapper[4874]: I0217 16:13:49.714687 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zc6g\" (UniqueName: \"kubernetes.io/projected/0843178e-0046-48d7-9f4b-44ac0deb0f89-kube-api-access-6zc6g\") pod \"cert-manager-cainjector-cf98fcc89-b2f79\" (UID: \"0843178e-0046-48d7-9f4b-44ac0deb0f89\") " 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-b2f79" Feb 17 16:13:49 crc kubenswrapper[4874]: I0217 16:13:49.716205 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-b2f79"] Feb 17 16:13:49 crc kubenswrapper[4874]: I0217 16:13:49.726386 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-6cb67"] Feb 17 16:13:49 crc kubenswrapper[4874]: I0217 16:13:49.730823 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-dzcwt"] Feb 17 16:13:49 crc kubenswrapper[4874]: I0217 16:13:49.731660 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-dzcwt" Feb 17 16:13:49 crc kubenswrapper[4874]: I0217 16:13:49.733201 4874 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-f88tx" Feb 17 16:13:49 crc kubenswrapper[4874]: I0217 16:13:49.748686 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-dzcwt"] Feb 17 16:13:49 crc kubenswrapper[4874]: I0217 16:13:49.815705 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j59xm\" (UniqueName: \"kubernetes.io/projected/94ae53c8-1b30-492d-945b-e194492623fd-kube-api-access-j59xm\") pod \"cert-manager-858654f9db-6cb67\" (UID: \"94ae53c8-1b30-492d-945b-e194492623fd\") " pod="cert-manager/cert-manager-858654f9db-6cb67" Feb 17 16:13:49 crc kubenswrapper[4874]: I0217 16:13:49.816322 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zc6g\" (UniqueName: \"kubernetes.io/projected/0843178e-0046-48d7-9f4b-44ac0deb0f89-kube-api-access-6zc6g\") pod \"cert-manager-cainjector-cf98fcc89-b2f79\" (UID: \"0843178e-0046-48d7-9f4b-44ac0deb0f89\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-b2f79" Feb 17 16:13:49 crc 
kubenswrapper[4874]: I0217 16:13:49.834332 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zc6g\" (UniqueName: \"kubernetes.io/projected/0843178e-0046-48d7-9f4b-44ac0deb0f89-kube-api-access-6zc6g\") pod \"cert-manager-cainjector-cf98fcc89-b2f79\" (UID: \"0843178e-0046-48d7-9f4b-44ac0deb0f89\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-b2f79" Feb 17 16:13:49 crc kubenswrapper[4874]: I0217 16:13:49.834339 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j59xm\" (UniqueName: \"kubernetes.io/projected/94ae53c8-1b30-492d-945b-e194492623fd-kube-api-access-j59xm\") pod \"cert-manager-858654f9db-6cb67\" (UID: \"94ae53c8-1b30-492d-945b-e194492623fd\") " pod="cert-manager/cert-manager-858654f9db-6cb67" Feb 17 16:13:49 crc kubenswrapper[4874]: I0217 16:13:49.918173 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49xlp\" (UniqueName: \"kubernetes.io/projected/ff3840bb-f767-4d1f-ae3f-7e39a0c94ef3-kube-api-access-49xlp\") pod \"cert-manager-webhook-687f57d79b-dzcwt\" (UID: \"ff3840bb-f767-4d1f-ae3f-7e39a0c94ef3\") " pod="cert-manager/cert-manager-webhook-687f57d79b-dzcwt" Feb 17 16:13:50 crc kubenswrapper[4874]: I0217 16:13:50.017526 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-6cb67" Feb 17 16:13:50 crc kubenswrapper[4874]: I0217 16:13:50.019134 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49xlp\" (UniqueName: \"kubernetes.io/projected/ff3840bb-f767-4d1f-ae3f-7e39a0c94ef3-kube-api-access-49xlp\") pod \"cert-manager-webhook-687f57d79b-dzcwt\" (UID: \"ff3840bb-f767-4d1f-ae3f-7e39a0c94ef3\") " pod="cert-manager/cert-manager-webhook-687f57d79b-dzcwt" Feb 17 16:13:50 crc kubenswrapper[4874]: I0217 16:13:50.025428 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-b2f79" Feb 17 16:13:50 crc kubenswrapper[4874]: I0217 16:13:50.046039 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49xlp\" (UniqueName: \"kubernetes.io/projected/ff3840bb-f767-4d1f-ae3f-7e39a0c94ef3-kube-api-access-49xlp\") pod \"cert-manager-webhook-687f57d79b-dzcwt\" (UID: \"ff3840bb-f767-4d1f-ae3f-7e39a0c94ef3\") " pod="cert-manager/cert-manager-webhook-687f57d79b-dzcwt" Feb 17 16:13:50 crc kubenswrapper[4874]: I0217 16:13:50.293509 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-6cb67"] Feb 17 16:13:50 crc kubenswrapper[4874]: I0217 16:13:50.344716 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-dzcwt" Feb 17 16:13:50 crc kubenswrapper[4874]: I0217 16:13:50.360210 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-b2f79"] Feb 17 16:13:50 crc kubenswrapper[4874]: W0217 16:13:50.375607 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0843178e_0046_48d7_9f4b_44ac0deb0f89.slice/crio-dd9a78fe30fe180647adb9b5e72c53ad18c127db765773c221574e412ca57532 WatchSource:0}: Error finding container dd9a78fe30fe180647adb9b5e72c53ad18c127db765773c221574e412ca57532: Status 404 returned error can't find the container with id dd9a78fe30fe180647adb9b5e72c53ad18c127db765773c221574e412ca57532 Feb 17 16:13:50 crc kubenswrapper[4874]: I0217 16:13:50.561593 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-dzcwt"] Feb 17 16:13:50 crc kubenswrapper[4874]: W0217 16:13:50.564397 4874 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff3840bb_f767_4d1f_ae3f_7e39a0c94ef3.slice/crio-fe8b57e5a18e7016bd2228f729cedebf81892f2856d55baff4becfd344cf5fef WatchSource:0}: Error finding container fe8b57e5a18e7016bd2228f729cedebf81892f2856d55baff4becfd344cf5fef: Status 404 returned error can't find the container with id fe8b57e5a18e7016bd2228f729cedebf81892f2856d55baff4becfd344cf5fef Feb 17 16:13:51 crc kubenswrapper[4874]: I0217 16:13:51.068152 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-6cb67" event={"ID":"94ae53c8-1b30-492d-945b-e194492623fd","Type":"ContainerStarted","Data":"dd90c1c9f569f627673dbdfd22cbd4d902fe1c4002c1a115bba603e09ace1121"} Feb 17 16:13:51 crc kubenswrapper[4874]: I0217 16:13:51.069520 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-dzcwt" event={"ID":"ff3840bb-f767-4d1f-ae3f-7e39a0c94ef3","Type":"ContainerStarted","Data":"fe8b57e5a18e7016bd2228f729cedebf81892f2856d55baff4becfd344cf5fef"} Feb 17 16:13:51 crc kubenswrapper[4874]: I0217 16:13:51.070422 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-b2f79" event={"ID":"0843178e-0046-48d7-9f4b-44ac0deb0f89","Type":"ContainerStarted","Data":"dd9a78fe30fe180647adb9b5e72c53ad18c127db765773c221574e412ca57532"} Feb 17 16:13:54 crc kubenswrapper[4874]: I0217 16:13:54.096992 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-b2f79" event={"ID":"0843178e-0046-48d7-9f4b-44ac0deb0f89","Type":"ContainerStarted","Data":"5e4a0c63b7f543a1efd6ac7880b73c6f7457cc9cdabfa1bf610652e490386820"} Feb 17 16:13:54 crc kubenswrapper[4874]: I0217 16:13:54.098351 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-6cb67" 
event={"ID":"94ae53c8-1b30-492d-945b-e194492623fd","Type":"ContainerStarted","Data":"9e405a808d5288ae7d6617eb7bc541b68b6f46a94fc63e61448fec0d5e834426"} Feb 17 16:13:54 crc kubenswrapper[4874]: I0217 16:13:54.112694 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-b2f79" podStartSLOduration=2.005540659 podStartE2EDuration="5.112674753s" podCreationTimestamp="2026-02-17 16:13:49 +0000 UTC" firstStartedPulling="2026-02-17 16:13:50.378242217 +0000 UTC m=+640.672630778" lastFinishedPulling="2026-02-17 16:13:53.485376301 +0000 UTC m=+643.779764872" observedRunningTime="2026-02-17 16:13:54.110691284 +0000 UTC m=+644.405079845" watchObservedRunningTime="2026-02-17 16:13:54.112674753 +0000 UTC m=+644.407063324" Feb 17 16:13:54 crc kubenswrapper[4874]: I0217 16:13:54.139439 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-6cb67" podStartSLOduration=1.9474065760000001 podStartE2EDuration="5.139422578s" podCreationTimestamp="2026-02-17 16:13:49 +0000 UTC" firstStartedPulling="2026-02-17 16:13:50.298665341 +0000 UTC m=+640.593053912" lastFinishedPulling="2026-02-17 16:13:53.490681343 +0000 UTC m=+643.785069914" observedRunningTime="2026-02-17 16:13:54.13830799 +0000 UTC m=+644.432696561" watchObservedRunningTime="2026-02-17 16:13:54.139422578 +0000 UTC m=+644.433811139" Feb 17 16:13:55 crc kubenswrapper[4874]: I0217 16:13:55.107229 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-dzcwt" event={"ID":"ff3840bb-f767-4d1f-ae3f-7e39a0c94ef3","Type":"ContainerStarted","Data":"4137e82b4bbc3777ac11141c2e2d2c7dbf51705bb12ecfb7f07988c3545a27f3"} Feb 17 16:13:55 crc kubenswrapper[4874]: I0217 16:13:55.129602 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-dzcwt" podStartSLOduration=2.037511323 
podStartE2EDuration="6.129578803s" podCreationTimestamp="2026-02-17 16:13:49 +0000 UTC" firstStartedPulling="2026-02-17 16:13:50.566565276 +0000 UTC m=+640.860953847" lastFinishedPulling="2026-02-17 16:13:54.658632766 +0000 UTC m=+644.953021327" observedRunningTime="2026-02-17 16:13:55.122095318 +0000 UTC m=+645.416483879" watchObservedRunningTime="2026-02-17 16:13:55.129578803 +0000 UTC m=+645.423967404" Feb 17 16:13:55 crc kubenswrapper[4874]: I0217 16:13:55.345501 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-dzcwt" Feb 17 16:14:00 crc kubenswrapper[4874]: I0217 16:14:00.348802 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-dzcwt" Feb 17 16:14:21 crc kubenswrapper[4874]: I0217 16:14:21.561495 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989v6pvf"] Feb 17 16:14:21 crc kubenswrapper[4874]: I0217 16:14:21.563097 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989v6pvf" Feb 17 16:14:21 crc kubenswrapper[4874]: I0217 16:14:21.565060 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 17 16:14:21 crc kubenswrapper[4874]: I0217 16:14:21.575189 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989v6pvf"] Feb 17 16:14:21 crc kubenswrapper[4874]: I0217 16:14:21.652992 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f088c918-cdda-43a2-aae0-3910c4f0e2b3-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989v6pvf\" (UID: \"f088c918-cdda-43a2-aae0-3910c4f0e2b3\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989v6pvf" Feb 17 16:14:21 crc kubenswrapper[4874]: I0217 16:14:21.653112 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f088c918-cdda-43a2-aae0-3910c4f0e2b3-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989v6pvf\" (UID: \"f088c918-cdda-43a2-aae0-3910c4f0e2b3\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989v6pvf" Feb 17 16:14:21 crc kubenswrapper[4874]: I0217 16:14:21.653320 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtfwz\" (UniqueName: \"kubernetes.io/projected/f088c918-cdda-43a2-aae0-3910c4f0e2b3-kube-api-access-gtfwz\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989v6pvf\" (UID: \"f088c918-cdda-43a2-aae0-3910c4f0e2b3\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989v6pvf" Feb 17 16:14:21 crc kubenswrapper[4874]: 
I0217 16:14:21.753790 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f088c918-cdda-43a2-aae0-3910c4f0e2b3-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989v6pvf\" (UID: \"f088c918-cdda-43a2-aae0-3910c4f0e2b3\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989v6pvf" Feb 17 16:14:21 crc kubenswrapper[4874]: I0217 16:14:21.753844 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f088c918-cdda-43a2-aae0-3910c4f0e2b3-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989v6pvf\" (UID: \"f088c918-cdda-43a2-aae0-3910c4f0e2b3\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989v6pvf" Feb 17 16:14:21 crc kubenswrapper[4874]: I0217 16:14:21.753870 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtfwz\" (UniqueName: \"kubernetes.io/projected/f088c918-cdda-43a2-aae0-3910c4f0e2b3-kube-api-access-gtfwz\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989v6pvf\" (UID: \"f088c918-cdda-43a2-aae0-3910c4f0e2b3\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989v6pvf" Feb 17 16:14:21 crc kubenswrapper[4874]: I0217 16:14:21.754342 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f088c918-cdda-43a2-aae0-3910c4f0e2b3-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989v6pvf\" (UID: \"f088c918-cdda-43a2-aae0-3910c4f0e2b3\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989v6pvf" Feb 17 16:14:21 crc kubenswrapper[4874]: I0217 16:14:21.754374 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/f088c918-cdda-43a2-aae0-3910c4f0e2b3-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989v6pvf\" (UID: \"f088c918-cdda-43a2-aae0-3910c4f0e2b3\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989v6pvf" Feb 17 16:14:21 crc kubenswrapper[4874]: I0217 16:14:21.781507 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtfwz\" (UniqueName: \"kubernetes.io/projected/f088c918-cdda-43a2-aae0-3910c4f0e2b3-kube-api-access-gtfwz\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989v6pvf\" (UID: \"f088c918-cdda-43a2-aae0-3910c4f0e2b3\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989v6pvf" Feb 17 16:14:21 crc kubenswrapper[4874]: I0217 16:14:21.877884 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989v6pvf" Feb 17 16:14:21 crc kubenswrapper[4874]: I0217 16:14:21.959158 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e199hntd"] Feb 17 16:14:21 crc kubenswrapper[4874]: I0217 16:14:21.960740 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e199hntd" Feb 17 16:14:22 crc kubenswrapper[4874]: I0217 16:14:21.972051 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e199hntd"] Feb 17 16:14:22 crc kubenswrapper[4874]: I0217 16:14:22.058982 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e199hntd\" (UID: \"5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e199hntd" Feb 17 16:14:22 crc kubenswrapper[4874]: I0217 16:14:22.059460 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsx76\" (UniqueName: \"kubernetes.io/projected/5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218-kube-api-access-rsx76\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e199hntd\" (UID: \"5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e199hntd" Feb 17 16:14:22 crc kubenswrapper[4874]: I0217 16:14:22.059550 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e199hntd\" (UID: \"5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e199hntd" Feb 17 16:14:22 crc kubenswrapper[4874]: I0217 16:14:22.163762 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e199hntd\" (UID: \"5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e199hntd" Feb 17 16:14:22 crc kubenswrapper[4874]: I0217 16:14:22.163847 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e199hntd\" (UID: \"5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e199hntd" Feb 17 16:14:22 crc kubenswrapper[4874]: I0217 16:14:22.163886 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsx76\" (UniqueName: \"kubernetes.io/projected/5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218-kube-api-access-rsx76\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e199hntd\" (UID: \"5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e199hntd" Feb 17 16:14:22 crc kubenswrapper[4874]: I0217 16:14:22.164587 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e199hntd\" (UID: \"5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e199hntd" Feb 17 16:14:22 crc kubenswrapper[4874]: I0217 16:14:22.164603 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e199hntd\" (UID: 
\"5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e199hntd" Feb 17 16:14:22 crc kubenswrapper[4874]: I0217 16:14:22.199992 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsx76\" (UniqueName: \"kubernetes.io/projected/5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218-kube-api-access-rsx76\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e199hntd\" (UID: \"5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e199hntd" Feb 17 16:14:22 crc kubenswrapper[4874]: I0217 16:14:22.243533 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989v6pvf"] Feb 17 16:14:22 crc kubenswrapper[4874]: I0217 16:14:22.311315 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e199hntd" Feb 17 16:14:22 crc kubenswrapper[4874]: I0217 16:14:22.341004 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989v6pvf" event={"ID":"f088c918-cdda-43a2-aae0-3910c4f0e2b3","Type":"ContainerStarted","Data":"499bd915a5bbb61276a4b230cd017a3172530503d98d2d8875619e1ca4e28ce7"} Feb 17 16:14:22 crc kubenswrapper[4874]: W0217 16:14:22.822762 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c2f1cb3_dfb2_4a9a_b7be_3ddfa0095218.slice/crio-0260a767f29e9f856b7a7fd744d90ae9595dbde560e3ad6230bb0129e09f10aa WatchSource:0}: Error finding container 0260a767f29e9f856b7a7fd744d90ae9595dbde560e3ad6230bb0129e09f10aa: Status 404 returned error can't find the container with id 0260a767f29e9f856b7a7fd744d90ae9595dbde560e3ad6230bb0129e09f10aa Feb 17 16:14:22 crc kubenswrapper[4874]: 
I0217 16:14:22.823809 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e199hntd"] Feb 17 16:14:23 crc kubenswrapper[4874]: I0217 16:14:23.348107 4874 generic.go:334] "Generic (PLEG): container finished" podID="f088c918-cdda-43a2-aae0-3910c4f0e2b3" containerID="c97833be93de641be53e18dd20f1fc70cb58d5d25134e5a884ae8b26850c5b53" exitCode=0 Feb 17 16:14:23 crc kubenswrapper[4874]: I0217 16:14:23.348174 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989v6pvf" event={"ID":"f088c918-cdda-43a2-aae0-3910c4f0e2b3","Type":"ContainerDied","Data":"c97833be93de641be53e18dd20f1fc70cb58d5d25134e5a884ae8b26850c5b53"} Feb 17 16:14:23 crc kubenswrapper[4874]: I0217 16:14:23.350255 4874 generic.go:334] "Generic (PLEG): container finished" podID="5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218" containerID="757bfbc6c149bec55931e5a4456b77a66e11160f14680d16ab18faa3f3baf74e" exitCode=0 Feb 17 16:14:23 crc kubenswrapper[4874]: I0217 16:14:23.350323 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e199hntd" event={"ID":"5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218","Type":"ContainerDied","Data":"757bfbc6c149bec55931e5a4456b77a66e11160f14680d16ab18faa3f3baf74e"} Feb 17 16:14:23 crc kubenswrapper[4874]: I0217 16:14:23.350353 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e199hntd" event={"ID":"5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218","Type":"ContainerStarted","Data":"0260a767f29e9f856b7a7fd744d90ae9595dbde560e3ad6230bb0129e09f10aa"} Feb 17 16:14:27 crc kubenswrapper[4874]: I0217 16:14:27.960979 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" podUID="248f2524-d072-4a9c-8521-17721d2c02a7" containerName="nbdb" 
probeResult="failure" output=< Feb 17 16:14:27 crc kubenswrapper[4874]: + . /ovnkube-lib/ovnkube-lib.sh Feb 17 16:14:27 crc kubenswrapper[4874]: ++ set -x Feb 17 16:14:27 crc kubenswrapper[4874]: ++ K8S_NODE=crc Feb 17 16:14:27 crc kubenswrapper[4874]: ++ [[ -n crc ]] Feb 17 16:14:27 crc kubenswrapper[4874]: ++ [[ -f /env/crc ]] Feb 17 16:14:27 crc kubenswrapper[4874]: ++ northd_pidfile=/var/run/ovn/ovn-northd.pid Feb 17 16:14:27 crc kubenswrapper[4874]: ++ controller_pidfile=/var/run/ovn/ovn-controller.pid Feb 17 16:14:27 crc kubenswrapper[4874]: ++ controller_logfile=/var/log/ovn/acl-audit-log.log Feb 17 16:14:27 crc kubenswrapper[4874]: ++ vswitch_dbsock=/var/run/openvswitch/db.sock Feb 17 16:14:27 crc kubenswrapper[4874]: ++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid Feb 17 16:14:27 crc kubenswrapper[4874]: ++ nbdb_sock=/var/run/ovn/ovnnb_db.sock Feb 17 16:14:27 crc kubenswrapper[4874]: ++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl Feb 17 16:14:27 crc kubenswrapper[4874]: ++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid Feb 17 16:14:27 crc kubenswrapper[4874]: ++ sbdb_sock=/var/run/ovn/ovnsb_db.sock Feb 17 16:14:27 crc kubenswrapper[4874]: ++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl Feb 17 16:14:27 crc kubenswrapper[4874]: + ovndb-readiness-probe nb Feb 17 16:14:27 crc kubenswrapper[4874]: + local dbname=nb Feb 17 16:14:27 crc kubenswrapper[4874]: + [[ 1 -ne 1 ]] Feb 17 16:14:27 crc kubenswrapper[4874]: + local ctlfile Feb 17 16:14:27 crc kubenswrapper[4874]: + [[ nb = \n\b ]] Feb 17 16:14:27 crc kubenswrapper[4874]: + ctlfile=/var/run/ovn/ovnnb_db.ctl Feb 17 16:14:27 crc kubenswrapper[4874]: ++ /usr/bin/ovn-appctl -t /var/run/ovn/ovnnb_db.ctl --timeout=3 ovsdb-server/sync-status Feb 17 16:14:27 crc kubenswrapper[4874]: ++ grep 'state: active' Feb 17 16:14:27 crc kubenswrapper[4874]: ++ false Feb 17 16:14:27 crc kubenswrapper[4874]: + status= Feb 17 16:14:27 crc kubenswrapper[4874]: > Feb 17 16:14:28 crc kubenswrapper[4874]: I0217 16:14:28.013726 4874 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-vwlcm" podUID="248f2524-d072-4a9c-8521-17721d2c02a7" containerName="sbdb" probeResult="failure" output=< Feb 17 16:14:28 crc kubenswrapper[4874]: + . /ovnkube-lib/ovnkube-lib.sh Feb 17 16:14:28 crc kubenswrapper[4874]: ++ set -x Feb 17 16:14:28 crc kubenswrapper[4874]: ++ K8S_NODE= Feb 17 16:14:28 crc kubenswrapper[4874]: ++ [[ -n '' ]] Feb 17 16:14:28 crc kubenswrapper[4874]: ++ northd_pidfile=/var/run/ovn/ovn-northd.pid Feb 17 16:14:28 crc kubenswrapper[4874]: ++ controller_pidfile=/var/run/ovn/ovn-controller.pid Feb 17 16:14:28 crc kubenswrapper[4874]: ++ controller_logfile=/var/log/ovn/acl-audit-log.log Feb 17 16:14:28 crc kubenswrapper[4874]: ++ vswitch_dbsock=/var/run/openvswitch/db.sock Feb 17 16:14:28 crc kubenswrapper[4874]: ++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid Feb 17 16:14:28 crc kubenswrapper[4874]: ++ nbdb_sock=/var/run/ovn/ovnnb_db.sock Feb 17 16:14:28 crc kubenswrapper[4874]: ++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl Feb 17 16:14:28 crc kubenswrapper[4874]: ++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid Feb 17 16:14:28 crc kubenswrapper[4874]: ++ sbdb_sock=/var/run/ovn/ovnsb_db.sock Feb 17 16:14:28 crc kubenswrapper[4874]: ++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl Feb 17 16:14:28 crc kubenswrapper[4874]: + ovndb-readiness-probe sb Feb 17 16:14:28 crc kubenswrapper[4874]: + local dbname=sb Feb 17 16:14:28 crc kubenswrapper[4874]: + [[ 1 -ne 1 ]] Feb 17 16:14:28 crc kubenswrapper[4874]: + local ctlfile Feb 17 16:14:28 crc kubenswrapper[4874]: + [[ sb = \n\b ]] Feb 17 16:14:28 crc kubenswrapper[4874]: + [[ sb = \s\b ]] Feb 17 16:14:28 crc kubenswrapper[4874]: + ctlfile=/var/run/ovn/ovnsb_db.ctl Feb 17 16:14:28 crc kubenswrapper[4874]: ++ /usr/bin/ovn-appctl -t /var/run/ovn/ovnsb_db.ctl --timeout=3 ovsdb-server/sync-status Feb 17 16:14:28 crc kubenswrapper[4874]: ++ grep 'state: active' Feb 17 16:14:28 crc kubenswrapper[4874]: ++ false Feb 17 16:14:28 crc kubenswrapper[4874]: + status= Feb 17 
16:14:28 crc kubenswrapper[4874]: > Feb 17 16:14:31 crc kubenswrapper[4874]: I0217 16:14:31.018814 4874 generic.go:334] "Generic (PLEG): container finished" podID="5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218" containerID="4a689c492f2a8f21360314ea0d1905a827f71a3967413f6d175dd765c1d9bd4a" exitCode=0 Feb 17 16:14:31 crc kubenswrapper[4874]: I0217 16:14:31.018910 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e199hntd" event={"ID":"5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218","Type":"ContainerDied","Data":"4a689c492f2a8f21360314ea0d1905a827f71a3967413f6d175dd765c1d9bd4a"} Feb 17 16:14:31 crc kubenswrapper[4874]: I0217 16:14:31.022234 4874 generic.go:334] "Generic (PLEG): container finished" podID="f088c918-cdda-43a2-aae0-3910c4f0e2b3" containerID="f4ec321dba90e2790083925e2a7e81eccb13eaf114a5d4f95be1220500293a0b" exitCode=0 Feb 17 16:14:31 crc kubenswrapper[4874]: I0217 16:14:31.022284 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989v6pvf" event={"ID":"f088c918-cdda-43a2-aae0-3910c4f0e2b3","Type":"ContainerDied","Data":"f4ec321dba90e2790083925e2a7e81eccb13eaf114a5d4f95be1220500293a0b"} Feb 17 16:14:32 crc kubenswrapper[4874]: I0217 16:14:32.033289 4874 generic.go:334] "Generic (PLEG): container finished" podID="f088c918-cdda-43a2-aae0-3910c4f0e2b3" containerID="e00ff896cebbe616f9fd001924277d6e0d8dec33e390d7f95e101564b386ca90" exitCode=0 Feb 17 16:14:32 crc kubenswrapper[4874]: I0217 16:14:32.033401 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989v6pvf" event={"ID":"f088c918-cdda-43a2-aae0-3910c4f0e2b3","Type":"ContainerDied","Data":"e00ff896cebbe616f9fd001924277d6e0d8dec33e390d7f95e101564b386ca90"} Feb 17 16:14:32 crc kubenswrapper[4874]: I0217 16:14:32.036198 4874 generic.go:334] "Generic (PLEG): container 
finished" podID="5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218" containerID="d0224d4471190e966ed4a995e308df3e011aa77eadc65143f8909489da01fd7c" exitCode=0 Feb 17 16:14:32 crc kubenswrapper[4874]: I0217 16:14:32.036255 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e199hntd" event={"ID":"5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218","Type":"ContainerDied","Data":"d0224d4471190e966ed4a995e308df3e011aa77eadc65143f8909489da01fd7c"} Feb 17 16:14:33 crc kubenswrapper[4874]: I0217 16:14:33.380231 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989v6pvf" Feb 17 16:14:33 crc kubenswrapper[4874]: I0217 16:14:33.382936 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e199hntd" Feb 17 16:14:33 crc kubenswrapper[4874]: I0217 16:14:33.435436 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f088c918-cdda-43a2-aae0-3910c4f0e2b3-util\") pod \"f088c918-cdda-43a2-aae0-3910c4f0e2b3\" (UID: \"f088c918-cdda-43a2-aae0-3910c4f0e2b3\") " Feb 17 16:14:33 crc kubenswrapper[4874]: I0217 16:14:33.435570 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218-util\") pod \"5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218\" (UID: \"5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218\") " Feb 17 16:14:33 crc kubenswrapper[4874]: I0217 16:14:33.435669 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsx76\" (UniqueName: \"kubernetes.io/projected/5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218-kube-api-access-rsx76\") pod \"5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218\" (UID: 
\"5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218\") " Feb 17 16:14:33 crc kubenswrapper[4874]: I0217 16:14:33.442501 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtfwz\" (UniqueName: \"kubernetes.io/projected/f088c918-cdda-43a2-aae0-3910c4f0e2b3-kube-api-access-gtfwz\") pod \"f088c918-cdda-43a2-aae0-3910c4f0e2b3\" (UID: \"f088c918-cdda-43a2-aae0-3910c4f0e2b3\") " Feb 17 16:14:33 crc kubenswrapper[4874]: I0217 16:14:33.442725 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218-bundle\") pod \"5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218\" (UID: \"5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218\") " Feb 17 16:14:33 crc kubenswrapper[4874]: I0217 16:14:33.442865 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f088c918-cdda-43a2-aae0-3910c4f0e2b3-bundle\") pod \"f088c918-cdda-43a2-aae0-3910c4f0e2b3\" (UID: \"f088c918-cdda-43a2-aae0-3910c4f0e2b3\") " Feb 17 16:14:33 crc kubenswrapper[4874]: I0217 16:14:33.443902 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f088c918-cdda-43a2-aae0-3910c4f0e2b3-bundle" (OuterVolumeSpecName: "bundle") pod "f088c918-cdda-43a2-aae0-3910c4f0e2b3" (UID: "f088c918-cdda-43a2-aae0-3910c4f0e2b3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:14:33 crc kubenswrapper[4874]: I0217 16:14:33.444357 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218-bundle" (OuterVolumeSpecName: "bundle") pod "5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218" (UID: "5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:14:33 crc kubenswrapper[4874]: I0217 16:14:33.445912 4874 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f088c918-cdda-43a2-aae0-3910c4f0e2b3-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:33 crc kubenswrapper[4874]: I0217 16:14:33.446071 4874 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:33 crc kubenswrapper[4874]: I0217 16:14:33.452854 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f088c918-cdda-43a2-aae0-3910c4f0e2b3-kube-api-access-gtfwz" (OuterVolumeSpecName: "kube-api-access-gtfwz") pod "f088c918-cdda-43a2-aae0-3910c4f0e2b3" (UID: "f088c918-cdda-43a2-aae0-3910c4f0e2b3"). InnerVolumeSpecName "kube-api-access-gtfwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:14:33 crc kubenswrapper[4874]: I0217 16:14:33.452864 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218-kube-api-access-rsx76" (OuterVolumeSpecName: "kube-api-access-rsx76") pod "5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218" (UID: "5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218"). InnerVolumeSpecName "kube-api-access-rsx76". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:14:33 crc kubenswrapper[4874]: I0217 16:14:33.455402 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f088c918-cdda-43a2-aae0-3910c4f0e2b3-util" (OuterVolumeSpecName: "util") pod "f088c918-cdda-43a2-aae0-3910c4f0e2b3" (UID: "f088c918-cdda-43a2-aae0-3910c4f0e2b3"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:14:33 crc kubenswrapper[4874]: I0217 16:14:33.461946 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218-util" (OuterVolumeSpecName: "util") pod "5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218" (UID: "5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:14:33 crc kubenswrapper[4874]: I0217 16:14:33.547863 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsx76\" (UniqueName: \"kubernetes.io/projected/5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218-kube-api-access-rsx76\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:33 crc kubenswrapper[4874]: I0217 16:14:33.547932 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtfwz\" (UniqueName: \"kubernetes.io/projected/f088c918-cdda-43a2-aae0-3910c4f0e2b3-kube-api-access-gtfwz\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:33 crc kubenswrapper[4874]: I0217 16:14:33.547961 4874 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f088c918-cdda-43a2-aae0-3910c4f0e2b3-util\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:33 crc kubenswrapper[4874]: I0217 16:14:33.547985 4874 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218-util\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:34 crc kubenswrapper[4874]: I0217 16:14:34.055711 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989v6pvf" event={"ID":"f088c918-cdda-43a2-aae0-3910c4f0e2b3","Type":"ContainerDied","Data":"499bd915a5bbb61276a4b230cd017a3172530503d98d2d8875619e1ca4e28ce7"} Feb 17 16:14:34 crc kubenswrapper[4874]: I0217 16:14:34.055985 4874 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="499bd915a5bbb61276a4b230cd017a3172530503d98d2d8875619e1ca4e28ce7" Feb 17 16:14:34 crc kubenswrapper[4874]: I0217 16:14:34.055771 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989v6pvf" Feb 17 16:14:34 crc kubenswrapper[4874]: I0217 16:14:34.059266 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e199hntd" event={"ID":"5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218","Type":"ContainerDied","Data":"0260a767f29e9f856b7a7fd744d90ae9595dbde560e3ad6230bb0129e09f10aa"} Feb 17 16:14:34 crc kubenswrapper[4874]: I0217 16:14:34.059323 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0260a767f29e9f856b7a7fd744d90ae9595dbde560e3ad6230bb0129e09f10aa" Feb 17 16:14:34 crc kubenswrapper[4874]: I0217 16:14:34.059332 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e199hntd" Feb 17 16:14:37 crc kubenswrapper[4874]: I0217 16:14:37.879582 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-5gdhf"] Feb 17 16:14:37 crc kubenswrapper[4874]: E0217 16:14:37.882498 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218" containerName="extract" Feb 17 16:14:37 crc kubenswrapper[4874]: I0217 16:14:37.882706 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218" containerName="extract" Feb 17 16:14:37 crc kubenswrapper[4874]: E0217 16:14:37.882926 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f088c918-cdda-43a2-aae0-3910c4f0e2b3" containerName="pull" Feb 17 16:14:37 crc kubenswrapper[4874]: I0217 16:14:37.883030 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="f088c918-cdda-43a2-aae0-3910c4f0e2b3" containerName="pull" Feb 17 16:14:37 crc kubenswrapper[4874]: E0217 16:14:37.883174 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218" containerName="pull" Feb 17 16:14:37 crc kubenswrapper[4874]: I0217 16:14:37.883279 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218" containerName="pull" Feb 17 16:14:37 crc kubenswrapper[4874]: E0217 16:14:37.883375 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f088c918-cdda-43a2-aae0-3910c4f0e2b3" containerName="util" Feb 17 16:14:37 crc kubenswrapper[4874]: I0217 16:14:37.883470 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="f088c918-cdda-43a2-aae0-3910c4f0e2b3" containerName="util" Feb 17 16:14:37 crc kubenswrapper[4874]: E0217 16:14:37.883584 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218" containerName="util" 
Feb 17 16:14:37 crc kubenswrapper[4874]: I0217 16:14:37.883673 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218" containerName="util" Feb 17 16:14:37 crc kubenswrapper[4874]: E0217 16:14:37.883772 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f088c918-cdda-43a2-aae0-3910c4f0e2b3" containerName="extract" Feb 17 16:14:37 crc kubenswrapper[4874]: I0217 16:14:37.883882 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="f088c918-cdda-43a2-aae0-3910c4f0e2b3" containerName="extract" Feb 17 16:14:37 crc kubenswrapper[4874]: I0217 16:14:37.884205 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="f088c918-cdda-43a2-aae0-3910c4f0e2b3" containerName="extract" Feb 17 16:14:37 crc kubenswrapper[4874]: I0217 16:14:37.884336 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218" containerName="extract" Feb 17 16:14:37 crc kubenswrapper[4874]: I0217 16:14:37.886281 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-5gdhf" Feb 17 16:14:37 crc kubenswrapper[4874]: I0217 16:14:37.889542 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Feb 17 16:14:37 crc kubenswrapper[4874]: I0217 16:14:37.889866 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Feb 17 16:14:37 crc kubenswrapper[4874]: I0217 16:14:37.890111 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-kt72m" Feb 17 16:14:37 crc kubenswrapper[4874]: I0217 16:14:37.890700 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-5gdhf"] Feb 17 16:14:38 crc kubenswrapper[4874]: I0217 16:14:38.012162 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcchs\" (UniqueName: \"kubernetes.io/projected/3145a5e0-7e93-479a-b4f2-c7082813a0bf-kube-api-access-tcchs\") pod \"cluster-logging-operator-c769fd969-5gdhf\" (UID: \"3145a5e0-7e93-479a-b4f2-c7082813a0bf\") " pod="openshift-logging/cluster-logging-operator-c769fd969-5gdhf" Feb 17 16:14:38 crc kubenswrapper[4874]: I0217 16:14:38.113982 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcchs\" (UniqueName: \"kubernetes.io/projected/3145a5e0-7e93-479a-b4f2-c7082813a0bf-kube-api-access-tcchs\") pod \"cluster-logging-operator-c769fd969-5gdhf\" (UID: \"3145a5e0-7e93-479a-b4f2-c7082813a0bf\") " pod="openshift-logging/cluster-logging-operator-c769fd969-5gdhf" Feb 17 16:14:38 crc kubenswrapper[4874]: I0217 16:14:38.135209 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcchs\" (UniqueName: \"kubernetes.io/projected/3145a5e0-7e93-479a-b4f2-c7082813a0bf-kube-api-access-tcchs\") pod 
\"cluster-logging-operator-c769fd969-5gdhf\" (UID: \"3145a5e0-7e93-479a-b4f2-c7082813a0bf\") " pod="openshift-logging/cluster-logging-operator-c769fd969-5gdhf" Feb 17 16:14:38 crc kubenswrapper[4874]: I0217 16:14:38.255922 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-5gdhf" Feb 17 16:14:38 crc kubenswrapper[4874]: I0217 16:14:38.712965 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-5gdhf"] Feb 17 16:14:38 crc kubenswrapper[4874]: W0217 16:14:38.718632 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3145a5e0_7e93_479a_b4f2_c7082813a0bf.slice/crio-19974c891ca0b3e94781f0c978ba69025d8b754472529e8c2380197f3b84612f WatchSource:0}: Error finding container 19974c891ca0b3e94781f0c978ba69025d8b754472529e8c2380197f3b84612f: Status 404 returned error can't find the container with id 19974c891ca0b3e94781f0c978ba69025d8b754472529e8c2380197f3b84612f Feb 17 16:14:39 crc kubenswrapper[4874]: I0217 16:14:39.094438 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-5gdhf" event={"ID":"3145a5e0-7e93-479a-b4f2-c7082813a0bf","Type":"ContainerStarted","Data":"19974c891ca0b3e94781f0c978ba69025d8b754472529e8c2380197f3b84612f"} Feb 17 16:14:44 crc kubenswrapper[4874]: I0217 16:14:44.127807 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-5gdhf" event={"ID":"3145a5e0-7e93-479a-b4f2-c7082813a0bf","Type":"ContainerStarted","Data":"16f54a5ef98eccb1c062068c7a9a68590f3b48083cd74fd6065b73f0aa873da4"} Feb 17 16:14:44 crc kubenswrapper[4874]: I0217 16:14:44.147346 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-c769fd969-5gdhf" podStartSLOduration=2.119467403 
podStartE2EDuration="7.147331022s" podCreationTimestamp="2026-02-17 16:14:37 +0000 UTC" firstStartedPulling="2026-02-17 16:14:38.721855892 +0000 UTC m=+689.016244463" lastFinishedPulling="2026-02-17 16:14:43.749719521 +0000 UTC m=+694.044108082" observedRunningTime="2026-02-17 16:14:44.143720351 +0000 UTC m=+694.438108912" watchObservedRunningTime="2026-02-17 16:14:44.147331022 +0000 UTC m=+694.441719573" Feb 17 16:14:50 crc kubenswrapper[4874]: I0217 16:14:50.011173 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-745c8c7958-q4zx9"] Feb 17 16:14:50 crc kubenswrapper[4874]: I0217 16:14:50.012759 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-745c8c7958-q4zx9" Feb 17 16:14:50 crc kubenswrapper[4874]: I0217 16:14:50.014981 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Feb 17 16:14:50 crc kubenswrapper[4874]: I0217 16:14:50.015185 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Feb 17 16:14:50 crc kubenswrapper[4874]: I0217 16:14:50.015309 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Feb 17 16:14:50 crc kubenswrapper[4874]: I0217 16:14:50.016320 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-wl42w" Feb 17 16:14:50 crc kubenswrapper[4874]: I0217 16:14:50.017438 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Feb 17 16:14:50 crc kubenswrapper[4874]: I0217 16:14:50.021139 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Feb 17 
16:14:50 crc kubenswrapper[4874]: I0217 16:14:50.050921 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-745c8c7958-q4zx9"] Feb 17 16:14:50 crc kubenswrapper[4874]: I0217 16:14:50.097772 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e55e7660-9281-484b-b0b8-a39236b8e692-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-745c8c7958-q4zx9\" (UID: \"e55e7660-9281-484b-b0b8-a39236b8e692\") " pod="openshift-operators-redhat/loki-operator-controller-manager-745c8c7958-q4zx9" Feb 17 16:14:50 crc kubenswrapper[4874]: I0217 16:14:50.097825 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e55e7660-9281-484b-b0b8-a39236b8e692-webhook-cert\") pod \"loki-operator-controller-manager-745c8c7958-q4zx9\" (UID: \"e55e7660-9281-484b-b0b8-a39236b8e692\") " pod="openshift-operators-redhat/loki-operator-controller-manager-745c8c7958-q4zx9" Feb 17 16:14:50 crc kubenswrapper[4874]: I0217 16:14:50.097851 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e55e7660-9281-484b-b0b8-a39236b8e692-apiservice-cert\") pod \"loki-operator-controller-manager-745c8c7958-q4zx9\" (UID: \"e55e7660-9281-484b-b0b8-a39236b8e692\") " pod="openshift-operators-redhat/loki-operator-controller-manager-745c8c7958-q4zx9" Feb 17 16:14:50 crc kubenswrapper[4874]: I0217 16:14:50.097918 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjzhn\" (UniqueName: \"kubernetes.io/projected/e55e7660-9281-484b-b0b8-a39236b8e692-kube-api-access-vjzhn\") pod \"loki-operator-controller-manager-745c8c7958-q4zx9\" (UID: 
\"e55e7660-9281-484b-b0b8-a39236b8e692\") " pod="openshift-operators-redhat/loki-operator-controller-manager-745c8c7958-q4zx9" Feb 17 16:14:50 crc kubenswrapper[4874]: I0217 16:14:50.097959 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/e55e7660-9281-484b-b0b8-a39236b8e692-manager-config\") pod \"loki-operator-controller-manager-745c8c7958-q4zx9\" (UID: \"e55e7660-9281-484b-b0b8-a39236b8e692\") " pod="openshift-operators-redhat/loki-operator-controller-manager-745c8c7958-q4zx9" Feb 17 16:14:50 crc kubenswrapper[4874]: I0217 16:14:50.198635 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjzhn\" (UniqueName: \"kubernetes.io/projected/e55e7660-9281-484b-b0b8-a39236b8e692-kube-api-access-vjzhn\") pod \"loki-operator-controller-manager-745c8c7958-q4zx9\" (UID: \"e55e7660-9281-484b-b0b8-a39236b8e692\") " pod="openshift-operators-redhat/loki-operator-controller-manager-745c8c7958-q4zx9" Feb 17 16:14:50 crc kubenswrapper[4874]: I0217 16:14:50.198723 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/e55e7660-9281-484b-b0b8-a39236b8e692-manager-config\") pod \"loki-operator-controller-manager-745c8c7958-q4zx9\" (UID: \"e55e7660-9281-484b-b0b8-a39236b8e692\") " pod="openshift-operators-redhat/loki-operator-controller-manager-745c8c7958-q4zx9" Feb 17 16:14:50 crc kubenswrapper[4874]: I0217 16:14:50.198756 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e55e7660-9281-484b-b0b8-a39236b8e692-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-745c8c7958-q4zx9\" (UID: \"e55e7660-9281-484b-b0b8-a39236b8e692\") " pod="openshift-operators-redhat/loki-operator-controller-manager-745c8c7958-q4zx9" Feb 17 16:14:50 crc 
kubenswrapper[4874]: I0217 16:14:50.198786 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e55e7660-9281-484b-b0b8-a39236b8e692-webhook-cert\") pod \"loki-operator-controller-manager-745c8c7958-q4zx9\" (UID: \"e55e7660-9281-484b-b0b8-a39236b8e692\") " pod="openshift-operators-redhat/loki-operator-controller-manager-745c8c7958-q4zx9" Feb 17 16:14:50 crc kubenswrapper[4874]: I0217 16:14:50.198819 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e55e7660-9281-484b-b0b8-a39236b8e692-apiservice-cert\") pod \"loki-operator-controller-manager-745c8c7958-q4zx9\" (UID: \"e55e7660-9281-484b-b0b8-a39236b8e692\") " pod="openshift-operators-redhat/loki-operator-controller-manager-745c8c7958-q4zx9" Feb 17 16:14:50 crc kubenswrapper[4874]: I0217 16:14:50.199991 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/e55e7660-9281-484b-b0b8-a39236b8e692-manager-config\") pod \"loki-operator-controller-manager-745c8c7958-q4zx9\" (UID: \"e55e7660-9281-484b-b0b8-a39236b8e692\") " pod="openshift-operators-redhat/loki-operator-controller-manager-745c8c7958-q4zx9" Feb 17 16:14:50 crc kubenswrapper[4874]: I0217 16:14:50.204593 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e55e7660-9281-484b-b0b8-a39236b8e692-webhook-cert\") pod \"loki-operator-controller-manager-745c8c7958-q4zx9\" (UID: \"e55e7660-9281-484b-b0b8-a39236b8e692\") " pod="openshift-operators-redhat/loki-operator-controller-manager-745c8c7958-q4zx9" Feb 17 16:14:50 crc kubenswrapper[4874]: I0217 16:14:50.205787 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/e55e7660-9281-484b-b0b8-a39236b8e692-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-745c8c7958-q4zx9\" (UID: \"e55e7660-9281-484b-b0b8-a39236b8e692\") " pod="openshift-operators-redhat/loki-operator-controller-manager-745c8c7958-q4zx9" Feb 17 16:14:50 crc kubenswrapper[4874]: I0217 16:14:50.217539 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjzhn\" (UniqueName: \"kubernetes.io/projected/e55e7660-9281-484b-b0b8-a39236b8e692-kube-api-access-vjzhn\") pod \"loki-operator-controller-manager-745c8c7958-q4zx9\" (UID: \"e55e7660-9281-484b-b0b8-a39236b8e692\") " pod="openshift-operators-redhat/loki-operator-controller-manager-745c8c7958-q4zx9" Feb 17 16:14:50 crc kubenswrapper[4874]: I0217 16:14:50.217649 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e55e7660-9281-484b-b0b8-a39236b8e692-apiservice-cert\") pod \"loki-operator-controller-manager-745c8c7958-q4zx9\" (UID: \"e55e7660-9281-484b-b0b8-a39236b8e692\") " pod="openshift-operators-redhat/loki-operator-controller-manager-745c8c7958-q4zx9" Feb 17 16:14:50 crc kubenswrapper[4874]: I0217 16:14:50.361023 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-745c8c7958-q4zx9" Feb 17 16:14:50 crc kubenswrapper[4874]: I0217 16:14:50.628380 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-745c8c7958-q4zx9"] Feb 17 16:14:50 crc kubenswrapper[4874]: W0217 16:14:50.640030 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode55e7660_9281_484b_b0b8_a39236b8e692.slice/crio-e31a1d626977d6fa959ea4bb75e2c771e16d04d7ae981dd30413ee47f67f0549 WatchSource:0}: Error finding container e31a1d626977d6fa959ea4bb75e2c771e16d04d7ae981dd30413ee47f67f0549: Status 404 returned error can't find the container with id e31a1d626977d6fa959ea4bb75e2c771e16d04d7ae981dd30413ee47f67f0549 Feb 17 16:14:51 crc kubenswrapper[4874]: I0217 16:14:51.170435 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-745c8c7958-q4zx9" event={"ID":"e55e7660-9281-484b-b0b8-a39236b8e692","Type":"ContainerStarted","Data":"e31a1d626977d6fa959ea4bb75e2c771e16d04d7ae981dd30413ee47f67f0549"} Feb 17 16:14:54 crc kubenswrapper[4874]: I0217 16:14:54.205776 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-745c8c7958-q4zx9" event={"ID":"e55e7660-9281-484b-b0b8-a39236b8e692","Type":"ContainerStarted","Data":"c0c70755919e2178ccc8f8d4328b8882d60b418a446e8857e6d0811a639e3efc"} Feb 17 16:15:00 crc kubenswrapper[4874]: I0217 16:15:00.149714 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522415-79m2d"] Feb 17 16:15:00 crc kubenswrapper[4874]: I0217 16:15:00.151042 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-79m2d" Feb 17 16:15:00 crc kubenswrapper[4874]: I0217 16:15:00.155888 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 16:15:00 crc kubenswrapper[4874]: I0217 16:15:00.155888 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 16:15:00 crc kubenswrapper[4874]: I0217 16:15:00.171130 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522415-79m2d"] Feb 17 16:15:00 crc kubenswrapper[4874]: I0217 16:15:00.238521 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd54bcd1-35a2-4582-adab-a0926f977ae8-secret-volume\") pod \"collect-profiles-29522415-79m2d\" (UID: \"bd54bcd1-35a2-4582-adab-a0926f977ae8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-79m2d" Feb 17 16:15:00 crc kubenswrapper[4874]: I0217 16:15:00.238671 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd54bcd1-35a2-4582-adab-a0926f977ae8-config-volume\") pod \"collect-profiles-29522415-79m2d\" (UID: \"bd54bcd1-35a2-4582-adab-a0926f977ae8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-79m2d" Feb 17 16:15:00 crc kubenswrapper[4874]: I0217 16:15:00.238723 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqqnt\" (UniqueName: \"kubernetes.io/projected/bd54bcd1-35a2-4582-adab-a0926f977ae8-kube-api-access-gqqnt\") pod \"collect-profiles-29522415-79m2d\" (UID: \"bd54bcd1-35a2-4582-adab-a0926f977ae8\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-79m2d" Feb 17 16:15:00 crc kubenswrapper[4874]: I0217 16:15:00.339902 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd54bcd1-35a2-4582-adab-a0926f977ae8-config-volume\") pod \"collect-profiles-29522415-79m2d\" (UID: \"bd54bcd1-35a2-4582-adab-a0926f977ae8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-79m2d" Feb 17 16:15:00 crc kubenswrapper[4874]: I0217 16:15:00.339959 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqqnt\" (UniqueName: \"kubernetes.io/projected/bd54bcd1-35a2-4582-adab-a0926f977ae8-kube-api-access-gqqnt\") pod \"collect-profiles-29522415-79m2d\" (UID: \"bd54bcd1-35a2-4582-adab-a0926f977ae8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-79m2d" Feb 17 16:15:00 crc kubenswrapper[4874]: I0217 16:15:00.340002 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd54bcd1-35a2-4582-adab-a0926f977ae8-secret-volume\") pod \"collect-profiles-29522415-79m2d\" (UID: \"bd54bcd1-35a2-4582-adab-a0926f977ae8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-79m2d" Feb 17 16:15:00 crc kubenswrapper[4874]: I0217 16:15:00.341147 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd54bcd1-35a2-4582-adab-a0926f977ae8-config-volume\") pod \"collect-profiles-29522415-79m2d\" (UID: \"bd54bcd1-35a2-4582-adab-a0926f977ae8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-79m2d" Feb 17 16:15:00 crc kubenswrapper[4874]: I0217 16:15:00.351051 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/bd54bcd1-35a2-4582-adab-a0926f977ae8-secret-volume\") pod \"collect-profiles-29522415-79m2d\" (UID: \"bd54bcd1-35a2-4582-adab-a0926f977ae8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-79m2d" Feb 17 16:15:00 crc kubenswrapper[4874]: I0217 16:15:00.369707 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqqnt\" (UniqueName: \"kubernetes.io/projected/bd54bcd1-35a2-4582-adab-a0926f977ae8-kube-api-access-gqqnt\") pod \"collect-profiles-29522415-79m2d\" (UID: \"bd54bcd1-35a2-4582-adab-a0926f977ae8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-79m2d" Feb 17 16:15:00 crc kubenswrapper[4874]: I0217 16:15:00.472611 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-79m2d" Feb 17 16:15:02 crc kubenswrapper[4874]: I0217 16:15:02.520281 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522415-79m2d"] Feb 17 16:15:02 crc kubenswrapper[4874]: W0217 16:15:02.520919 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd54bcd1_35a2_4582_adab_a0926f977ae8.slice/crio-8e9ae20b3f30daba5dfef77b95393409d480833fee7fd36b76d73d7dc05e93f6 WatchSource:0}: Error finding container 8e9ae20b3f30daba5dfef77b95393409d480833fee7fd36b76d73d7dc05e93f6: Status 404 returned error can't find the container with id 8e9ae20b3f30daba5dfef77b95393409d480833fee7fd36b76d73d7dc05e93f6 Feb 17 16:15:03 crc kubenswrapper[4874]: I0217 16:15:03.296271 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-745c8c7958-q4zx9" event={"ID":"e55e7660-9281-484b-b0b8-a39236b8e692","Type":"ContainerStarted","Data":"7f7b7a628b5c8fd50d46ef6f5d3bf50ed40441fcf2cd7fa32fa76c7296f0de60"} Feb 17 16:15:03 crc 
kubenswrapper[4874]: I0217 16:15:03.296608 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-745c8c7958-q4zx9" Feb 17 16:15:03 crc kubenswrapper[4874]: I0217 16:15:03.298500 4874 generic.go:334] "Generic (PLEG): container finished" podID="bd54bcd1-35a2-4582-adab-a0926f977ae8" containerID="6db37ebfe0bb9479cbd22bf667fa588d596c12ab1b724c719dde6332a3c41f74" exitCode=0 Feb 17 16:15:03 crc kubenswrapper[4874]: I0217 16:15:03.298554 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-79m2d" event={"ID":"bd54bcd1-35a2-4582-adab-a0926f977ae8","Type":"ContainerDied","Data":"6db37ebfe0bb9479cbd22bf667fa588d596c12ab1b724c719dde6332a3c41f74"} Feb 17 16:15:03 crc kubenswrapper[4874]: I0217 16:15:03.298578 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-79m2d" event={"ID":"bd54bcd1-35a2-4582-adab-a0926f977ae8","Type":"ContainerStarted","Data":"8e9ae20b3f30daba5dfef77b95393409d480833fee7fd36b76d73d7dc05e93f6"} Feb 17 16:15:03 crc kubenswrapper[4874]: I0217 16:15:03.299822 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-745c8c7958-q4zx9" Feb 17 16:15:03 crc kubenswrapper[4874]: I0217 16:15:03.323705 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-745c8c7958-q4zx9" podStartSLOduration=2.764236056 podStartE2EDuration="14.323684738s" podCreationTimestamp="2026-02-17 16:14:49 +0000 UTC" firstStartedPulling="2026-02-17 16:14:50.641743371 +0000 UTC m=+700.936131932" lastFinishedPulling="2026-02-17 16:15:02.201192053 +0000 UTC m=+712.495580614" observedRunningTime="2026-02-17 16:15:03.318613581 +0000 UTC m=+713.613002212" watchObservedRunningTime="2026-02-17 16:15:03.323684738 +0000 
UTC m=+713.618073299" Feb 17 16:15:04 crc kubenswrapper[4874]: I0217 16:15:04.568953 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-79m2d" Feb 17 16:15:04 crc kubenswrapper[4874]: I0217 16:15:04.601818 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd54bcd1-35a2-4582-adab-a0926f977ae8-config-volume\") pod \"bd54bcd1-35a2-4582-adab-a0926f977ae8\" (UID: \"bd54bcd1-35a2-4582-adab-a0926f977ae8\") " Feb 17 16:15:04 crc kubenswrapper[4874]: I0217 16:15:04.602021 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqqnt\" (UniqueName: \"kubernetes.io/projected/bd54bcd1-35a2-4582-adab-a0926f977ae8-kube-api-access-gqqnt\") pod \"bd54bcd1-35a2-4582-adab-a0926f977ae8\" (UID: \"bd54bcd1-35a2-4582-adab-a0926f977ae8\") " Feb 17 16:15:04 crc kubenswrapper[4874]: I0217 16:15:04.602054 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd54bcd1-35a2-4582-adab-a0926f977ae8-secret-volume\") pod \"bd54bcd1-35a2-4582-adab-a0926f977ae8\" (UID: \"bd54bcd1-35a2-4582-adab-a0926f977ae8\") " Feb 17 16:15:04 crc kubenswrapper[4874]: I0217 16:15:04.602505 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd54bcd1-35a2-4582-adab-a0926f977ae8-config-volume" (OuterVolumeSpecName: "config-volume") pod "bd54bcd1-35a2-4582-adab-a0926f977ae8" (UID: "bd54bcd1-35a2-4582-adab-a0926f977ae8"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:04 crc kubenswrapper[4874]: I0217 16:15:04.613231 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd54bcd1-35a2-4582-adab-a0926f977ae8-kube-api-access-gqqnt" (OuterVolumeSpecName: "kube-api-access-gqqnt") pod "bd54bcd1-35a2-4582-adab-a0926f977ae8" (UID: "bd54bcd1-35a2-4582-adab-a0926f977ae8"). InnerVolumeSpecName "kube-api-access-gqqnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:04 crc kubenswrapper[4874]: I0217 16:15:04.614486 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd54bcd1-35a2-4582-adab-a0926f977ae8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bd54bcd1-35a2-4582-adab-a0926f977ae8" (UID: "bd54bcd1-35a2-4582-adab-a0926f977ae8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:04 crc kubenswrapper[4874]: I0217 16:15:04.703754 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqqnt\" (UniqueName: \"kubernetes.io/projected/bd54bcd1-35a2-4582-adab-a0926f977ae8-kube-api-access-gqqnt\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:04 crc kubenswrapper[4874]: I0217 16:15:04.703789 4874 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd54bcd1-35a2-4582-adab-a0926f977ae8-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:04 crc kubenswrapper[4874]: I0217 16:15:04.703802 4874 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd54bcd1-35a2-4582-adab-a0926f977ae8-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:05 crc kubenswrapper[4874]: I0217 16:15:05.311361 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-79m2d" 
event={"ID":"bd54bcd1-35a2-4582-adab-a0926f977ae8","Type":"ContainerDied","Data":"8e9ae20b3f30daba5dfef77b95393409d480833fee7fd36b76d73d7dc05e93f6"} Feb 17 16:15:05 crc kubenswrapper[4874]: I0217 16:15:05.311415 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e9ae20b3f30daba5dfef77b95393409d480833fee7fd36b76d73d7dc05e93f6" Feb 17 16:15:05 crc kubenswrapper[4874]: I0217 16:15:05.311378 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-79m2d" Feb 17 16:15:07 crc kubenswrapper[4874]: I0217 16:15:07.963825 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Feb 17 16:15:07 crc kubenswrapper[4874]: E0217 16:15:07.964476 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd54bcd1-35a2-4582-adab-a0926f977ae8" containerName="collect-profiles" Feb 17 16:15:07 crc kubenswrapper[4874]: I0217 16:15:07.964493 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd54bcd1-35a2-4582-adab-a0926f977ae8" containerName="collect-profiles" Feb 17 16:15:07 crc kubenswrapper[4874]: I0217 16:15:07.964634 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd54bcd1-35a2-4582-adab-a0926f977ae8" containerName="collect-profiles" Feb 17 16:15:07 crc kubenswrapper[4874]: I0217 16:15:07.965138 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Feb 17 16:15:07 crc kubenswrapper[4874]: I0217 16:15:07.970307 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Feb 17 16:15:07 crc kubenswrapper[4874]: I0217 16:15:07.972487 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Feb 17 16:15:07 crc kubenswrapper[4874]: I0217 16:15:07.979499 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 17 16:15:08 crc kubenswrapper[4874]: I0217 16:15:08.055008 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmv2m\" (UniqueName: \"kubernetes.io/projected/0e636b88-5228-471f-b0de-7ef5e2fcef31-kube-api-access-rmv2m\") pod \"minio\" (UID: \"0e636b88-5228-471f-b0de-7ef5e2fcef31\") " pod="minio-dev/minio" Feb 17 16:15:08 crc kubenswrapper[4874]: I0217 16:15:08.056605 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-abea5954-2afc-49f0-8b69-f56605fb7123\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-abea5954-2afc-49f0-8b69-f56605fb7123\") pod \"minio\" (UID: \"0e636b88-5228-471f-b0de-7ef5e2fcef31\") " pod="minio-dev/minio" Feb 17 16:15:08 crc kubenswrapper[4874]: I0217 16:15:08.158403 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmv2m\" (UniqueName: \"kubernetes.io/projected/0e636b88-5228-471f-b0de-7ef5e2fcef31-kube-api-access-rmv2m\") pod \"minio\" (UID: \"0e636b88-5228-471f-b0de-7ef5e2fcef31\") " pod="minio-dev/minio" Feb 17 16:15:08 crc kubenswrapper[4874]: I0217 16:15:08.158528 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-abea5954-2afc-49f0-8b69-f56605fb7123\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-abea5954-2afc-49f0-8b69-f56605fb7123\") pod \"minio\" (UID: 
\"0e636b88-5228-471f-b0de-7ef5e2fcef31\") " pod="minio-dev/minio" Feb 17 16:15:08 crc kubenswrapper[4874]: I0217 16:15:08.163695 4874 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:15:08 crc kubenswrapper[4874]: I0217 16:15:08.163760 4874 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-abea5954-2afc-49f0-8b69-f56605fb7123\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-abea5954-2afc-49f0-8b69-f56605fb7123\") pod \"minio\" (UID: \"0e636b88-5228-471f-b0de-7ef5e2fcef31\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/28e09cac5cfcc68719c8fc36e7448bc3b5c42e2b32bcb0bc7d005a3fe62f64e9/globalmount\"" pod="minio-dev/minio" Feb 17 16:15:08 crc kubenswrapper[4874]: I0217 16:15:08.195432 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-abea5954-2afc-49f0-8b69-f56605fb7123\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-abea5954-2afc-49f0-8b69-f56605fb7123\") pod \"minio\" (UID: \"0e636b88-5228-471f-b0de-7ef5e2fcef31\") " pod="minio-dev/minio" Feb 17 16:15:08 crc kubenswrapper[4874]: I0217 16:15:08.206840 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmv2m\" (UniqueName: \"kubernetes.io/projected/0e636b88-5228-471f-b0de-7ef5e2fcef31-kube-api-access-rmv2m\") pod \"minio\" (UID: \"0e636b88-5228-471f-b0de-7ef5e2fcef31\") " pod="minio-dev/minio" Feb 17 16:15:08 crc kubenswrapper[4874]: I0217 16:15:08.290477 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Feb 17 16:15:08 crc kubenswrapper[4874]: I0217 16:15:08.500927 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 17 16:15:09 crc kubenswrapper[4874]: I0217 16:15:09.358754 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"0e636b88-5228-471f-b0de-7ef5e2fcef31","Type":"ContainerStarted","Data":"4976e5086f052551b7aefd592e30f7516df8db66be77337f881a52721c8b46e9"} Feb 17 16:15:14 crc kubenswrapper[4874]: I0217 16:15:14.397570 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"0e636b88-5228-471f-b0de-7ef5e2fcef31","Type":"ContainerStarted","Data":"686f02bbeb88bfbae85fe8d41723defd40f1e78c1b97ec1665fc22ee763b3e7a"} Feb 17 16:15:14 crc kubenswrapper[4874]: I0217 16:15:14.428465 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=3.916773609 podStartE2EDuration="9.428446518s" podCreationTimestamp="2026-02-17 16:15:05 +0000 UTC" firstStartedPulling="2026-02-17 16:15:08.518617645 +0000 UTC m=+718.813006206" lastFinishedPulling="2026-02-17 16:15:14.030290554 +0000 UTC m=+724.324679115" observedRunningTime="2026-02-17 16:15:14.424454248 +0000 UTC m=+724.718842819" watchObservedRunningTime="2026-02-17 16:15:14.428446518 +0000 UTC m=+724.722835089" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.531915 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-b69gh"] Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.533137 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-b69gh" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.538999 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-5h9tr" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.539247 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.539548 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.539836 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.545859 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.559930 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-b69gh"] Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.655146 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtpk4\" (UniqueName: \"kubernetes.io/projected/d60c9d45-c4f3-4702-a479-c98e249e2eb4-kube-api-access-jtpk4\") pod \"logging-loki-distributor-5d5548c9f5-b69gh\" (UID: \"d60c9d45-c4f3-4702-a479-c98e249e2eb4\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-b69gh" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.655199 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d60c9d45-c4f3-4702-a479-c98e249e2eb4-config\") pod \"logging-loki-distributor-5d5548c9f5-b69gh\" (UID: \"d60c9d45-c4f3-4702-a479-c98e249e2eb4\") " 
pod="openshift-logging/logging-loki-distributor-5d5548c9f5-b69gh" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.655228 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/d60c9d45-c4f3-4702-a479-c98e249e2eb4-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-b69gh\" (UID: \"d60c9d45-c4f3-4702-a479-c98e249e2eb4\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-b69gh" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.655359 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/d60c9d45-c4f3-4702-a479-c98e249e2eb4-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-b69gh\" (UID: \"d60c9d45-c4f3-4702-a479-c98e249e2eb4\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-b69gh" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.655384 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d60c9d45-c4f3-4702-a479-c98e249e2eb4-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-b69gh\" (UID: \"d60c9d45-c4f3-4702-a479-c98e249e2eb4\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-b69gh" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.699879 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-qkpmn"] Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.701207 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-qkpmn" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.714584 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-qkpmn"] Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.717650 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.717920 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.718092 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.757115 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/d60c9d45-c4f3-4702-a479-c98e249e2eb4-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-b69gh\" (UID: \"d60c9d45-c4f3-4702-a479-c98e249e2eb4\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-b69gh" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.757172 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d60c9d45-c4f3-4702-a479-c98e249e2eb4-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-b69gh\" (UID: \"d60c9d45-c4f3-4702-a479-c98e249e2eb4\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-b69gh" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.757196 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtpk4\" (UniqueName: \"kubernetes.io/projected/d60c9d45-c4f3-4702-a479-c98e249e2eb4-kube-api-access-jtpk4\") pod 
\"logging-loki-distributor-5d5548c9f5-b69gh\" (UID: \"d60c9d45-c4f3-4702-a479-c98e249e2eb4\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-b69gh" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.757231 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d60c9d45-c4f3-4702-a479-c98e249e2eb4-config\") pod \"logging-loki-distributor-5d5548c9f5-b69gh\" (UID: \"d60c9d45-c4f3-4702-a479-c98e249e2eb4\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-b69gh" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.757255 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/d60c9d45-c4f3-4702-a479-c98e249e2eb4-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-b69gh\" (UID: \"d60c9d45-c4f3-4702-a479-c98e249e2eb4\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-b69gh" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.758671 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d60c9d45-c4f3-4702-a479-c98e249e2eb4-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-b69gh\" (UID: \"d60c9d45-c4f3-4702-a479-c98e249e2eb4\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-b69gh" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.762674 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d60c9d45-c4f3-4702-a479-c98e249e2eb4-config\") pod \"logging-loki-distributor-5d5548c9f5-b69gh\" (UID: \"d60c9d45-c4f3-4702-a479-c98e249e2eb4\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-b69gh" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.773064 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/d60c9d45-c4f3-4702-a479-c98e249e2eb4-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-b69gh\" (UID: \"d60c9d45-c4f3-4702-a479-c98e249e2eb4\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-b69gh" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.786495 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/d60c9d45-c4f3-4702-a479-c98e249e2eb4-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-b69gh\" (UID: \"d60c9d45-c4f3-4702-a479-c98e249e2eb4\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-b69gh" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.788899 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtpk4\" (UniqueName: \"kubernetes.io/projected/d60c9d45-c4f3-4702-a479-c98e249e2eb4-kube-api-access-jtpk4\") pod \"logging-loki-distributor-5d5548c9f5-b69gh\" (UID: \"d60c9d45-c4f3-4702-a479-c98e249e2eb4\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-b69gh" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.844263 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-2p4zr"] Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.846135 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-2p4zr" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.848485 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.849057 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.851055 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-b69gh" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.859857 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-2p4zr"] Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.862942 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/bc6aa0d6-36c1-4a1b-b9a4-2a42abd2aee5-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-qkpmn\" (UID: \"bc6aa0d6-36c1-4a1b-b9a4-2a42abd2aee5\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-qkpmn" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.863912 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/bc6aa0d6-36c1-4a1b-b9a4-2a42abd2aee5-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-qkpmn\" (UID: \"bc6aa0d6-36c1-4a1b-b9a4-2a42abd2aee5\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-qkpmn" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.864040 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: 
\"kubernetes.io/secret/bc6aa0d6-36c1-4a1b-b9a4-2a42abd2aee5-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-qkpmn\" (UID: \"bc6aa0d6-36c1-4a1b-b9a4-2a42abd2aee5\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-qkpmn" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.864244 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc6aa0d6-36c1-4a1b-b9a4-2a42abd2aee5-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-qkpmn\" (UID: \"bc6aa0d6-36c1-4a1b-b9a4-2a42abd2aee5\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-qkpmn" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.864350 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc6aa0d6-36c1-4a1b-b9a4-2a42abd2aee5-config\") pod \"logging-loki-querier-76bf7b6d45-qkpmn\" (UID: \"bc6aa0d6-36c1-4a1b-b9a4-2a42abd2aee5\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-qkpmn" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.864418 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpf5r\" (UniqueName: \"kubernetes.io/projected/bc6aa0d6-36c1-4a1b-b9a4-2a42abd2aee5-kube-api-access-tpf5r\") pod \"logging-loki-querier-76bf7b6d45-qkpmn\" (UID: \"bc6aa0d6-36c1-4a1b-b9a4-2a42abd2aee5\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-qkpmn" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.966184 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/bc6aa0d6-36c1-4a1b-b9a4-2a42abd2aee5-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-qkpmn\" (UID: \"bc6aa0d6-36c1-4a1b-b9a4-2a42abd2aee5\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-qkpmn" Feb 17 
16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.966225 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/c2549768-f32d-4e6e-91f7-9ba31ddd5998-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-2p4zr\" (UID: \"c2549768-f32d-4e6e-91f7-9ba31ddd5998\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-2p4zr" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.966285 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/bc6aa0d6-36c1-4a1b-b9a4-2a42abd2aee5-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-qkpmn\" (UID: \"bc6aa0d6-36c1-4a1b-b9a4-2a42abd2aee5\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-qkpmn" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.966306 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/c2549768-f32d-4e6e-91f7-9ba31ddd5998-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-2p4zr\" (UID: \"c2549768-f32d-4e6e-91f7-9ba31ddd5998\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-2p4zr" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.966342 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zggmn\" (UniqueName: \"kubernetes.io/projected/c2549768-f32d-4e6e-91f7-9ba31ddd5998-kube-api-access-zggmn\") pod \"logging-loki-query-frontend-6d6859c548-2p4zr\" (UID: \"c2549768-f32d-4e6e-91f7-9ba31ddd5998\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-2p4zr" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.966366 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc6aa0d6-36c1-4a1b-b9a4-2a42abd2aee5-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-qkpmn\" (UID: \"bc6aa0d6-36c1-4a1b-b9a4-2a42abd2aee5\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-qkpmn" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.966399 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc6aa0d6-36c1-4a1b-b9a4-2a42abd2aee5-config\") pod \"logging-loki-querier-76bf7b6d45-qkpmn\" (UID: \"bc6aa0d6-36c1-4a1b-b9a4-2a42abd2aee5\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-qkpmn" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.966416 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpf5r\" (UniqueName: \"kubernetes.io/projected/bc6aa0d6-36c1-4a1b-b9a4-2a42abd2aee5-kube-api-access-tpf5r\") pod \"logging-loki-querier-76bf7b6d45-qkpmn\" (UID: \"bc6aa0d6-36c1-4a1b-b9a4-2a42abd2aee5\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-qkpmn" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.966444 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/bc6aa0d6-36c1-4a1b-b9a4-2a42abd2aee5-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-qkpmn\" (UID: \"bc6aa0d6-36c1-4a1b-b9a4-2a42abd2aee5\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-qkpmn" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.966462 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2549768-f32d-4e6e-91f7-9ba31ddd5998-config\") pod \"logging-loki-query-frontend-6d6859c548-2p4zr\" (UID: \"c2549768-f32d-4e6e-91f7-9ba31ddd5998\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-2p4zr" Feb 17 16:15:20 
crc kubenswrapper[4874]: I0217 16:15:20.966480 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2549768-f32d-4e6e-91f7-9ba31ddd5998-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-2p4zr\" (UID: \"c2549768-f32d-4e6e-91f7-9ba31ddd5998\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-2p4zr" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.968007 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc6aa0d6-36c1-4a1b-b9a4-2a42abd2aee5-config\") pod \"logging-loki-querier-76bf7b6d45-qkpmn\" (UID: \"bc6aa0d6-36c1-4a1b-b9a4-2a42abd2aee5\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-qkpmn" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.969530 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-595f794c55-vbzmt"] Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.970126 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/bc6aa0d6-36c1-4a1b-b9a4-2a42abd2aee5-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-qkpmn\" (UID: \"bc6aa0d6-36c1-4a1b-b9a4-2a42abd2aee5\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-qkpmn" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.970495 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc6aa0d6-36c1-4a1b-b9a4-2a42abd2aee5-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-qkpmn\" (UID: \"bc6aa0d6-36c1-4a1b-b9a4-2a42abd2aee5\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-qkpmn" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.970711 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-595f794c55-vbzmt" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.975548 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/bc6aa0d6-36c1-4a1b-b9a4-2a42abd2aee5-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-qkpmn\" (UID: \"bc6aa0d6-36c1-4a1b-b9a4-2a42abd2aee5\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-qkpmn" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.975971 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.976030 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-tgzls" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.976257 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.976259 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.976404 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.976508 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.983350 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-595f794c55-vbzmt"] Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.985196 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpf5r\" (UniqueName: 
\"kubernetes.io/projected/bc6aa0d6-36c1-4a1b-b9a4-2a42abd2aee5-kube-api-access-tpf5r\") pod \"logging-loki-querier-76bf7b6d45-qkpmn\" (UID: \"bc6aa0d6-36c1-4a1b-b9a4-2a42abd2aee5\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-qkpmn" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.985979 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/bc6aa0d6-36c1-4a1b-b9a4-2a42abd2aee5-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-qkpmn\" (UID: \"bc6aa0d6-36c1-4a1b-b9a4-2a42abd2aee5\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-qkpmn" Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.990153 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-595f794c55-tvvjh"] Feb 17 16:15:20 crc kubenswrapper[4874]: I0217 16:15:20.992619 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-595f794c55-tvvjh" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.000370 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-595f794c55-tvvjh"] Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.038922 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-qkpmn" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.067443 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2549768-f32d-4e6e-91f7-9ba31ddd5998-config\") pod \"logging-loki-query-frontend-6d6859c548-2p4zr\" (UID: \"c2549768-f32d-4e6e-91f7-9ba31ddd5998\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-2p4zr" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.067482 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2549768-f32d-4e6e-91f7-9ba31ddd5998-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-2p4zr\" (UID: \"c2549768-f32d-4e6e-91f7-9ba31ddd5998\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-2p4zr" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.067519 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/c2549768-f32d-4e6e-91f7-9ba31ddd5998-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-2p4zr\" (UID: \"c2549768-f32d-4e6e-91f7-9ba31ddd5998\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-2p4zr" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.067584 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/c2549768-f32d-4e6e-91f7-9ba31ddd5998-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-2p4zr\" (UID: \"c2549768-f32d-4e6e-91f7-9ba31ddd5998\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-2p4zr" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.067836 4874 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-zggmn\" (UniqueName: \"kubernetes.io/projected/c2549768-f32d-4e6e-91f7-9ba31ddd5998-kube-api-access-zggmn\") pod \"logging-loki-query-frontend-6d6859c548-2p4zr\" (UID: \"c2549768-f32d-4e6e-91f7-9ba31ddd5998\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-2p4zr" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.069109 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2549768-f32d-4e6e-91f7-9ba31ddd5998-config\") pod \"logging-loki-query-frontend-6d6859c548-2p4zr\" (UID: \"c2549768-f32d-4e6e-91f7-9ba31ddd5998\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-2p4zr" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.069651 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2549768-f32d-4e6e-91f7-9ba31ddd5998-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-2p4zr\" (UID: \"c2549768-f32d-4e6e-91f7-9ba31ddd5998\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-2p4zr" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.078605 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/c2549768-f32d-4e6e-91f7-9ba31ddd5998-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-2p4zr\" (UID: \"c2549768-f32d-4e6e-91f7-9ba31ddd5998\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-2p4zr" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.079035 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/c2549768-f32d-4e6e-91f7-9ba31ddd5998-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-2p4zr\" (UID: \"c2549768-f32d-4e6e-91f7-9ba31ddd5998\") " 
pod="openshift-logging/logging-loki-query-frontend-6d6859c548-2p4zr" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.086845 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zggmn\" (UniqueName: \"kubernetes.io/projected/c2549768-f32d-4e6e-91f7-9ba31ddd5998-kube-api-access-zggmn\") pod \"logging-loki-query-frontend-6d6859c548-2p4zr\" (UID: \"c2549768-f32d-4e6e-91f7-9ba31ddd5998\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-2p4zr" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.171926 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/7ac7d0ae-7505-401d-a9cc-49094832b8c7-tls-secret\") pod \"logging-loki-gateway-595f794c55-tvvjh\" (UID: \"7ac7d0ae-7505-401d-a9cc-49094832b8c7\") " pod="openshift-logging/logging-loki-gateway-595f794c55-tvvjh" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.172053 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ac7d0ae-7505-401d-a9cc-49094832b8c7-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-595f794c55-tvvjh\" (UID: \"7ac7d0ae-7505-401d-a9cc-49094832b8c7\") " pod="openshift-logging/logging-loki-gateway-595f794c55-tvvjh" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.172156 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/7ac7d0ae-7505-401d-a9cc-49094832b8c7-rbac\") pod \"logging-loki-gateway-595f794c55-tvvjh\" (UID: \"7ac7d0ae-7505-401d-a9cc-49094832b8c7\") " pod="openshift-logging/logging-loki-gateway-595f794c55-tvvjh" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.172190 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/641c0952-226b-4374-b247-f7e6a67f6cc8-logging-loki-ca-bundle\") pod \"logging-loki-gateway-595f794c55-vbzmt\" (UID: \"641c0952-226b-4374-b247-f7e6a67f6cc8\") " pod="openshift-logging/logging-loki-gateway-595f794c55-vbzmt" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.172212 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/641c0952-226b-4374-b247-f7e6a67f6cc8-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-595f794c55-vbzmt\" (UID: \"641c0952-226b-4374-b247-f7e6a67f6cc8\") " pod="openshift-logging/logging-loki-gateway-595f794c55-vbzmt" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.172239 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/7ac7d0ae-7505-401d-a9cc-49094832b8c7-lokistack-gateway\") pod \"logging-loki-gateway-595f794c55-tvvjh\" (UID: \"7ac7d0ae-7505-401d-a9cc-49094832b8c7\") " pod="openshift-logging/logging-loki-gateway-595f794c55-tvvjh" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.172261 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/641c0952-226b-4374-b247-f7e6a67f6cc8-tls-secret\") pod \"logging-loki-gateway-595f794c55-vbzmt\" (UID: \"641c0952-226b-4374-b247-f7e6a67f6cc8\") " pod="openshift-logging/logging-loki-gateway-595f794c55-vbzmt" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.172328 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn7rr\" (UniqueName: \"kubernetes.io/projected/7ac7d0ae-7505-401d-a9cc-49094832b8c7-kube-api-access-dn7rr\") pod \"logging-loki-gateway-595f794c55-tvvjh\" (UID: \"7ac7d0ae-7505-401d-a9cc-49094832b8c7\") " 
pod="openshift-logging/logging-loki-gateway-595f794c55-tvvjh" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.172350 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/641c0952-226b-4374-b247-f7e6a67f6cc8-lokistack-gateway\") pod \"logging-loki-gateway-595f794c55-vbzmt\" (UID: \"641c0952-226b-4374-b247-f7e6a67f6cc8\") " pod="openshift-logging/logging-loki-gateway-595f794c55-vbzmt" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.172365 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/641c0952-226b-4374-b247-f7e6a67f6cc8-tenants\") pod \"logging-loki-gateway-595f794c55-vbzmt\" (UID: \"641c0952-226b-4374-b247-f7e6a67f6cc8\") " pod="openshift-logging/logging-loki-gateway-595f794c55-vbzmt" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.172384 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/7ac7d0ae-7505-401d-a9cc-49094832b8c7-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-595f794c55-tvvjh\" (UID: \"7ac7d0ae-7505-401d-a9cc-49094832b8c7\") " pod="openshift-logging/logging-loki-gateway-595f794c55-tvvjh" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.172408 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gr27\" (UniqueName: \"kubernetes.io/projected/641c0952-226b-4374-b247-f7e6a67f6cc8-kube-api-access-6gr27\") pod \"logging-loki-gateway-595f794c55-vbzmt\" (UID: \"641c0952-226b-4374-b247-f7e6a67f6cc8\") " pod="openshift-logging/logging-loki-gateway-595f794c55-vbzmt" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.172633 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rbac\" (UniqueName: \"kubernetes.io/configmap/641c0952-226b-4374-b247-f7e6a67f6cc8-rbac\") pod \"logging-loki-gateway-595f794c55-vbzmt\" (UID: \"641c0952-226b-4374-b247-f7e6a67f6cc8\") " pod="openshift-logging/logging-loki-gateway-595f794c55-vbzmt" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.172770 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ac7d0ae-7505-401d-a9cc-49094832b8c7-logging-loki-ca-bundle\") pod \"logging-loki-gateway-595f794c55-tvvjh\" (UID: \"7ac7d0ae-7505-401d-a9cc-49094832b8c7\") " pod="openshift-logging/logging-loki-gateway-595f794c55-tvvjh" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.172810 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/641c0952-226b-4374-b247-f7e6a67f6cc8-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-595f794c55-vbzmt\" (UID: \"641c0952-226b-4374-b247-f7e6a67f6cc8\") " pod="openshift-logging/logging-loki-gateway-595f794c55-vbzmt" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.172834 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/7ac7d0ae-7505-401d-a9cc-49094832b8c7-tenants\") pod \"logging-loki-gateway-595f794c55-tvvjh\" (UID: \"7ac7d0ae-7505-401d-a9cc-49094832b8c7\") " pod="openshift-logging/logging-loki-gateway-595f794c55-tvvjh" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.190381 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-2p4zr" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.229752 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-b69gh"] Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.274277 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/641c0952-226b-4374-b247-f7e6a67f6cc8-logging-loki-ca-bundle\") pod \"logging-loki-gateway-595f794c55-vbzmt\" (UID: \"641c0952-226b-4374-b247-f7e6a67f6cc8\") " pod="openshift-logging/logging-loki-gateway-595f794c55-vbzmt" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.274324 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/641c0952-226b-4374-b247-f7e6a67f6cc8-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-595f794c55-vbzmt\" (UID: \"641c0952-226b-4374-b247-f7e6a67f6cc8\") " pod="openshift-logging/logging-loki-gateway-595f794c55-vbzmt" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.274349 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/7ac7d0ae-7505-401d-a9cc-49094832b8c7-lokistack-gateway\") pod \"logging-loki-gateway-595f794c55-tvvjh\" (UID: \"7ac7d0ae-7505-401d-a9cc-49094832b8c7\") " pod="openshift-logging/logging-loki-gateway-595f794c55-tvvjh" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.274370 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/641c0952-226b-4374-b247-f7e6a67f6cc8-tls-secret\") pod \"logging-loki-gateway-595f794c55-vbzmt\" (UID: \"641c0952-226b-4374-b247-f7e6a67f6cc8\") " pod="openshift-logging/logging-loki-gateway-595f794c55-vbzmt" Feb 17 16:15:21 
crc kubenswrapper[4874]: I0217 16:15:21.274392 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/641c0952-226b-4374-b247-f7e6a67f6cc8-lokistack-gateway\") pod \"logging-loki-gateway-595f794c55-vbzmt\" (UID: \"641c0952-226b-4374-b247-f7e6a67f6cc8\") " pod="openshift-logging/logging-loki-gateway-595f794c55-vbzmt" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.274439 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn7rr\" (UniqueName: \"kubernetes.io/projected/7ac7d0ae-7505-401d-a9cc-49094832b8c7-kube-api-access-dn7rr\") pod \"logging-loki-gateway-595f794c55-tvvjh\" (UID: \"7ac7d0ae-7505-401d-a9cc-49094832b8c7\") " pod="openshift-logging/logging-loki-gateway-595f794c55-tvvjh" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.274455 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/641c0952-226b-4374-b247-f7e6a67f6cc8-tenants\") pod \"logging-loki-gateway-595f794c55-vbzmt\" (UID: \"641c0952-226b-4374-b247-f7e6a67f6cc8\") " pod="openshift-logging/logging-loki-gateway-595f794c55-vbzmt" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.274494 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/7ac7d0ae-7505-401d-a9cc-49094832b8c7-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-595f794c55-tvvjh\" (UID: \"7ac7d0ae-7505-401d-a9cc-49094832b8c7\") " pod="openshift-logging/logging-loki-gateway-595f794c55-tvvjh" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.274519 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gr27\" (UniqueName: \"kubernetes.io/projected/641c0952-226b-4374-b247-f7e6a67f6cc8-kube-api-access-6gr27\") pod \"logging-loki-gateway-595f794c55-vbzmt\" 
(UID: \"641c0952-226b-4374-b247-f7e6a67f6cc8\") " pod="openshift-logging/logging-loki-gateway-595f794c55-vbzmt" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.274545 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/641c0952-226b-4374-b247-f7e6a67f6cc8-rbac\") pod \"logging-loki-gateway-595f794c55-vbzmt\" (UID: \"641c0952-226b-4374-b247-f7e6a67f6cc8\") " pod="openshift-logging/logging-loki-gateway-595f794c55-vbzmt" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.274584 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ac7d0ae-7505-401d-a9cc-49094832b8c7-logging-loki-ca-bundle\") pod \"logging-loki-gateway-595f794c55-tvvjh\" (UID: \"7ac7d0ae-7505-401d-a9cc-49094832b8c7\") " pod="openshift-logging/logging-loki-gateway-595f794c55-tvvjh" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.274603 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/641c0952-226b-4374-b247-f7e6a67f6cc8-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-595f794c55-vbzmt\" (UID: \"641c0952-226b-4374-b247-f7e6a67f6cc8\") " pod="openshift-logging/logging-loki-gateway-595f794c55-vbzmt" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.274618 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/7ac7d0ae-7505-401d-a9cc-49094832b8c7-tenants\") pod \"logging-loki-gateway-595f794c55-tvvjh\" (UID: \"7ac7d0ae-7505-401d-a9cc-49094832b8c7\") " pod="openshift-logging/logging-loki-gateway-595f794c55-tvvjh" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.274641 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: 
\"kubernetes.io/configmap/7ac7d0ae-7505-401d-a9cc-49094832b8c7-rbac\") pod \"logging-loki-gateway-595f794c55-tvvjh\" (UID: \"7ac7d0ae-7505-401d-a9cc-49094832b8c7\") " pod="openshift-logging/logging-loki-gateway-595f794c55-tvvjh" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.274656 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/7ac7d0ae-7505-401d-a9cc-49094832b8c7-tls-secret\") pod \"logging-loki-gateway-595f794c55-tvvjh\" (UID: \"7ac7d0ae-7505-401d-a9cc-49094832b8c7\") " pod="openshift-logging/logging-loki-gateway-595f794c55-tvvjh" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.274670 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ac7d0ae-7505-401d-a9cc-49094832b8c7-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-595f794c55-tvvjh\" (UID: \"7ac7d0ae-7505-401d-a9cc-49094832b8c7\") " pod="openshift-logging/logging-loki-gateway-595f794c55-tvvjh" Feb 17 16:15:21 crc kubenswrapper[4874]: E0217 16:15:21.275155 4874 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Feb 17 16:15:21 crc kubenswrapper[4874]: E0217 16:15:21.275226 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/641c0952-226b-4374-b247-f7e6a67f6cc8-tls-secret podName:641c0952-226b-4374-b247-f7e6a67f6cc8 nodeName:}" failed. No retries permitted until 2026-02-17 16:15:21.775205508 +0000 UTC m=+732.069594069 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/641c0952-226b-4374-b247-f7e6a67f6cc8-tls-secret") pod "logging-loki-gateway-595f794c55-vbzmt" (UID: "641c0952-226b-4374-b247-f7e6a67f6cc8") : secret "logging-loki-gateway-http" not found Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.275337 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/641c0952-226b-4374-b247-f7e6a67f6cc8-logging-loki-ca-bundle\") pod \"logging-loki-gateway-595f794c55-vbzmt\" (UID: \"641c0952-226b-4374-b247-f7e6a67f6cc8\") " pod="openshift-logging/logging-loki-gateway-595f794c55-vbzmt" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.275512 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/7ac7d0ae-7505-401d-a9cc-49094832b8c7-lokistack-gateway\") pod \"logging-loki-gateway-595f794c55-tvvjh\" (UID: \"7ac7d0ae-7505-401d-a9cc-49094832b8c7\") " pod="openshift-logging/logging-loki-gateway-595f794c55-tvvjh" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.275535 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ac7d0ae-7505-401d-a9cc-49094832b8c7-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-595f794c55-tvvjh\" (UID: \"7ac7d0ae-7505-401d-a9cc-49094832b8c7\") " pod="openshift-logging/logging-loki-gateway-595f794c55-tvvjh" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.275567 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/641c0952-226b-4374-b247-f7e6a67f6cc8-lokistack-gateway\") pod \"logging-loki-gateway-595f794c55-vbzmt\" (UID: \"641c0952-226b-4374-b247-f7e6a67f6cc8\") " pod="openshift-logging/logging-loki-gateway-595f794c55-vbzmt" Feb 17 16:15:21 crc 
kubenswrapper[4874]: E0217 16:15:21.275611 4874 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Feb 17 16:15:21 crc kubenswrapper[4874]: E0217 16:15:21.275653 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ac7d0ae-7505-401d-a9cc-49094832b8c7-tls-secret podName:7ac7d0ae-7505-401d-a9cc-49094832b8c7 nodeName:}" failed. No retries permitted until 2026-02-17 16:15:21.775637759 +0000 UTC m=+732.070026320 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/7ac7d0ae-7505-401d-a9cc-49094832b8c7-tls-secret") pod "logging-loki-gateway-595f794c55-tvvjh" (UID: "7ac7d0ae-7505-401d-a9cc-49094832b8c7") : secret "logging-loki-gateway-http" not found Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.276136 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/7ac7d0ae-7505-401d-a9cc-49094832b8c7-rbac\") pod \"logging-loki-gateway-595f794c55-tvvjh\" (UID: \"7ac7d0ae-7505-401d-a9cc-49094832b8c7\") " pod="openshift-logging/logging-loki-gateway-595f794c55-tvvjh" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.276300 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ac7d0ae-7505-401d-a9cc-49094832b8c7-logging-loki-ca-bundle\") pod \"logging-loki-gateway-595f794c55-tvvjh\" (UID: \"7ac7d0ae-7505-401d-a9cc-49094832b8c7\") " pod="openshift-logging/logging-loki-gateway-595f794c55-tvvjh" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.276319 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/641c0952-226b-4374-b247-f7e6a67f6cc8-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-595f794c55-vbzmt\" (UID: 
\"641c0952-226b-4374-b247-f7e6a67f6cc8\") " pod="openshift-logging/logging-loki-gateway-595f794c55-vbzmt" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.278255 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/641c0952-226b-4374-b247-f7e6a67f6cc8-rbac\") pod \"logging-loki-gateway-595f794c55-vbzmt\" (UID: \"641c0952-226b-4374-b247-f7e6a67f6cc8\") " pod="openshift-logging/logging-loki-gateway-595f794c55-vbzmt" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.278919 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/641c0952-226b-4374-b247-f7e6a67f6cc8-tenants\") pod \"logging-loki-gateway-595f794c55-vbzmt\" (UID: \"641c0952-226b-4374-b247-f7e6a67f6cc8\") " pod="openshift-logging/logging-loki-gateway-595f794c55-vbzmt" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.284747 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/7ac7d0ae-7505-401d-a9cc-49094832b8c7-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-595f794c55-tvvjh\" (UID: \"7ac7d0ae-7505-401d-a9cc-49094832b8c7\") " pod="openshift-logging/logging-loki-gateway-595f794c55-tvvjh" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.287268 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/7ac7d0ae-7505-401d-a9cc-49094832b8c7-tenants\") pod \"logging-loki-gateway-595f794c55-tvvjh\" (UID: \"7ac7d0ae-7505-401d-a9cc-49094832b8c7\") " pod="openshift-logging/logging-loki-gateway-595f794c55-tvvjh" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.292628 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/641c0952-226b-4374-b247-f7e6a67f6cc8-logging-loki-gateway-client-http\") pod 
\"logging-loki-gateway-595f794c55-vbzmt\" (UID: \"641c0952-226b-4374-b247-f7e6a67f6cc8\") " pod="openshift-logging/logging-loki-gateway-595f794c55-vbzmt" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.295654 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn7rr\" (UniqueName: \"kubernetes.io/projected/7ac7d0ae-7505-401d-a9cc-49094832b8c7-kube-api-access-dn7rr\") pod \"logging-loki-gateway-595f794c55-tvvjh\" (UID: \"7ac7d0ae-7505-401d-a9cc-49094832b8c7\") " pod="openshift-logging/logging-loki-gateway-595f794c55-tvvjh" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.295729 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gr27\" (UniqueName: \"kubernetes.io/projected/641c0952-226b-4374-b247-f7e6a67f6cc8-kube-api-access-6gr27\") pod \"logging-loki-gateway-595f794c55-vbzmt\" (UID: \"641c0952-226b-4374-b247-f7e6a67f6cc8\") " pod="openshift-logging/logging-loki-gateway-595f794c55-vbzmt" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.452292 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-b69gh" event={"ID":"d60c9d45-c4f3-4702-a479-c98e249e2eb4","Type":"ContainerStarted","Data":"1b6c3f7302a360e9a8eb9ba166ad66da82adf039211454ea222e539507c4cec0"} Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.552543 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-qkpmn"] Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.680908 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.682625 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.684747 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.684980 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.698996 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-2p4zr"] Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.725197 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.774200 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.775318 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.780064 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.780170 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.782958 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/7ac7d0ae-7505-401d-a9cc-49094832b8c7-tls-secret\") pod \"logging-loki-gateway-595f794c55-tvvjh\" (UID: \"7ac7d0ae-7505-401d-a9cc-49094832b8c7\") " pod="openshift-logging/logging-loki-gateway-595f794c55-tvvjh" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.783049 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/641c0952-226b-4374-b247-f7e6a67f6cc8-tls-secret\") pod \"logging-loki-gateway-595f794c55-vbzmt\" (UID: \"641c0952-226b-4374-b247-f7e6a67f6cc8\") " pod="openshift-logging/logging-loki-gateway-595f794c55-vbzmt" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.790141 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/641c0952-226b-4374-b247-f7e6a67f6cc8-tls-secret\") pod \"logging-loki-gateway-595f794c55-vbzmt\" (UID: \"641c0952-226b-4374-b247-f7e6a67f6cc8\") " pod="openshift-logging/logging-loki-gateway-595f794c55-vbzmt" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.791031 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/7ac7d0ae-7505-401d-a9cc-49094832b8c7-tls-secret\") pod \"logging-loki-gateway-595f794c55-tvvjh\" (UID: \"7ac7d0ae-7505-401d-a9cc-49094832b8c7\") " 
pod="openshift-logging/logging-loki-gateway-595f794c55-tvvjh" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.797728 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.884032 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ef4bab3a-2a5a-4c0a-ab9f-1e2e4862bb1a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef4bab3a-2a5a-4c0a-ab9f-1e2e4862bb1a\") pod \"logging-loki-ingester-0\" (UID: \"f0d776a8-9060-4156-931f-fcbe335a8488\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.884118 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0d776a8-9060-4156-931f-fcbe335a8488-config\") pod \"logging-loki-ingester-0\" (UID: \"f0d776a8-9060-4156-931f-fcbe335a8488\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.884152 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5f54572-957d-428e-9c13-0f45aa7dc6e5-config\") pod \"logging-loki-compactor-0\" (UID: \"e5f54572-957d-428e-9c13-0f45aa7dc6e5\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.884179 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5f54572-957d-428e-9c13-0f45aa7dc6e5-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"e5f54572-957d-428e-9c13-0f45aa7dc6e5\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.884208 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0d9535b2-ab8e-4389-9e4c-c113d6e69419\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d9535b2-ab8e-4389-9e4c-c113d6e69419\") pod \"logging-loki-compactor-0\" (UID: \"e5f54572-957d-428e-9c13-0f45aa7dc6e5\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.884236 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/e5f54572-957d-428e-9c13-0f45aa7dc6e5-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"e5f54572-957d-428e-9c13-0f45aa7dc6e5\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.884275 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/e5f54572-957d-428e-9c13-0f45aa7dc6e5-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"e5f54572-957d-428e-9c13-0f45aa7dc6e5\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.884300 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wxzq\" (UniqueName: \"kubernetes.io/projected/f0d776a8-9060-4156-931f-fcbe335a8488-kube-api-access-8wxzq\") pod \"logging-loki-ingester-0\" (UID: \"f0d776a8-9060-4156-931f-fcbe335a8488\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.884325 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/f0d776a8-9060-4156-931f-fcbe335a8488-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"f0d776a8-9060-4156-931f-fcbe335a8488\") " 
pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.884352 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/f0d776a8-9060-4156-931f-fcbe335a8488-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"f0d776a8-9060-4156-931f-fcbe335a8488\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.884385 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ee3548b4-0553-4165-a8c1-59bd33cfa3a5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee3548b4-0553-4165-a8c1-59bd33cfa3a5\") pod \"logging-loki-ingester-0\" (UID: \"f0d776a8-9060-4156-931f-fcbe335a8488\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.884410 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd5j4\" (UniqueName: \"kubernetes.io/projected/e5f54572-957d-428e-9c13-0f45aa7dc6e5-kube-api-access-qd5j4\") pod \"logging-loki-compactor-0\" (UID: \"e5f54572-957d-428e-9c13-0f45aa7dc6e5\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.884431 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/f0d776a8-9060-4156-931f-fcbe335a8488-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"f0d776a8-9060-4156-931f-fcbe335a8488\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.884666 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: 
\"kubernetes.io/secret/e5f54572-957d-428e-9c13-0f45aa7dc6e5-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"e5f54572-957d-428e-9c13-0f45aa7dc6e5\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.884808 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0d776a8-9060-4156-931f-fcbe335a8488-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"f0d776a8-9060-4156-931f-fcbe335a8488\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.886325 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.887240 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.889403 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.891189 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.903478 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.961263 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-595f794c55-vbzmt" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.972305 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-595f794c55-tvvjh" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.985585 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wxzq\" (UniqueName: \"kubernetes.io/projected/f0d776a8-9060-4156-931f-fcbe335a8488-kube-api-access-8wxzq\") pod \"logging-loki-ingester-0\" (UID: \"f0d776a8-9060-4156-931f-fcbe335a8488\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.985631 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/5215c52d-dda2-4bf6-bf99-dffdcc73f289-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"5215c52d-dda2-4bf6-bf99-dffdcc73f289\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.985650 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/f0d776a8-9060-4156-931f-fcbe335a8488-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"f0d776a8-9060-4156-931f-fcbe335a8488\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.985671 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/e5f54572-957d-428e-9c13-0f45aa7dc6e5-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"e5f54572-957d-428e-9c13-0f45aa7dc6e5\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.985695 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f0d776a8-9060-4156-931f-fcbe335a8488-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"f0d776a8-9060-4156-931f-fcbe335a8488\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.985718 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d061686d-aa09-47f9-b9df-14af57b63100\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d061686d-aa09-47f9-b9df-14af57b63100\") pod \"logging-loki-index-gateway-0\" (UID: \"5215c52d-dda2-4bf6-bf99-dffdcc73f289\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.985740 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ef4bab3a-2a5a-4c0a-ab9f-1e2e4862bb1a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef4bab3a-2a5a-4c0a-ab9f-1e2e4862bb1a\") pod \"logging-loki-ingester-0\" (UID: \"f0d776a8-9060-4156-931f-fcbe335a8488\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.985761 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0d776a8-9060-4156-931f-fcbe335a8488-config\") pod \"logging-loki-ingester-0\" (UID: \"f0d776a8-9060-4156-931f-fcbe335a8488\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.987140 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5f54572-957d-428e-9c13-0f45aa7dc6e5-config\") pod \"logging-loki-compactor-0\" (UID: \"e5f54572-957d-428e-9c13-0f45aa7dc6e5\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.987186 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-djhr2\" (UniqueName: \"kubernetes.io/projected/5215c52d-dda2-4bf6-bf99-dffdcc73f289-kube-api-access-djhr2\") pod \"logging-loki-index-gateway-0\" (UID: \"5215c52d-dda2-4bf6-bf99-dffdcc73f289\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.987206 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5215c52d-dda2-4bf6-bf99-dffdcc73f289-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"5215c52d-dda2-4bf6-bf99-dffdcc73f289\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.987229 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/e5f54572-957d-428e-9c13-0f45aa7dc6e5-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"e5f54572-957d-428e-9c13-0f45aa7dc6e5\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.987256 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/e5f54572-957d-428e-9c13-0f45aa7dc6e5-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"e5f54572-957d-428e-9c13-0f45aa7dc6e5\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.987278 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/f0d776a8-9060-4156-931f-fcbe335a8488-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"f0d776a8-9060-4156-931f-fcbe335a8488\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.987297 4874 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/f0d776a8-9060-4156-931f-fcbe335a8488-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"f0d776a8-9060-4156-931f-fcbe335a8488\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.987319 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ee3548b4-0553-4165-a8c1-59bd33cfa3a5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee3548b4-0553-4165-a8c1-59bd33cfa3a5\") pod \"logging-loki-ingester-0\" (UID: \"f0d776a8-9060-4156-931f-fcbe335a8488\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.987337 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd5j4\" (UniqueName: \"kubernetes.io/projected/e5f54572-957d-428e-9c13-0f45aa7dc6e5-kube-api-access-qd5j4\") pod \"logging-loki-compactor-0\" (UID: \"e5f54572-957d-428e-9c13-0f45aa7dc6e5\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.987361 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5215c52d-dda2-4bf6-bf99-dffdcc73f289-config\") pod \"logging-loki-index-gateway-0\" (UID: \"5215c52d-dda2-4bf6-bf99-dffdcc73f289\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.987393 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/5215c52d-dda2-4bf6-bf99-dffdcc73f289-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"5215c52d-dda2-4bf6-bf99-dffdcc73f289\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 16:15:21 crc 
kubenswrapper[4874]: I0217 16:15:21.987419 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5f54572-957d-428e-9c13-0f45aa7dc6e5-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"e5f54572-957d-428e-9c13-0f45aa7dc6e5\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.987439 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0d9535b2-ab8e-4389-9e4c-c113d6e69419\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d9535b2-ab8e-4389-9e4c-c113d6e69419\") pod \"logging-loki-compactor-0\" (UID: \"e5f54572-957d-428e-9c13-0f45aa7dc6e5\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.987461 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/5215c52d-dda2-4bf6-bf99-dffdcc73f289-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"5215c52d-dda2-4bf6-bf99-dffdcc73f289\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.988191 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0d776a8-9060-4156-931f-fcbe335a8488-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"f0d776a8-9060-4156-931f-fcbe335a8488\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.990883 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5f54572-957d-428e-9c13-0f45aa7dc6e5-config\") pod \"logging-loki-compactor-0\" (UID: \"e5f54572-957d-428e-9c13-0f45aa7dc6e5\") " 
pod="openshift-logging/logging-loki-compactor-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.991317 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0d776a8-9060-4156-931f-fcbe335a8488-config\") pod \"logging-loki-ingester-0\" (UID: \"f0d776a8-9060-4156-931f-fcbe335a8488\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.991761 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/e5f54572-957d-428e-9c13-0f45aa7dc6e5-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"e5f54572-957d-428e-9c13-0f45aa7dc6e5\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.992509 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5f54572-957d-428e-9c13-0f45aa7dc6e5-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"e5f54572-957d-428e-9c13-0f45aa7dc6e5\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.995044 4874 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.995110 4874 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.995110 4874 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0d9535b2-ab8e-4389-9e4c-c113d6e69419\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d9535b2-ab8e-4389-9e4c-c113d6e69419\") pod \"logging-loki-compactor-0\" (UID: \"e5f54572-957d-428e-9c13-0f45aa7dc6e5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c86214d2c4cfafee956c3a1df8eb6e6e548d4b65ac1b5c4de943abe7fbe48e9d/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.995163 4874 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ef4bab3a-2a5a-4c0a-ab9f-1e2e4862bb1a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef4bab3a-2a5a-4c0a-ab9f-1e2e4862bb1a\") pod \"logging-loki-ingester-0\" (UID: \"f0d776a8-9060-4156-931f-fcbe335a8488\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/82e138f87f37b7942ac809e8eaa47b5e008d8f60235494b34ee3dd05754302e6/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.995110 4874 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.995308 4874 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ee3548b4-0553-4165-a8c1-59bd33cfa3a5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee3548b4-0553-4165-a8c1-59bd33cfa3a5\") pod \"logging-loki-ingester-0\" (UID: \"f0d776a8-9060-4156-931f-fcbe335a8488\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/037bedd6f41fb3c662cf590017bdfd1ff78d78813765ef3c6fece73e65617ee8/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.995516 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/f0d776a8-9060-4156-931f-fcbe335a8488-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"f0d776a8-9060-4156-931f-fcbe335a8488\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:15:21 crc kubenswrapper[4874]: I0217 16:15:21.998991 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/f0d776a8-9060-4156-931f-fcbe335a8488-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"f0d776a8-9060-4156-931f-fcbe335a8488\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:15:22 crc kubenswrapper[4874]: I0217 16:15:22.001881 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/e5f54572-957d-428e-9c13-0f45aa7dc6e5-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"e5f54572-957d-428e-9c13-0f45aa7dc6e5\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 16:15:22 crc kubenswrapper[4874]: I0217 16:15:22.002609 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: 
\"kubernetes.io/secret/e5f54572-957d-428e-9c13-0f45aa7dc6e5-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"e5f54572-957d-428e-9c13-0f45aa7dc6e5\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 16:15:22 crc kubenswrapper[4874]: I0217 16:15:22.003172 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/f0d776a8-9060-4156-931f-fcbe335a8488-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"f0d776a8-9060-4156-931f-fcbe335a8488\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:15:22 crc kubenswrapper[4874]: I0217 16:15:22.010843 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wxzq\" (UniqueName: \"kubernetes.io/projected/f0d776a8-9060-4156-931f-fcbe335a8488-kube-api-access-8wxzq\") pod \"logging-loki-ingester-0\" (UID: \"f0d776a8-9060-4156-931f-fcbe335a8488\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:15:22 crc kubenswrapper[4874]: I0217 16:15:22.017777 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd5j4\" (UniqueName: \"kubernetes.io/projected/e5f54572-957d-428e-9c13-0f45aa7dc6e5-kube-api-access-qd5j4\") pod \"logging-loki-compactor-0\" (UID: \"e5f54572-957d-428e-9c13-0f45aa7dc6e5\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 16:15:22 crc kubenswrapper[4874]: I0217 16:15:22.025032 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ef4bab3a-2a5a-4c0a-ab9f-1e2e4862bb1a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef4bab3a-2a5a-4c0a-ab9f-1e2e4862bb1a\") pod \"logging-loki-ingester-0\" (UID: \"f0d776a8-9060-4156-931f-fcbe335a8488\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:15:22 crc kubenswrapper[4874]: I0217 16:15:22.025775 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-ee3548b4-0553-4165-a8c1-59bd33cfa3a5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee3548b4-0553-4165-a8c1-59bd33cfa3a5\") pod \"logging-loki-ingester-0\" (UID: \"f0d776a8-9060-4156-931f-fcbe335a8488\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:15:22 crc kubenswrapper[4874]: I0217 16:15:22.036275 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0d9535b2-ab8e-4389-9e4c-c113d6e69419\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d9535b2-ab8e-4389-9e4c-c113d6e69419\") pod \"logging-loki-compactor-0\" (UID: \"e5f54572-957d-428e-9c13-0f45aa7dc6e5\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 16:15:22 crc kubenswrapper[4874]: I0217 16:15:22.090262 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5215c52d-dda2-4bf6-bf99-dffdcc73f289-config\") pod \"logging-loki-index-gateway-0\" (UID: \"5215c52d-dda2-4bf6-bf99-dffdcc73f289\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 16:15:22 crc kubenswrapper[4874]: I0217 16:15:22.090315 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/5215c52d-dda2-4bf6-bf99-dffdcc73f289-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"5215c52d-dda2-4bf6-bf99-dffdcc73f289\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 16:15:22 crc kubenswrapper[4874]: I0217 16:15:22.090348 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/5215c52d-dda2-4bf6-bf99-dffdcc73f289-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"5215c52d-dda2-4bf6-bf99-dffdcc73f289\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 16:15:22 crc kubenswrapper[4874]: I0217 16:15:22.090380 4874 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/5215c52d-dda2-4bf6-bf99-dffdcc73f289-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"5215c52d-dda2-4bf6-bf99-dffdcc73f289\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 16:15:22 crc kubenswrapper[4874]: I0217 16:15:22.090414 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d061686d-aa09-47f9-b9df-14af57b63100\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d061686d-aa09-47f9-b9df-14af57b63100\") pod \"logging-loki-index-gateway-0\" (UID: \"5215c52d-dda2-4bf6-bf99-dffdcc73f289\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 16:15:22 crc kubenswrapper[4874]: I0217 16:15:22.090443 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5215c52d-dda2-4bf6-bf99-dffdcc73f289-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"5215c52d-dda2-4bf6-bf99-dffdcc73f289\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 16:15:22 crc kubenswrapper[4874]: I0217 16:15:22.090458 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djhr2\" (UniqueName: \"kubernetes.io/projected/5215c52d-dda2-4bf6-bf99-dffdcc73f289-kube-api-access-djhr2\") pod \"logging-loki-index-gateway-0\" (UID: \"5215c52d-dda2-4bf6-bf99-dffdcc73f289\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 16:15:22 crc kubenswrapper[4874]: I0217 16:15:22.091548 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5215c52d-dda2-4bf6-bf99-dffdcc73f289-config\") pod \"logging-loki-index-gateway-0\" (UID: \"5215c52d-dda2-4bf6-bf99-dffdcc73f289\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 
17 16:15:22 crc kubenswrapper[4874]: I0217 16:15:22.093523 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5215c52d-dda2-4bf6-bf99-dffdcc73f289-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"5215c52d-dda2-4bf6-bf99-dffdcc73f289\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 16:15:22 crc kubenswrapper[4874]: I0217 16:15:22.097417 4874 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:15:22 crc kubenswrapper[4874]: I0217 16:15:22.097475 4874 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d061686d-aa09-47f9-b9df-14af57b63100\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d061686d-aa09-47f9-b9df-14af57b63100\") pod \"logging-loki-index-gateway-0\" (UID: \"5215c52d-dda2-4bf6-bf99-dffdcc73f289\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a8871a64e6fd7214290c56a87244865803519351adec9c75f1a6186e79e6fd86/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 16:15:22 crc kubenswrapper[4874]: I0217 16:15:22.097726 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/5215c52d-dda2-4bf6-bf99-dffdcc73f289-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"5215c52d-dda2-4bf6-bf99-dffdcc73f289\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 16:15:22 crc kubenswrapper[4874]: I0217 16:15:22.099820 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/5215c52d-dda2-4bf6-bf99-dffdcc73f289-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"5215c52d-dda2-4bf6-bf99-dffdcc73f289\") " 
pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 16:15:22 crc kubenswrapper[4874]: I0217 16:15:22.100494 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/5215c52d-dda2-4bf6-bf99-dffdcc73f289-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"5215c52d-dda2-4bf6-bf99-dffdcc73f289\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 16:15:22 crc kubenswrapper[4874]: I0217 16:15:22.127110 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djhr2\" (UniqueName: \"kubernetes.io/projected/5215c52d-dda2-4bf6-bf99-dffdcc73f289-kube-api-access-djhr2\") pod \"logging-loki-index-gateway-0\" (UID: \"5215c52d-dda2-4bf6-bf99-dffdcc73f289\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 16:15:22 crc kubenswrapper[4874]: I0217 16:15:22.128323 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d061686d-aa09-47f9-b9df-14af57b63100\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d061686d-aa09-47f9-b9df-14af57b63100\") pod \"logging-loki-index-gateway-0\" (UID: \"5215c52d-dda2-4bf6-bf99-dffdcc73f289\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 16:15:22 crc kubenswrapper[4874]: I0217 16:15:22.138071 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 17 16:15:22 crc kubenswrapper[4874]: I0217 16:15:22.216427 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 16:15:22 crc kubenswrapper[4874]: I0217 16:15:22.292340 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-595f794c55-vbzmt"] Feb 17 16:15:22 crc kubenswrapper[4874]: W0217 16:15:22.294745 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod641c0952_226b_4374_b247_f7e6a67f6cc8.slice/crio-7dcb0d791d66aa4f28b18195a4a702dc28556d64c3866f24bda5c6b677967e65 WatchSource:0}: Error finding container 7dcb0d791d66aa4f28b18195a4a702dc28556d64c3866f24bda5c6b677967e65: Status 404 returned error can't find the container with id 7dcb0d791d66aa4f28b18195a4a702dc28556d64c3866f24bda5c6b677967e65 Feb 17 16:15:22 crc kubenswrapper[4874]: I0217 16:15:22.300137 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:15:22 crc kubenswrapper[4874]: I0217 16:15:22.352490 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-595f794c55-tvvjh"] Feb 17 16:15:22 crc kubenswrapper[4874]: W0217 16:15:22.360878 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ac7d0ae_7505_401d_a9cc_49094832b8c7.slice/crio-3a0eb769979fcdc74489d80ad72ad2a92a75a9861919f3919caff1a5acc11bd6 WatchSource:0}: Error finding container 3a0eb769979fcdc74489d80ad72ad2a92a75a9861919f3919caff1a5acc11bd6: Status 404 returned error can't find the container with id 3a0eb769979fcdc74489d80ad72ad2a92a75a9861919f3919caff1a5acc11bd6 Feb 17 16:15:22 crc kubenswrapper[4874]: I0217 16:15:22.418723 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 17 16:15:22 crc kubenswrapper[4874]: W0217 16:15:22.432356 4874 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5f54572_957d_428e_9c13_0f45aa7dc6e5.slice/crio-00005cdc36bfa9aec2a94b1b7d7e546acfecde42b5612360210931d28a27154e WatchSource:0}: Error finding container 00005cdc36bfa9aec2a94b1b7d7e546acfecde42b5612360210931d28a27154e: Status 404 returned error can't find the container with id 00005cdc36bfa9aec2a94b1b7d7e546acfecde42b5612360210931d28a27154e Feb 17 16:15:22 crc kubenswrapper[4874]: I0217 16:15:22.469781 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-595f794c55-vbzmt" event={"ID":"641c0952-226b-4374-b247-f7e6a67f6cc8","Type":"ContainerStarted","Data":"7dcb0d791d66aa4f28b18195a4a702dc28556d64c3866f24bda5c6b677967e65"} Feb 17 16:15:22 crc kubenswrapper[4874]: I0217 16:15:22.469829 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-qkpmn" event={"ID":"bc6aa0d6-36c1-4a1b-b9a4-2a42abd2aee5","Type":"ContainerStarted","Data":"98e3512fe8c422262d2f48920016269f8294a9e901a14651580e27178acc196f"} Feb 17 16:15:22 crc kubenswrapper[4874]: I0217 16:15:22.469844 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"e5f54572-957d-428e-9c13-0f45aa7dc6e5","Type":"ContainerStarted","Data":"00005cdc36bfa9aec2a94b1b7d7e546acfecde42b5612360210931d28a27154e"} Feb 17 16:15:22 crc kubenswrapper[4874]: I0217 16:15:22.470237 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-595f794c55-tvvjh" event={"ID":"7ac7d0ae-7505-401d-a9cc-49094832b8c7","Type":"ContainerStarted","Data":"3a0eb769979fcdc74489d80ad72ad2a92a75a9861919f3919caff1a5acc11bd6"} Feb 17 16:15:22 crc kubenswrapper[4874]: I0217 16:15:22.471670 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-2p4zr" 
event={"ID":"c2549768-f32d-4e6e-91f7-9ba31ddd5998","Type":"ContainerStarted","Data":"b6e9daba46d1c43e48dbafb74d10c70ebbc5815181b60ed7b0e2345593f40c9c"} Feb 17 16:15:22 crc kubenswrapper[4874]: I0217 16:15:22.509650 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 17 16:15:22 crc kubenswrapper[4874]: W0217 16:15:22.531856 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5215c52d_dda2_4bf6_bf99_dffdcc73f289.slice/crio-ae0b069978ccdad1f6add385b86633f2cceb9f59beedb1d6d3b3c88146f32052 WatchSource:0}: Error finding container ae0b069978ccdad1f6add385b86633f2cceb9f59beedb1d6d3b3c88146f32052: Status 404 returned error can't find the container with id ae0b069978ccdad1f6add385b86633f2cceb9f59beedb1d6d3b3c88146f32052 Feb 17 16:15:22 crc kubenswrapper[4874]: I0217 16:15:22.578968 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 17 16:15:22 crc kubenswrapper[4874]: W0217 16:15:22.588591 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0d776a8_9060_4156_931f_fcbe335a8488.slice/crio-2be894bf7c1a4d0c1dc69b65c7b90f6e0cee3659db60f82c170af9968b0991aa WatchSource:0}: Error finding container 2be894bf7c1a4d0c1dc69b65c7b90f6e0cee3659db60f82c170af9968b0991aa: Status 404 returned error can't find the container with id 2be894bf7c1a4d0c1dc69b65c7b90f6e0cee3659db60f82c170af9968b0991aa Feb 17 16:15:23 crc kubenswrapper[4874]: I0217 16:15:23.483339 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"f0d776a8-9060-4156-931f-fcbe335a8488","Type":"ContainerStarted","Data":"2be894bf7c1a4d0c1dc69b65c7b90f6e0cee3659db60f82c170af9968b0991aa"} Feb 17 16:15:23 crc kubenswrapper[4874]: I0217 16:15:23.485187 4874 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"5215c52d-dda2-4bf6-bf99-dffdcc73f289","Type":"ContainerStarted","Data":"ae0b069978ccdad1f6add385b86633f2cceb9f59beedb1d6d3b3c88146f32052"} Feb 17 16:15:26 crc kubenswrapper[4874]: I0217 16:15:26.508821 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-b69gh" event={"ID":"d60c9d45-c4f3-4702-a479-c98e249e2eb4","Type":"ContainerStarted","Data":"f8f2642ef032fc4b07bae1cac1ab040b0dff77655ce2bf672b4fdd939fdf279b"} Feb 17 16:15:26 crc kubenswrapper[4874]: I0217 16:15:26.509324 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-b69gh" Feb 17 16:15:26 crc kubenswrapper[4874]: I0217 16:15:26.510525 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"e5f54572-957d-428e-9c13-0f45aa7dc6e5","Type":"ContainerStarted","Data":"b242d6db23866071b65ff9ec56ef3b6cc7fe097747ebee06731f57432eaccf94"} Feb 17 16:15:26 crc kubenswrapper[4874]: I0217 16:15:26.510690 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Feb 17 16:15:26 crc kubenswrapper[4874]: I0217 16:15:26.512222 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-595f794c55-tvvjh" event={"ID":"7ac7d0ae-7505-401d-a9cc-49094832b8c7","Type":"ContainerStarted","Data":"4189759309a998f0bcfd5a3f96ac708e457f7f874d152f0de96d949fb9eb136f"} Feb 17 16:15:26 crc kubenswrapper[4874]: I0217 16:15:26.513920 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"5215c52d-dda2-4bf6-bf99-dffdcc73f289","Type":"ContainerStarted","Data":"31d7c02910ab8faf768eca74d1d9ba99ad22011c35d22f9ef279cd4b875d69ef"} Feb 17 16:15:26 crc kubenswrapper[4874]: I0217 16:15:26.514066 4874 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 16:15:26 crc kubenswrapper[4874]: I0217 16:15:26.515360 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-2p4zr" event={"ID":"c2549768-f32d-4e6e-91f7-9ba31ddd5998","Type":"ContainerStarted","Data":"5e57444fa79e2baf4b4417920a7115e2a54c3b0ef5b331916fc0ee967ddec9c5"} Feb 17 16:15:26 crc kubenswrapper[4874]: I0217 16:15:26.516043 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-2p4zr" Feb 17 16:15:26 crc kubenswrapper[4874]: I0217 16:15:26.517244 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-595f794c55-vbzmt" event={"ID":"641c0952-226b-4374-b247-f7e6a67f6cc8","Type":"ContainerStarted","Data":"9161db9b61d543a363f9120f7fe89f953fb592053e60ddcfdba4ddc0dc497ce1"} Feb 17 16:15:26 crc kubenswrapper[4874]: I0217 16:15:26.518709 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-qkpmn" event={"ID":"bc6aa0d6-36c1-4a1b-b9a4-2a42abd2aee5","Type":"ContainerStarted","Data":"3ffc338524aad7e9d3209836cbc1c46ec062ef9d34fc981ff2a84ea0a647e9d7"} Feb 17 16:15:26 crc kubenswrapper[4874]: I0217 16:15:26.518829 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76bf7b6d45-qkpmn" Feb 17 16:15:26 crc kubenswrapper[4874]: I0217 16:15:26.520024 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"f0d776a8-9060-4156-931f-fcbe335a8488","Type":"ContainerStarted","Data":"2a6832056d5cc4e385c53ddd03857a810f667d6b1014d9253293eb0a45fc9be0"} Feb 17 16:15:26 crc kubenswrapper[4874]: I0217 16:15:26.520220 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Feb 17 
16:15:26 crc kubenswrapper[4874]: I0217 16:15:26.563804 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-2p4zr" podStartSLOduration=2.972967191 podStartE2EDuration="6.563779869s" podCreationTimestamp="2026-02-17 16:15:20 +0000 UTC" firstStartedPulling="2026-02-17 16:15:21.698253656 +0000 UTC m=+731.992642217" lastFinishedPulling="2026-02-17 16:15:25.289066324 +0000 UTC m=+735.583454895" observedRunningTime="2026-02-17 16:15:26.561202254 +0000 UTC m=+736.855590875" watchObservedRunningTime="2026-02-17 16:15:26.563779869 +0000 UTC m=+736.858168470" Feb 17 16:15:26 crc kubenswrapper[4874]: I0217 16:15:26.568937 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-b69gh" podStartSLOduration=2.532379142 podStartE2EDuration="6.568921958s" podCreationTimestamp="2026-02-17 16:15:20 +0000 UTC" firstStartedPulling="2026-02-17 16:15:21.238292811 +0000 UTC m=+731.532681382" lastFinishedPulling="2026-02-17 16:15:25.274835627 +0000 UTC m=+735.569224198" observedRunningTime="2026-02-17 16:15:26.53913954 +0000 UTC m=+736.833528141" watchObservedRunningTime="2026-02-17 16:15:26.568921958 +0000 UTC m=+736.863310549" Feb 17 16:15:26 crc kubenswrapper[4874]: I0217 16:15:26.612229 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76bf7b6d45-qkpmn" podStartSLOduration=2.886027059 podStartE2EDuration="6.612209074s" podCreationTimestamp="2026-02-17 16:15:20 +0000 UTC" firstStartedPulling="2026-02-17 16:15:21.559039932 +0000 UTC m=+731.853428493" lastFinishedPulling="2026-02-17 16:15:25.285221937 +0000 UTC m=+735.579610508" observedRunningTime="2026-02-17 16:15:26.594960811 +0000 UTC m=+736.889349382" watchObservedRunningTime="2026-02-17 16:15:26.612209074 +0000 UTC m=+736.906597665" Feb 17 16:15:26 crc kubenswrapper[4874]: I0217 16:15:26.647316 4874 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.6771302649999997 podStartE2EDuration="6.647298585s" podCreationTimestamp="2026-02-17 16:15:20 +0000 UTC" firstStartedPulling="2026-02-17 16:15:22.43866199 +0000 UTC m=+732.733050551" lastFinishedPulling="2026-02-17 16:15:25.40883031 +0000 UTC m=+735.703218871" observedRunningTime="2026-02-17 16:15:26.642999507 +0000 UTC m=+736.937388068" watchObservedRunningTime="2026-02-17 16:15:26.647298585 +0000 UTC m=+736.941687146" Feb 17 16:15:26 crc kubenswrapper[4874]: I0217 16:15:26.648696 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.963744659 podStartE2EDuration="6.64868924s" podCreationTimestamp="2026-02-17 16:15:20 +0000 UTC" firstStartedPulling="2026-02-17 16:15:22.533629984 +0000 UTC m=+732.828018545" lastFinishedPulling="2026-02-17 16:15:25.218574565 +0000 UTC m=+735.512963126" observedRunningTime="2026-02-17 16:15:26.618844411 +0000 UTC m=+736.913232972" watchObservedRunningTime="2026-02-17 16:15:26.64868924 +0000 UTC m=+736.943077801" Feb 17 16:15:26 crc kubenswrapper[4874]: I0217 16:15:26.661861 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.979438993 podStartE2EDuration="6.66184565s" podCreationTimestamp="2026-02-17 16:15:20 +0000 UTC" firstStartedPulling="2026-02-17 16:15:22.592513052 +0000 UTC m=+732.886901613" lastFinishedPulling="2026-02-17 16:15:25.274919699 +0000 UTC m=+735.569308270" observedRunningTime="2026-02-17 16:15:26.659378388 +0000 UTC m=+736.953766949" watchObservedRunningTime="2026-02-17 16:15:26.66184565 +0000 UTC m=+736.956234211" Feb 17 16:15:28 crc kubenswrapper[4874]: I0217 16:15:28.552568 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-595f794c55-vbzmt" 
event={"ID":"641c0952-226b-4374-b247-f7e6a67f6cc8","Type":"ContainerStarted","Data":"5a0e68ce5bb454a57f2a868dc0ad111d8d2f1c3e16f4aaeff0c085f843d385cf"} Feb 17 16:15:28 crc kubenswrapper[4874]: I0217 16:15:28.553293 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-595f794c55-vbzmt" Feb 17 16:15:28 crc kubenswrapper[4874]: I0217 16:15:28.553331 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-595f794c55-vbzmt" Feb 17 16:15:28 crc kubenswrapper[4874]: I0217 16:15:28.559746 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-595f794c55-tvvjh" event={"ID":"7ac7d0ae-7505-401d-a9cc-49094832b8c7","Type":"ContainerStarted","Data":"2de4b887e9474ead758824c9d1ca32255e3ad50dcd30c255bc62f259e518277d"} Feb 17 16:15:28 crc kubenswrapper[4874]: I0217 16:15:28.560107 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-595f794c55-tvvjh" Feb 17 16:15:28 crc kubenswrapper[4874]: I0217 16:15:28.573605 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-595f794c55-tvvjh" Feb 17 16:15:28 crc kubenswrapper[4874]: I0217 16:15:28.574256 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-595f794c55-vbzmt" Feb 17 16:15:28 crc kubenswrapper[4874]: I0217 16:15:28.576518 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-595f794c55-vbzmt" Feb 17 16:15:28 crc kubenswrapper[4874]: I0217 16:15:28.605150 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-595f794c55-vbzmt" podStartSLOduration=3.4116844029999998 podStartE2EDuration="8.605116285s" podCreationTimestamp="2026-02-17 16:15:20 +0000 UTC" 
firstStartedPulling="2026-02-17 16:15:22.303288323 +0000 UTC m=+732.597676884" lastFinishedPulling="2026-02-17 16:15:27.496720205 +0000 UTC m=+737.791108766" observedRunningTime="2026-02-17 16:15:28.590897338 +0000 UTC m=+738.885285989" watchObservedRunningTime="2026-02-17 16:15:28.605116285 +0000 UTC m=+738.899504856" Feb 17 16:15:28 crc kubenswrapper[4874]: I0217 16:15:28.628887 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-595f794c55-tvvjh" podStartSLOduration=3.497146507 podStartE2EDuration="8.628862881s" podCreationTimestamp="2026-02-17 16:15:20 +0000 UTC" firstStartedPulling="2026-02-17 16:15:22.368325765 +0000 UTC m=+732.662714326" lastFinishedPulling="2026-02-17 16:15:27.500042139 +0000 UTC m=+737.794430700" observedRunningTime="2026-02-17 16:15:28.622273476 +0000 UTC m=+738.916662087" watchObservedRunningTime="2026-02-17 16:15:28.628862881 +0000 UTC m=+738.923251482" Feb 17 16:15:29 crc kubenswrapper[4874]: I0217 16:15:29.567552 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-595f794c55-tvvjh" Feb 17 16:15:29 crc kubenswrapper[4874]: I0217 16:15:29.581132 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-595f794c55-tvvjh" Feb 17 16:15:40 crc kubenswrapper[4874]: I0217 16:15:40.857952 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-b69gh" Feb 17 16:15:41 crc kubenswrapper[4874]: I0217 16:15:41.046379 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76bf7b6d45-qkpmn" Feb 17 16:15:41 crc kubenswrapper[4874]: I0217 16:15:41.258283 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-2p4zr" Feb 17 16:15:42 crc 
kubenswrapper[4874]: I0217 16:15:42.147788 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Feb 17 16:15:42 crc kubenswrapper[4874]: I0217 16:15:42.230506 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 16:15:42 crc kubenswrapper[4874]: I0217 16:15:42.309911 4874 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 17 16:15:42 crc kubenswrapper[4874]: I0217 16:15:42.310027 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="f0d776a8-9060-4156-931f-fcbe335a8488" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 17 16:15:43 crc kubenswrapper[4874]: I0217 16:15:43.776981 4874 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 17 16:15:52 crc kubenswrapper[4874]: I0217 16:15:52.309700 4874 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 17 16:15:52 crc kubenswrapper[4874]: I0217 16:15:52.310525 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="f0d776a8-9060-4156-931f-fcbe335a8488" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 17 16:15:57 crc kubenswrapper[4874]: I0217 16:15:57.725069 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:15:57 crc kubenswrapper[4874]: I0217 16:15:57.725556 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:16:02 crc kubenswrapper[4874]: I0217 16:16:02.309178 4874 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 17 16:16:02 crc kubenswrapper[4874]: I0217 16:16:02.309948 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="f0d776a8-9060-4156-931f-fcbe335a8488" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 17 16:16:12 crc kubenswrapper[4874]: I0217 16:16:12.309342 4874 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 17 16:16:12 crc kubenswrapper[4874]: I0217 16:16:12.311259 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="f0d776a8-9060-4156-931f-fcbe335a8488" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 17 16:16:22 crc kubenswrapper[4874]: I0217 16:16:22.314469 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:16:27 crc kubenswrapper[4874]: I0217 16:16:27.724732 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:16:27 crc kubenswrapper[4874]: I0217 16:16:27.725108 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.031690 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-ng4hd"] Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.035499 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-ng4hd" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.042196 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.042767 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.042837 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.042879 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.043290 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-qzwcz" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.047582 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.063557 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-ng4hd"] Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.112463 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-ng4hd"] Feb 17 16:16:39 crc kubenswrapper[4874]: E0217 16:16:39.116619 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-zntwt metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-zntwt sa-token trusted-ca]: context canceled" pod="openshift-logging/collector-ng4hd" 
podUID="3b9490d6-64f8-4bed-a855-24ba10002917" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.117420 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zntwt\" (UniqueName: \"kubernetes.io/projected/3b9490d6-64f8-4bed-a855-24ba10002917-kube-api-access-zntwt\") pod \"collector-ng4hd\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " pod="openshift-logging/collector-ng4hd" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.117448 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3b9490d6-64f8-4bed-a855-24ba10002917-trusted-ca\") pod \"collector-ng4hd\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " pod="openshift-logging/collector-ng4hd" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.117467 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/3b9490d6-64f8-4bed-a855-24ba10002917-entrypoint\") pod \"collector-ng4hd\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " pod="openshift-logging/collector-ng4hd" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.117486 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/3b9490d6-64f8-4bed-a855-24ba10002917-metrics\") pod \"collector-ng4hd\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " pod="openshift-logging/collector-ng4hd" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.117740 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/3b9490d6-64f8-4bed-a855-24ba10002917-collector-syslog-receiver\") pod \"collector-ng4hd\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " pod="openshift-logging/collector-ng4hd" Feb 
17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.117825 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3b9490d6-64f8-4bed-a855-24ba10002917-tmp\") pod \"collector-ng4hd\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " pod="openshift-logging/collector-ng4hd" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.117879 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/3b9490d6-64f8-4bed-a855-24ba10002917-collector-token\") pod \"collector-ng4hd\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " pod="openshift-logging/collector-ng4hd" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.117960 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/3b9490d6-64f8-4bed-a855-24ba10002917-datadir\") pod \"collector-ng4hd\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " pod="openshift-logging/collector-ng4hd" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.117991 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/3b9490d6-64f8-4bed-a855-24ba10002917-sa-token\") pod \"collector-ng4hd\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " pod="openshift-logging/collector-ng4hd" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.118015 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/3b9490d6-64f8-4bed-a855-24ba10002917-config-openshift-service-cacrt\") pod \"collector-ng4hd\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " pod="openshift-logging/collector-ng4hd" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.118050 4874 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b9490d6-64f8-4bed-a855-24ba10002917-config\") pod \"collector-ng4hd\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " pod="openshift-logging/collector-ng4hd" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.219983 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3b9490d6-64f8-4bed-a855-24ba10002917-tmp\") pod \"collector-ng4hd\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " pod="openshift-logging/collector-ng4hd" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.220104 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/3b9490d6-64f8-4bed-a855-24ba10002917-collector-token\") pod \"collector-ng4hd\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " pod="openshift-logging/collector-ng4hd" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.220223 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/3b9490d6-64f8-4bed-a855-24ba10002917-datadir\") pod \"collector-ng4hd\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " pod="openshift-logging/collector-ng4hd" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.220284 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/3b9490d6-64f8-4bed-a855-24ba10002917-sa-token\") pod \"collector-ng4hd\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " pod="openshift-logging/collector-ng4hd" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.220325 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: 
\"kubernetes.io/configmap/3b9490d6-64f8-4bed-a855-24ba10002917-config-openshift-service-cacrt\") pod \"collector-ng4hd\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " pod="openshift-logging/collector-ng4hd" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.220379 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b9490d6-64f8-4bed-a855-24ba10002917-config\") pod \"collector-ng4hd\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " pod="openshift-logging/collector-ng4hd" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.220418 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zntwt\" (UniqueName: \"kubernetes.io/projected/3b9490d6-64f8-4bed-a855-24ba10002917-kube-api-access-zntwt\") pod \"collector-ng4hd\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " pod="openshift-logging/collector-ng4hd" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.220450 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3b9490d6-64f8-4bed-a855-24ba10002917-trusted-ca\") pod \"collector-ng4hd\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " pod="openshift-logging/collector-ng4hd" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.220483 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/3b9490d6-64f8-4bed-a855-24ba10002917-entrypoint\") pod \"collector-ng4hd\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " pod="openshift-logging/collector-ng4hd" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.220520 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/3b9490d6-64f8-4bed-a855-24ba10002917-metrics\") pod \"collector-ng4hd\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " 
pod="openshift-logging/collector-ng4hd" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.220598 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/3b9490d6-64f8-4bed-a855-24ba10002917-collector-syslog-receiver\") pod \"collector-ng4hd\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " pod="openshift-logging/collector-ng4hd" Feb 17 16:16:39 crc kubenswrapper[4874]: E0217 16:16:39.220773 4874 secret.go:188] Couldn't get secret openshift-logging/collector-syslog-receiver: secret "collector-syslog-receiver" not found Feb 17 16:16:39 crc kubenswrapper[4874]: E0217 16:16:39.220852 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b9490d6-64f8-4bed-a855-24ba10002917-collector-syslog-receiver podName:3b9490d6-64f8-4bed-a855-24ba10002917 nodeName:}" failed. No retries permitted until 2026-02-17 16:16:39.720827376 +0000 UTC m=+810.015215957 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "collector-syslog-receiver" (UniqueName: "kubernetes.io/secret/3b9490d6-64f8-4bed-a855-24ba10002917-collector-syslog-receiver") pod "collector-ng4hd" (UID: "3b9490d6-64f8-4bed-a855-24ba10002917") : secret "collector-syslog-receiver" not found Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.221227 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/3b9490d6-64f8-4bed-a855-24ba10002917-datadir\") pod \"collector-ng4hd\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " pod="openshift-logging/collector-ng4hd" Feb 17 16:16:39 crc kubenswrapper[4874]: E0217 16:16:39.221253 4874 secret.go:188] Couldn't get secret openshift-logging/collector-metrics: secret "collector-metrics" not found Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.221868 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b9490d6-64f8-4bed-a855-24ba10002917-config\") pod \"collector-ng4hd\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " pod="openshift-logging/collector-ng4hd" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.221989 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/3b9490d6-64f8-4bed-a855-24ba10002917-entrypoint\") pod \"collector-ng4hd\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " pod="openshift-logging/collector-ng4hd" Feb 17 16:16:39 crc kubenswrapper[4874]: E0217 16:16:39.222089 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b9490d6-64f8-4bed-a855-24ba10002917-metrics podName:3b9490d6-64f8-4bed-a855-24ba10002917 nodeName:}" failed. No retries permitted until 2026-02-17 16:16:39.721325259 +0000 UTC m=+810.015713820 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics" (UniqueName: "kubernetes.io/secret/3b9490d6-64f8-4bed-a855-24ba10002917-metrics") pod "collector-ng4hd" (UID: "3b9490d6-64f8-4bed-a855-24ba10002917") : secret "collector-metrics" not found Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.222146 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/3b9490d6-64f8-4bed-a855-24ba10002917-config-openshift-service-cacrt\") pod \"collector-ng4hd\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " pod="openshift-logging/collector-ng4hd" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.222955 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3b9490d6-64f8-4bed-a855-24ba10002917-trusted-ca\") pod \"collector-ng4hd\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " pod="openshift-logging/collector-ng4hd" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.231433 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3b9490d6-64f8-4bed-a855-24ba10002917-tmp\") pod \"collector-ng4hd\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " pod="openshift-logging/collector-ng4hd" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.231624 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/3b9490d6-64f8-4bed-a855-24ba10002917-collector-token\") pod \"collector-ng4hd\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " pod="openshift-logging/collector-ng4hd" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.248638 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zntwt\" (UniqueName: \"kubernetes.io/projected/3b9490d6-64f8-4bed-a855-24ba10002917-kube-api-access-zntwt\") pod \"collector-ng4hd\" (UID: 
\"3b9490d6-64f8-4bed-a855-24ba10002917\") " pod="openshift-logging/collector-ng4hd" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.254406 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/3b9490d6-64f8-4bed-a855-24ba10002917-sa-token\") pod \"collector-ng4hd\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " pod="openshift-logging/collector-ng4hd" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.255650 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-ng4hd" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.293840 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-ng4hd" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.321526 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zntwt\" (UniqueName: \"kubernetes.io/projected/3b9490d6-64f8-4bed-a855-24ba10002917-kube-api-access-zntwt\") pod \"3b9490d6-64f8-4bed-a855-24ba10002917\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.321596 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3b9490d6-64f8-4bed-a855-24ba10002917-tmp\") pod \"3b9490d6-64f8-4bed-a855-24ba10002917\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.321626 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/3b9490d6-64f8-4bed-a855-24ba10002917-datadir\") pod \"3b9490d6-64f8-4bed-a855-24ba10002917\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.321688 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" 
(UniqueName: \"kubernetes.io/projected/3b9490d6-64f8-4bed-a855-24ba10002917-sa-token\") pod \"3b9490d6-64f8-4bed-a855-24ba10002917\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.321743 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/3b9490d6-64f8-4bed-a855-24ba10002917-config-openshift-service-cacrt\") pod \"3b9490d6-64f8-4bed-a855-24ba10002917\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.321798 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3b9490d6-64f8-4bed-a855-24ba10002917-trusted-ca\") pod \"3b9490d6-64f8-4bed-a855-24ba10002917\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.321867 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/3b9490d6-64f8-4bed-a855-24ba10002917-collector-token\") pod \"3b9490d6-64f8-4bed-a855-24ba10002917\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.321912 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/3b9490d6-64f8-4bed-a855-24ba10002917-entrypoint\") pod \"3b9490d6-64f8-4bed-a855-24ba10002917\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.321941 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b9490d6-64f8-4bed-a855-24ba10002917-config\") pod \"3b9490d6-64f8-4bed-a855-24ba10002917\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 
16:16:39.322113 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b9490d6-64f8-4bed-a855-24ba10002917-datadir" (OuterVolumeSpecName: "datadir") pod "3b9490d6-64f8-4bed-a855-24ba10002917" (UID: "3b9490d6-64f8-4bed-a855-24ba10002917"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.322316 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b9490d6-64f8-4bed-a855-24ba10002917-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "3b9490d6-64f8-4bed-a855-24ba10002917" (UID: "3b9490d6-64f8-4bed-a855-24ba10002917"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.322810 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b9490d6-64f8-4bed-a855-24ba10002917-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "3b9490d6-64f8-4bed-a855-24ba10002917" (UID: "3b9490d6-64f8-4bed-a855-24ba10002917"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.322821 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b9490d6-64f8-4bed-a855-24ba10002917-config" (OuterVolumeSpecName: "config") pod "3b9490d6-64f8-4bed-a855-24ba10002917" (UID: "3b9490d6-64f8-4bed-a855-24ba10002917"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.322963 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b9490d6-64f8-4bed-a855-24ba10002917-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "3b9490d6-64f8-4bed-a855-24ba10002917" (UID: "3b9490d6-64f8-4bed-a855-24ba10002917"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.324665 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b9490d6-64f8-4bed-a855-24ba10002917-kube-api-access-zntwt" (OuterVolumeSpecName: "kube-api-access-zntwt") pod "3b9490d6-64f8-4bed-a855-24ba10002917" (UID: "3b9490d6-64f8-4bed-a855-24ba10002917"). InnerVolumeSpecName "kube-api-access-zntwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.327898 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b9490d6-64f8-4bed-a855-24ba10002917-tmp" (OuterVolumeSpecName: "tmp") pod "3b9490d6-64f8-4bed-a855-24ba10002917" (UID: "3b9490d6-64f8-4bed-a855-24ba10002917"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.327936 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b9490d6-64f8-4bed-a855-24ba10002917-sa-token" (OuterVolumeSpecName: "sa-token") pod "3b9490d6-64f8-4bed-a855-24ba10002917" (UID: "3b9490d6-64f8-4bed-a855-24ba10002917"). InnerVolumeSpecName "sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.329760 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b9490d6-64f8-4bed-a855-24ba10002917-collector-token" (OuterVolumeSpecName: "collector-token") pod "3b9490d6-64f8-4bed-a855-24ba10002917" (UID: "3b9490d6-64f8-4bed-a855-24ba10002917"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.424464 4874 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/3b9490d6-64f8-4bed-a855-24ba10002917-entrypoint\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.424512 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b9490d6-64f8-4bed-a855-24ba10002917-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.424532 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zntwt\" (UniqueName: \"kubernetes.io/projected/3b9490d6-64f8-4bed-a855-24ba10002917-kube-api-access-zntwt\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.424552 4874 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3b9490d6-64f8-4bed-a855-24ba10002917-tmp\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.424572 4874 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/3b9490d6-64f8-4bed-a855-24ba10002917-datadir\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.424591 4874 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: 
\"kubernetes.io/projected/3b9490d6-64f8-4bed-a855-24ba10002917-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.424611 4874 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/3b9490d6-64f8-4bed-a855-24ba10002917-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.424630 4874 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3b9490d6-64f8-4bed-a855-24ba10002917-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.424648 4874 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/3b9490d6-64f8-4bed-a855-24ba10002917-collector-token\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.727957 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/3b9490d6-64f8-4bed-a855-24ba10002917-metrics\") pod \"collector-ng4hd\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " pod="openshift-logging/collector-ng4hd" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.728104 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/3b9490d6-64f8-4bed-a855-24ba10002917-collector-syslog-receiver\") pod \"collector-ng4hd\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " pod="openshift-logging/collector-ng4hd" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.731253 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/3b9490d6-64f8-4bed-a855-24ba10002917-metrics\") pod \"collector-ng4hd\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " 
pod="openshift-logging/collector-ng4hd" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.731355 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/3b9490d6-64f8-4bed-a855-24ba10002917-collector-syslog-receiver\") pod \"collector-ng4hd\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " pod="openshift-logging/collector-ng4hd" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.930680 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/3b9490d6-64f8-4bed-a855-24ba10002917-metrics\") pod \"3b9490d6-64f8-4bed-a855-24ba10002917\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.930832 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/3b9490d6-64f8-4bed-a855-24ba10002917-collector-syslog-receiver\") pod \"3b9490d6-64f8-4bed-a855-24ba10002917\" (UID: \"3b9490d6-64f8-4bed-a855-24ba10002917\") " Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.933881 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b9490d6-64f8-4bed-a855-24ba10002917-metrics" (OuterVolumeSpecName: "metrics") pod "3b9490d6-64f8-4bed-a855-24ba10002917" (UID: "3b9490d6-64f8-4bed-a855-24ba10002917"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:39 crc kubenswrapper[4874]: I0217 16:16:39.935546 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b9490d6-64f8-4bed-a855-24ba10002917-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "3b9490d6-64f8-4bed-a855-24ba10002917" (UID: "3b9490d6-64f8-4bed-a855-24ba10002917"). InnerVolumeSpecName "collector-syslog-receiver". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.033179 4874 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/3b9490d6-64f8-4bed-a855-24ba10002917-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.033226 4874 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/3b9490d6-64f8-4bed-a855-24ba10002917-metrics\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.268185 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-ng4hd" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.359378 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-ng4hd"] Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.365850 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-ng4hd"] Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.377545 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-qfbx5"] Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.379279 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-qfbx5" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.382240 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.384174 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.385208 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.386232 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.386399 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-qzwcz" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.388437 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-qfbx5"] Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.391828 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.465949 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b9490d6-64f8-4bed-a855-24ba10002917" path="/var/lib/kubelet/pods/3b9490d6-64f8-4bed-a855-24ba10002917/volumes" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.545262 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/57b97733-9959-41a8-b1bc-a8dae79c1892-trusted-ca\") pod \"collector-qfbx5\" (UID: \"57b97733-9959-41a8-b1bc-a8dae79c1892\") " pod="openshift-logging/collector-qfbx5" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.545306 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/57b97733-9959-41a8-b1bc-a8dae79c1892-datadir\") pod \"collector-qfbx5\" (UID: \"57b97733-9959-41a8-b1bc-a8dae79c1892\") " pod="openshift-logging/collector-qfbx5" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.545492 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/57b97733-9959-41a8-b1bc-a8dae79c1892-sa-token\") pod \"collector-qfbx5\" (UID: \"57b97733-9959-41a8-b1bc-a8dae79c1892\") " pod="openshift-logging/collector-qfbx5" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.545597 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/57b97733-9959-41a8-b1bc-a8dae79c1892-config-openshift-service-cacrt\") pod \"collector-qfbx5\" (UID: \"57b97733-9959-41a8-b1bc-a8dae79c1892\") " pod="openshift-logging/collector-qfbx5" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.545743 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57b97733-9959-41a8-b1bc-a8dae79c1892-config\") pod \"collector-qfbx5\" (UID: \"57b97733-9959-41a8-b1bc-a8dae79c1892\") " pod="openshift-logging/collector-qfbx5" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.545797 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/57b97733-9959-41a8-b1bc-a8dae79c1892-tmp\") pod \"collector-qfbx5\" (UID: \"57b97733-9959-41a8-b1bc-a8dae79c1892\") " pod="openshift-logging/collector-qfbx5" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.545879 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-hjw95\" (UniqueName: \"kubernetes.io/projected/57b97733-9959-41a8-b1bc-a8dae79c1892-kube-api-access-hjw95\") pod \"collector-qfbx5\" (UID: \"57b97733-9959-41a8-b1bc-a8dae79c1892\") " pod="openshift-logging/collector-qfbx5" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.545974 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/57b97733-9959-41a8-b1bc-a8dae79c1892-entrypoint\") pod \"collector-qfbx5\" (UID: \"57b97733-9959-41a8-b1bc-a8dae79c1892\") " pod="openshift-logging/collector-qfbx5" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.546021 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/57b97733-9959-41a8-b1bc-a8dae79c1892-collector-token\") pod \"collector-qfbx5\" (UID: \"57b97733-9959-41a8-b1bc-a8dae79c1892\") " pod="openshift-logging/collector-qfbx5" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.546154 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/57b97733-9959-41a8-b1bc-a8dae79c1892-metrics\") pod \"collector-qfbx5\" (UID: \"57b97733-9959-41a8-b1bc-a8dae79c1892\") " pod="openshift-logging/collector-qfbx5" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.546200 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/57b97733-9959-41a8-b1bc-a8dae79c1892-collector-syslog-receiver\") pod \"collector-qfbx5\" (UID: \"57b97733-9959-41a8-b1bc-a8dae79c1892\") " pod="openshift-logging/collector-qfbx5" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.651737 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/57b97733-9959-41a8-b1bc-a8dae79c1892-trusted-ca\") pod \"collector-qfbx5\" (UID: \"57b97733-9959-41a8-b1bc-a8dae79c1892\") " pod="openshift-logging/collector-qfbx5" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.652156 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/57b97733-9959-41a8-b1bc-a8dae79c1892-trusted-ca\") pod \"collector-qfbx5\" (UID: \"57b97733-9959-41a8-b1bc-a8dae79c1892\") " pod="openshift-logging/collector-qfbx5" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.652304 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/57b97733-9959-41a8-b1bc-a8dae79c1892-datadir\") pod \"collector-qfbx5\" (UID: \"57b97733-9959-41a8-b1bc-a8dae79c1892\") " pod="openshift-logging/collector-qfbx5" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.652425 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/57b97733-9959-41a8-b1bc-a8dae79c1892-sa-token\") pod \"collector-qfbx5\" (UID: \"57b97733-9959-41a8-b1bc-a8dae79c1892\") " pod="openshift-logging/collector-qfbx5" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.652441 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/57b97733-9959-41a8-b1bc-a8dae79c1892-datadir\") pod \"collector-qfbx5\" (UID: \"57b97733-9959-41a8-b1bc-a8dae79c1892\") " pod="openshift-logging/collector-qfbx5" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.652483 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/57b97733-9959-41a8-b1bc-a8dae79c1892-config-openshift-service-cacrt\") pod \"collector-qfbx5\" (UID: \"57b97733-9959-41a8-b1bc-a8dae79c1892\") " 
pod="openshift-logging/collector-qfbx5" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.652616 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57b97733-9959-41a8-b1bc-a8dae79c1892-config\") pod \"collector-qfbx5\" (UID: \"57b97733-9959-41a8-b1bc-a8dae79c1892\") " pod="openshift-logging/collector-qfbx5" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.652657 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/57b97733-9959-41a8-b1bc-a8dae79c1892-tmp\") pod \"collector-qfbx5\" (UID: \"57b97733-9959-41a8-b1bc-a8dae79c1892\") " pod="openshift-logging/collector-qfbx5" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.652769 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjw95\" (UniqueName: \"kubernetes.io/projected/57b97733-9959-41a8-b1bc-a8dae79c1892-kube-api-access-hjw95\") pod \"collector-qfbx5\" (UID: \"57b97733-9959-41a8-b1bc-a8dae79c1892\") " pod="openshift-logging/collector-qfbx5" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.652887 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/57b97733-9959-41a8-b1bc-a8dae79c1892-entrypoint\") pod \"collector-qfbx5\" (UID: \"57b97733-9959-41a8-b1bc-a8dae79c1892\") " pod="openshift-logging/collector-qfbx5" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.652929 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/57b97733-9959-41a8-b1bc-a8dae79c1892-collector-token\") pod \"collector-qfbx5\" (UID: \"57b97733-9959-41a8-b1bc-a8dae79c1892\") " pod="openshift-logging/collector-qfbx5" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.653036 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"metrics\" (UniqueName: \"kubernetes.io/secret/57b97733-9959-41a8-b1bc-a8dae79c1892-metrics\") pod \"collector-qfbx5\" (UID: \"57b97733-9959-41a8-b1bc-a8dae79c1892\") " pod="openshift-logging/collector-qfbx5" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.653117 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/57b97733-9959-41a8-b1bc-a8dae79c1892-collector-syslog-receiver\") pod \"collector-qfbx5\" (UID: \"57b97733-9959-41a8-b1bc-a8dae79c1892\") " pod="openshift-logging/collector-qfbx5" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.653485 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/57b97733-9959-41a8-b1bc-a8dae79c1892-config-openshift-service-cacrt\") pod \"collector-qfbx5\" (UID: \"57b97733-9959-41a8-b1bc-a8dae79c1892\") " pod="openshift-logging/collector-qfbx5" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.653708 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57b97733-9959-41a8-b1bc-a8dae79c1892-config\") pod \"collector-qfbx5\" (UID: \"57b97733-9959-41a8-b1bc-a8dae79c1892\") " pod="openshift-logging/collector-qfbx5" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.654436 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/57b97733-9959-41a8-b1bc-a8dae79c1892-entrypoint\") pod \"collector-qfbx5\" (UID: \"57b97733-9959-41a8-b1bc-a8dae79c1892\") " pod="openshift-logging/collector-qfbx5" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.661182 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/57b97733-9959-41a8-b1bc-a8dae79c1892-tmp\") pod \"collector-qfbx5\" (UID: 
\"57b97733-9959-41a8-b1bc-a8dae79c1892\") " pod="openshift-logging/collector-qfbx5" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.661236 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/57b97733-9959-41a8-b1bc-a8dae79c1892-collector-syslog-receiver\") pod \"collector-qfbx5\" (UID: \"57b97733-9959-41a8-b1bc-a8dae79c1892\") " pod="openshift-logging/collector-qfbx5" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.661293 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/57b97733-9959-41a8-b1bc-a8dae79c1892-metrics\") pod \"collector-qfbx5\" (UID: \"57b97733-9959-41a8-b1bc-a8dae79c1892\") " pod="openshift-logging/collector-qfbx5" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.665629 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/57b97733-9959-41a8-b1bc-a8dae79c1892-collector-token\") pod \"collector-qfbx5\" (UID: \"57b97733-9959-41a8-b1bc-a8dae79c1892\") " pod="openshift-logging/collector-qfbx5" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.700964 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/57b97733-9959-41a8-b1bc-a8dae79c1892-sa-token\") pod \"collector-qfbx5\" (UID: \"57b97733-9959-41a8-b1bc-a8dae79c1892\") " pod="openshift-logging/collector-qfbx5" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.701018 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjw95\" (UniqueName: \"kubernetes.io/projected/57b97733-9959-41a8-b1bc-a8dae79c1892-kube-api-access-hjw95\") pod \"collector-qfbx5\" (UID: \"57b97733-9959-41a8-b1bc-a8dae79c1892\") " pod="openshift-logging/collector-qfbx5" Feb 17 16:16:40 crc kubenswrapper[4874]: I0217 16:16:40.703765 4874 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-logging/collector-qfbx5" Feb 17 16:16:41 crc kubenswrapper[4874]: I0217 16:16:41.200980 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-qfbx5"] Feb 17 16:16:41 crc kubenswrapper[4874]: I0217 16:16:41.277806 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-qfbx5" event={"ID":"57b97733-9959-41a8-b1bc-a8dae79c1892","Type":"ContainerStarted","Data":"1eb0b98a6ab073b8c1218f897092824eaf4f6097917d1b9303fdd58dab8ef5a6"} Feb 17 16:16:48 crc kubenswrapper[4874]: I0217 16:16:48.342450 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-qfbx5" event={"ID":"57b97733-9959-41a8-b1bc-a8dae79c1892","Type":"ContainerStarted","Data":"c6d8b06cc542c744994ae5df6e87cb8d3ab3d9f4ffee7685f07eff1e78225d7a"} Feb 17 16:16:48 crc kubenswrapper[4874]: I0217 16:16:48.384606 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-qfbx5" podStartSLOduration=1.8552462680000001 podStartE2EDuration="8.384582944s" podCreationTimestamp="2026-02-17 16:16:40 +0000 UTC" firstStartedPulling="2026-02-17 16:16:41.205038176 +0000 UTC m=+811.499426767" lastFinishedPulling="2026-02-17 16:16:47.734374882 +0000 UTC m=+818.028763443" observedRunningTime="2026-02-17 16:16:48.377593249 +0000 UTC m=+818.671981840" watchObservedRunningTime="2026-02-17 16:16:48.384582944 +0000 UTC m=+818.678971515" Feb 17 16:16:57 crc kubenswrapper[4874]: I0217 16:16:57.724560 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:16:57 crc kubenswrapper[4874]: I0217 16:16:57.725201 4874 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:16:57 crc kubenswrapper[4874]: I0217 16:16:57.725285 4874 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" Feb 17 16:16:57 crc kubenswrapper[4874]: I0217 16:16:57.726186 4874 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5054786a168a12e52b8d968ed3ece839ff7c3185d6be0ffc79a31e785b1ebbdf"} pod="openshift-machine-config-operator/machine-config-daemon-cccdg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:16:57 crc kubenswrapper[4874]: I0217 16:16:57.726278 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" containerID="cri-o://5054786a168a12e52b8d968ed3ece839ff7c3185d6be0ffc79a31e785b1ebbdf" gracePeriod=600 Feb 17 16:16:58 crc kubenswrapper[4874]: I0217 16:16:58.434294 4874 generic.go:334] "Generic (PLEG): container finished" podID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerID="5054786a168a12e52b8d968ed3ece839ff7c3185d6be0ffc79a31e785b1ebbdf" exitCode=0 Feb 17 16:16:58 crc kubenswrapper[4874]: I0217 16:16:58.434410 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerDied","Data":"5054786a168a12e52b8d968ed3ece839ff7c3185d6be0ffc79a31e785b1ebbdf"} Feb 17 16:16:58 crc kubenswrapper[4874]: I0217 16:16:58.434988 4874 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerStarted","Data":"14d9fa4df39b7f49ac05cd101e3f5c3bf6c474d638afdc060d235e7fd1103377"} Feb 17 16:16:58 crc kubenswrapper[4874]: I0217 16:16:58.435017 4874 scope.go:117] "RemoveContainer" containerID="5c051c0004b244e4c0ce127d058c5599bf72e06e8786ebe01293d1051eeff494" Feb 17 16:17:18 crc kubenswrapper[4874]: I0217 16:17:18.246010 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecabtphm"] Feb 17 16:17:18 crc kubenswrapper[4874]: I0217 16:17:18.248257 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecabtphm" Feb 17 16:17:18 crc kubenswrapper[4874]: I0217 16:17:18.257550 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecabtphm"] Feb 17 16:17:18 crc kubenswrapper[4874]: I0217 16:17:18.258817 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 17 16:17:18 crc kubenswrapper[4874]: I0217 16:17:18.310068 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/614d03d4-1cdd-46f3-99fa-c6e4ec0bc851-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecabtphm\" (UID: \"614d03d4-1cdd-46f3-99fa-c6e4ec0bc851\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecabtphm" Feb 17 16:17:18 crc kubenswrapper[4874]: I0217 16:17:18.310305 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d9gz\" (UniqueName: 
\"kubernetes.io/projected/614d03d4-1cdd-46f3-99fa-c6e4ec0bc851-kube-api-access-2d9gz\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecabtphm\" (UID: \"614d03d4-1cdd-46f3-99fa-c6e4ec0bc851\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecabtphm" Feb 17 16:17:18 crc kubenswrapper[4874]: I0217 16:17:18.310417 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/614d03d4-1cdd-46f3-99fa-c6e4ec0bc851-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecabtphm\" (UID: \"614d03d4-1cdd-46f3-99fa-c6e4ec0bc851\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecabtphm" Feb 17 16:17:18 crc kubenswrapper[4874]: I0217 16:17:18.411466 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/614d03d4-1cdd-46f3-99fa-c6e4ec0bc851-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecabtphm\" (UID: \"614d03d4-1cdd-46f3-99fa-c6e4ec0bc851\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecabtphm" Feb 17 16:17:18 crc kubenswrapper[4874]: I0217 16:17:18.411631 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d9gz\" (UniqueName: \"kubernetes.io/projected/614d03d4-1cdd-46f3-99fa-c6e4ec0bc851-kube-api-access-2d9gz\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecabtphm\" (UID: \"614d03d4-1cdd-46f3-99fa-c6e4ec0bc851\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecabtphm" Feb 17 16:17:18 crc kubenswrapper[4874]: I0217 16:17:18.411672 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/614d03d4-1cdd-46f3-99fa-c6e4ec0bc851-bundle\") pod 
\"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecabtphm\" (UID: \"614d03d4-1cdd-46f3-99fa-c6e4ec0bc851\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecabtphm" Feb 17 16:17:18 crc kubenswrapper[4874]: I0217 16:17:18.412055 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/614d03d4-1cdd-46f3-99fa-c6e4ec0bc851-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecabtphm\" (UID: \"614d03d4-1cdd-46f3-99fa-c6e4ec0bc851\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecabtphm" Feb 17 16:17:18 crc kubenswrapper[4874]: I0217 16:17:18.412132 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/614d03d4-1cdd-46f3-99fa-c6e4ec0bc851-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecabtphm\" (UID: \"614d03d4-1cdd-46f3-99fa-c6e4ec0bc851\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecabtphm" Feb 17 16:17:18 crc kubenswrapper[4874]: I0217 16:17:18.434157 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d9gz\" (UniqueName: \"kubernetes.io/projected/614d03d4-1cdd-46f3-99fa-c6e4ec0bc851-kube-api-access-2d9gz\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecabtphm\" (UID: \"614d03d4-1cdd-46f3-99fa-c6e4ec0bc851\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecabtphm" Feb 17 16:17:18 crc kubenswrapper[4874]: I0217 16:17:18.569845 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecabtphm" Feb 17 16:17:19 crc kubenswrapper[4874]: I0217 16:17:19.107468 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecabtphm"] Feb 17 16:17:19 crc kubenswrapper[4874]: W0217 16:17:19.111295 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod614d03d4_1cdd_46f3_99fa_c6e4ec0bc851.slice/crio-d9d5b90bb40c6e657ddf91d41df7d30d5018df8751eb9cd4dc5c64e513f2f8ee WatchSource:0}: Error finding container d9d5b90bb40c6e657ddf91d41df7d30d5018df8751eb9cd4dc5c64e513f2f8ee: Status 404 returned error can't find the container with id d9d5b90bb40c6e657ddf91d41df7d30d5018df8751eb9cd4dc5c64e513f2f8ee Feb 17 16:17:19 crc kubenswrapper[4874]: I0217 16:17:19.626391 4874 generic.go:334] "Generic (PLEG): container finished" podID="614d03d4-1cdd-46f3-99fa-c6e4ec0bc851" containerID="7441d966223e1f600d31b79ebe30f2c24ae4ae577f73d03d0ef0fca5dec4476d" exitCode=0 Feb 17 16:17:19 crc kubenswrapper[4874]: I0217 16:17:19.626445 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecabtphm" event={"ID":"614d03d4-1cdd-46f3-99fa-c6e4ec0bc851","Type":"ContainerDied","Data":"7441d966223e1f600d31b79ebe30f2c24ae4ae577f73d03d0ef0fca5dec4476d"} Feb 17 16:17:19 crc kubenswrapper[4874]: I0217 16:17:19.626474 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecabtphm" event={"ID":"614d03d4-1cdd-46f3-99fa-c6e4ec0bc851","Type":"ContainerStarted","Data":"d9d5b90bb40c6e657ddf91d41df7d30d5018df8751eb9cd4dc5c64e513f2f8ee"} Feb 17 16:17:20 crc kubenswrapper[4874]: I0217 16:17:20.603969 4874 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-jcxqv"] Feb 17 16:17:20 crc kubenswrapper[4874]: I0217 16:17:20.605337 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jcxqv" Feb 17 16:17:20 crc kubenswrapper[4874]: I0217 16:17:20.625618 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jcxqv"] Feb 17 16:17:20 crc kubenswrapper[4874]: I0217 16:17:20.752199 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59de09e8-8e33-4a5d-b243-7a749402cef1-utilities\") pod \"redhat-operators-jcxqv\" (UID: \"59de09e8-8e33-4a5d-b243-7a749402cef1\") " pod="openshift-marketplace/redhat-operators-jcxqv" Feb 17 16:17:20 crc kubenswrapper[4874]: I0217 16:17:20.752273 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kktb\" (UniqueName: \"kubernetes.io/projected/59de09e8-8e33-4a5d-b243-7a749402cef1-kube-api-access-2kktb\") pod \"redhat-operators-jcxqv\" (UID: \"59de09e8-8e33-4a5d-b243-7a749402cef1\") " pod="openshift-marketplace/redhat-operators-jcxqv" Feb 17 16:17:20 crc kubenswrapper[4874]: I0217 16:17:20.752485 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59de09e8-8e33-4a5d-b243-7a749402cef1-catalog-content\") pod \"redhat-operators-jcxqv\" (UID: \"59de09e8-8e33-4a5d-b243-7a749402cef1\") " pod="openshift-marketplace/redhat-operators-jcxqv" Feb 17 16:17:20 crc kubenswrapper[4874]: I0217 16:17:20.853815 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59de09e8-8e33-4a5d-b243-7a749402cef1-utilities\") pod \"redhat-operators-jcxqv\" (UID: \"59de09e8-8e33-4a5d-b243-7a749402cef1\") " 
pod="openshift-marketplace/redhat-operators-jcxqv" Feb 17 16:17:20 crc kubenswrapper[4874]: I0217 16:17:20.853883 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kktb\" (UniqueName: \"kubernetes.io/projected/59de09e8-8e33-4a5d-b243-7a749402cef1-kube-api-access-2kktb\") pod \"redhat-operators-jcxqv\" (UID: \"59de09e8-8e33-4a5d-b243-7a749402cef1\") " pod="openshift-marketplace/redhat-operators-jcxqv" Feb 17 16:17:20 crc kubenswrapper[4874]: I0217 16:17:20.854000 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59de09e8-8e33-4a5d-b243-7a749402cef1-catalog-content\") pod \"redhat-operators-jcxqv\" (UID: \"59de09e8-8e33-4a5d-b243-7a749402cef1\") " pod="openshift-marketplace/redhat-operators-jcxqv" Feb 17 16:17:20 crc kubenswrapper[4874]: I0217 16:17:20.854268 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59de09e8-8e33-4a5d-b243-7a749402cef1-utilities\") pod \"redhat-operators-jcxqv\" (UID: \"59de09e8-8e33-4a5d-b243-7a749402cef1\") " pod="openshift-marketplace/redhat-operators-jcxqv" Feb 17 16:17:20 crc kubenswrapper[4874]: I0217 16:17:20.854401 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59de09e8-8e33-4a5d-b243-7a749402cef1-catalog-content\") pod \"redhat-operators-jcxqv\" (UID: \"59de09e8-8e33-4a5d-b243-7a749402cef1\") " pod="openshift-marketplace/redhat-operators-jcxqv" Feb 17 16:17:20 crc kubenswrapper[4874]: I0217 16:17:20.875329 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kktb\" (UniqueName: \"kubernetes.io/projected/59de09e8-8e33-4a5d-b243-7a749402cef1-kube-api-access-2kktb\") pod \"redhat-operators-jcxqv\" (UID: \"59de09e8-8e33-4a5d-b243-7a749402cef1\") " pod="openshift-marketplace/redhat-operators-jcxqv" Feb 
17 16:17:20 crc kubenswrapper[4874]: I0217 16:17:20.931673 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jcxqv" Feb 17 16:17:21 crc kubenswrapper[4874]: I0217 16:17:21.374576 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jcxqv"] Feb 17 16:17:21 crc kubenswrapper[4874]: W0217 16:17:21.380485 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59de09e8_8e33_4a5d_b243_7a749402cef1.slice/crio-bb6d47923442fbc9f08b0ebb2e29464de76477623e860f93d4ae8c529fb222b1 WatchSource:0}: Error finding container bb6d47923442fbc9f08b0ebb2e29464de76477623e860f93d4ae8c529fb222b1: Status 404 returned error can't find the container with id bb6d47923442fbc9f08b0ebb2e29464de76477623e860f93d4ae8c529fb222b1 Feb 17 16:17:21 crc kubenswrapper[4874]: I0217 16:17:21.640239 4874 generic.go:334] "Generic (PLEG): container finished" podID="614d03d4-1cdd-46f3-99fa-c6e4ec0bc851" containerID="d47b0d0dd1fd74bbe20638536b713569bc2d33c78e6b4c1dc16868d3f5475334" exitCode=0 Feb 17 16:17:21 crc kubenswrapper[4874]: I0217 16:17:21.640311 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecabtphm" event={"ID":"614d03d4-1cdd-46f3-99fa-c6e4ec0bc851","Type":"ContainerDied","Data":"d47b0d0dd1fd74bbe20638536b713569bc2d33c78e6b4c1dc16868d3f5475334"} Feb 17 16:17:21 crc kubenswrapper[4874]: I0217 16:17:21.643206 4874 generic.go:334] "Generic (PLEG): container finished" podID="59de09e8-8e33-4a5d-b243-7a749402cef1" containerID="5feb736aaef13bdd692868a4debfb42e7a7f1996ff39137ce0c301e6c1856f33" exitCode=0 Feb 17 16:17:21 crc kubenswrapper[4874]: I0217 16:17:21.643232 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jcxqv" 
event={"ID":"59de09e8-8e33-4a5d-b243-7a749402cef1","Type":"ContainerDied","Data":"5feb736aaef13bdd692868a4debfb42e7a7f1996ff39137ce0c301e6c1856f33"} Feb 17 16:17:21 crc kubenswrapper[4874]: I0217 16:17:21.643250 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jcxqv" event={"ID":"59de09e8-8e33-4a5d-b243-7a749402cef1","Type":"ContainerStarted","Data":"bb6d47923442fbc9f08b0ebb2e29464de76477623e860f93d4ae8c529fb222b1"} Feb 17 16:17:22 crc kubenswrapper[4874]: I0217 16:17:22.652796 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jcxqv" event={"ID":"59de09e8-8e33-4a5d-b243-7a749402cef1","Type":"ContainerStarted","Data":"351cd5dba863a0ac7ed1750cf699dc2f395a03441de0d4f0dbf569b7abdb7557"} Feb 17 16:17:22 crc kubenswrapper[4874]: I0217 16:17:22.663140 4874 generic.go:334] "Generic (PLEG): container finished" podID="614d03d4-1cdd-46f3-99fa-c6e4ec0bc851" containerID="d9a36f0a5304ee1be8f6338d1789369dea68a12607c4a5ba23040b5b8e07c347" exitCode=0 Feb 17 16:17:22 crc kubenswrapper[4874]: I0217 16:17:22.663191 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecabtphm" event={"ID":"614d03d4-1cdd-46f3-99fa-c6e4ec0bc851","Type":"ContainerDied","Data":"d9a36f0a5304ee1be8f6338d1789369dea68a12607c4a5ba23040b5b8e07c347"} Feb 17 16:17:23 crc kubenswrapper[4874]: I0217 16:17:23.671834 4874 generic.go:334] "Generic (PLEG): container finished" podID="59de09e8-8e33-4a5d-b243-7a749402cef1" containerID="351cd5dba863a0ac7ed1750cf699dc2f395a03441de0d4f0dbf569b7abdb7557" exitCode=0 Feb 17 16:17:23 crc kubenswrapper[4874]: I0217 16:17:23.671896 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jcxqv" event={"ID":"59de09e8-8e33-4a5d-b243-7a749402cef1","Type":"ContainerDied","Data":"351cd5dba863a0ac7ed1750cf699dc2f395a03441de0d4f0dbf569b7abdb7557"} Feb 17 16:17:23 crc 
kubenswrapper[4874]: I0217 16:17:23.968035 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecabtphm" Feb 17 16:17:24 crc kubenswrapper[4874]: I0217 16:17:24.105227 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/614d03d4-1cdd-46f3-99fa-c6e4ec0bc851-bundle\") pod \"614d03d4-1cdd-46f3-99fa-c6e4ec0bc851\" (UID: \"614d03d4-1cdd-46f3-99fa-c6e4ec0bc851\") " Feb 17 16:17:24 crc kubenswrapper[4874]: I0217 16:17:24.105390 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/614d03d4-1cdd-46f3-99fa-c6e4ec0bc851-util\") pod \"614d03d4-1cdd-46f3-99fa-c6e4ec0bc851\" (UID: \"614d03d4-1cdd-46f3-99fa-c6e4ec0bc851\") " Feb 17 16:17:24 crc kubenswrapper[4874]: I0217 16:17:24.105508 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d9gz\" (UniqueName: \"kubernetes.io/projected/614d03d4-1cdd-46f3-99fa-c6e4ec0bc851-kube-api-access-2d9gz\") pod \"614d03d4-1cdd-46f3-99fa-c6e4ec0bc851\" (UID: \"614d03d4-1cdd-46f3-99fa-c6e4ec0bc851\") " Feb 17 16:17:24 crc kubenswrapper[4874]: I0217 16:17:24.108027 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/614d03d4-1cdd-46f3-99fa-c6e4ec0bc851-bundle" (OuterVolumeSpecName: "bundle") pod "614d03d4-1cdd-46f3-99fa-c6e4ec0bc851" (UID: "614d03d4-1cdd-46f3-99fa-c6e4ec0bc851"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:17:24 crc kubenswrapper[4874]: I0217 16:17:24.114864 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/614d03d4-1cdd-46f3-99fa-c6e4ec0bc851-kube-api-access-2d9gz" (OuterVolumeSpecName: "kube-api-access-2d9gz") pod "614d03d4-1cdd-46f3-99fa-c6e4ec0bc851" (UID: "614d03d4-1cdd-46f3-99fa-c6e4ec0bc851"). InnerVolumeSpecName "kube-api-access-2d9gz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:24 crc kubenswrapper[4874]: I0217 16:17:24.135380 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/614d03d4-1cdd-46f3-99fa-c6e4ec0bc851-util" (OuterVolumeSpecName: "util") pod "614d03d4-1cdd-46f3-99fa-c6e4ec0bc851" (UID: "614d03d4-1cdd-46f3-99fa-c6e4ec0bc851"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:17:24 crc kubenswrapper[4874]: I0217 16:17:24.206681 4874 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/614d03d4-1cdd-46f3-99fa-c6e4ec0bc851-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:24 crc kubenswrapper[4874]: I0217 16:17:24.206712 4874 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/614d03d4-1cdd-46f3-99fa-c6e4ec0bc851-util\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:24 crc kubenswrapper[4874]: I0217 16:17:24.206721 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d9gz\" (UniqueName: \"kubernetes.io/projected/614d03d4-1cdd-46f3-99fa-c6e4ec0bc851-kube-api-access-2d9gz\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:24 crc kubenswrapper[4874]: I0217 16:17:24.679390 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecabtphm" 
event={"ID":"614d03d4-1cdd-46f3-99fa-c6e4ec0bc851","Type":"ContainerDied","Data":"d9d5b90bb40c6e657ddf91d41df7d30d5018df8751eb9cd4dc5c64e513f2f8ee"} Feb 17 16:17:24 crc kubenswrapper[4874]: I0217 16:17:24.679426 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9d5b90bb40c6e657ddf91d41df7d30d5018df8751eb9cd4dc5c64e513f2f8ee" Feb 17 16:17:24 crc kubenswrapper[4874]: I0217 16:17:24.679481 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecabtphm" Feb 17 16:17:24 crc kubenswrapper[4874]: I0217 16:17:24.683858 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jcxqv" event={"ID":"59de09e8-8e33-4a5d-b243-7a749402cef1","Type":"ContainerStarted","Data":"6773a3b86f5e281e40710226e0e7022a20627165f3801736e1f093810a656158"} Feb 17 16:17:24 crc kubenswrapper[4874]: I0217 16:17:24.708177 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jcxqv" podStartSLOduration=2.2671732430000002 podStartE2EDuration="4.708158845s" podCreationTimestamp="2026-02-17 16:17:20 +0000 UTC" firstStartedPulling="2026-02-17 16:17:21.644055532 +0000 UTC m=+851.938444093" lastFinishedPulling="2026-02-17 16:17:24.085041134 +0000 UTC m=+854.379429695" observedRunningTime="2026-02-17 16:17:24.705474679 +0000 UTC m=+854.999863250" watchObservedRunningTime="2026-02-17 16:17:24.708158845 +0000 UTC m=+855.002547426" Feb 17 16:17:27 crc kubenswrapper[4874]: I0217 16:17:27.784488 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-92lgl"] Feb 17 16:17:27 crc kubenswrapper[4874]: E0217 16:17:27.785129 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="614d03d4-1cdd-46f3-99fa-c6e4ec0bc851" containerName="pull" Feb 17 16:17:27 crc kubenswrapper[4874]: I0217 16:17:27.785147 4874 
state_mem.go:107] "Deleted CPUSet assignment" podUID="614d03d4-1cdd-46f3-99fa-c6e4ec0bc851" containerName="pull" Feb 17 16:17:27 crc kubenswrapper[4874]: E0217 16:17:27.785182 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="614d03d4-1cdd-46f3-99fa-c6e4ec0bc851" containerName="extract" Feb 17 16:17:27 crc kubenswrapper[4874]: I0217 16:17:27.785191 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="614d03d4-1cdd-46f3-99fa-c6e4ec0bc851" containerName="extract" Feb 17 16:17:27 crc kubenswrapper[4874]: E0217 16:17:27.785209 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="614d03d4-1cdd-46f3-99fa-c6e4ec0bc851" containerName="util" Feb 17 16:17:27 crc kubenswrapper[4874]: I0217 16:17:27.785217 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="614d03d4-1cdd-46f3-99fa-c6e4ec0bc851" containerName="util" Feb 17 16:17:27 crc kubenswrapper[4874]: I0217 16:17:27.785370 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="614d03d4-1cdd-46f3-99fa-c6e4ec0bc851" containerName="extract" Feb 17 16:17:27 crc kubenswrapper[4874]: I0217 16:17:27.786024 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-92lgl" Feb 17 16:17:27 crc kubenswrapper[4874]: I0217 16:17:27.789242 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 17 16:17:27 crc kubenswrapper[4874]: I0217 16:17:27.789350 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-fkmss" Feb 17 16:17:27 crc kubenswrapper[4874]: I0217 16:17:27.797525 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-92lgl"] Feb 17 16:17:27 crc kubenswrapper[4874]: I0217 16:17:27.798711 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 17 16:17:27 crc kubenswrapper[4874]: I0217 16:17:27.862892 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88nrq\" (UniqueName: \"kubernetes.io/projected/16090473-6fc6-45cd-a577-ed241b1e7c60-kube-api-access-88nrq\") pod \"nmstate-operator-694c9596b7-92lgl\" (UID: \"16090473-6fc6-45cd-a577-ed241b1e7c60\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-92lgl" Feb 17 16:17:27 crc kubenswrapper[4874]: I0217 16:17:27.964748 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88nrq\" (UniqueName: \"kubernetes.io/projected/16090473-6fc6-45cd-a577-ed241b1e7c60-kube-api-access-88nrq\") pod \"nmstate-operator-694c9596b7-92lgl\" (UID: \"16090473-6fc6-45cd-a577-ed241b1e7c60\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-92lgl" Feb 17 16:17:28 crc kubenswrapper[4874]: I0217 16:17:28.005994 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88nrq\" (UniqueName: \"kubernetes.io/projected/16090473-6fc6-45cd-a577-ed241b1e7c60-kube-api-access-88nrq\") pod \"nmstate-operator-694c9596b7-92lgl\" (UID: 
\"16090473-6fc6-45cd-a577-ed241b1e7c60\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-92lgl" Feb 17 16:17:28 crc kubenswrapper[4874]: I0217 16:17:28.107809 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-92lgl" Feb 17 16:17:28 crc kubenswrapper[4874]: W0217 16:17:28.571769 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16090473_6fc6_45cd_a577_ed241b1e7c60.slice/crio-b553977712e0e66ae022679b484c057e859d345df5a0276c47cf5f55528e2989 WatchSource:0}: Error finding container b553977712e0e66ae022679b484c057e859d345df5a0276c47cf5f55528e2989: Status 404 returned error can't find the container with id b553977712e0e66ae022679b484c057e859d345df5a0276c47cf5f55528e2989 Feb 17 16:17:28 crc kubenswrapper[4874]: I0217 16:17:28.572266 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-92lgl"] Feb 17 16:17:28 crc kubenswrapper[4874]: I0217 16:17:28.709527 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-92lgl" event={"ID":"16090473-6fc6-45cd-a577-ed241b1e7c60","Type":"ContainerStarted","Data":"b553977712e0e66ae022679b484c057e859d345df5a0276c47cf5f55528e2989"} Feb 17 16:17:30 crc kubenswrapper[4874]: I0217 16:17:30.932753 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jcxqv" Feb 17 16:17:30 crc kubenswrapper[4874]: I0217 16:17:30.932994 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jcxqv" Feb 17 16:17:31 crc kubenswrapper[4874]: I0217 16:17:31.015002 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jcxqv" Feb 17 16:17:31 crc kubenswrapper[4874]: I0217 16:17:31.787825 4874 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jcxqv" Feb 17 16:17:32 crc kubenswrapper[4874]: I0217 16:17:32.736058 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-92lgl" event={"ID":"16090473-6fc6-45cd-a577-ed241b1e7c60","Type":"ContainerStarted","Data":"6d986218d832613efefa10fe715533a0aa30102d15ec969df181a263878ebb88"} Feb 17 16:17:32 crc kubenswrapper[4874]: I0217 16:17:32.766724 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-92lgl" podStartSLOduration=2.576000241 podStartE2EDuration="5.766681851s" podCreationTimestamp="2026-02-17 16:17:27 +0000 UTC" firstStartedPulling="2026-02-17 16:17:28.575096648 +0000 UTC m=+858.869485209" lastFinishedPulling="2026-02-17 16:17:31.765778248 +0000 UTC m=+862.060166819" observedRunningTime="2026-02-17 16:17:32.761466871 +0000 UTC m=+863.055855432" watchObservedRunningTime="2026-02-17 16:17:32.766681851 +0000 UTC m=+863.061070412" Feb 17 16:17:33 crc kubenswrapper[4874]: I0217 16:17:33.396002 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jcxqv"] Feb 17 16:17:33 crc kubenswrapper[4874]: I0217 16:17:33.781348 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-gnnhx"] Feb 17 16:17:33 crc kubenswrapper[4874]: I0217 16:17:33.798543 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-gnnhx" Feb 17 16:17:33 crc kubenswrapper[4874]: I0217 16:17:33.803486 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-pxwvt" Feb 17 16:17:33 crc kubenswrapper[4874]: I0217 16:17:33.813024 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-kq2cl"] Feb 17 16:17:33 crc kubenswrapper[4874]: I0217 16:17:33.813973 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-kq2cl" Feb 17 16:17:33 crc kubenswrapper[4874]: I0217 16:17:33.815486 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 17 16:17:33 crc kubenswrapper[4874]: I0217 16:17:33.834169 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-gnnhx"] Feb 17 16:17:33 crc kubenswrapper[4874]: I0217 16:17:33.841889 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-njd2b"] Feb 17 16:17:33 crc kubenswrapper[4874]: I0217 16:17:33.842775 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-njd2b" Feb 17 16:17:33 crc kubenswrapper[4874]: I0217 16:17:33.851501 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-kq2cl"] Feb 17 16:17:33 crc kubenswrapper[4874]: I0217 16:17:33.921809 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-p7nj4"] Feb 17 16:17:33 crc kubenswrapper[4874]: I0217 16:17:33.925191 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-p7nj4" Feb 17 16:17:33 crc kubenswrapper[4874]: I0217 16:17:33.928475 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 17 16:17:33 crc kubenswrapper[4874]: I0217 16:17:33.928650 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-ghwxq" Feb 17 16:17:33 crc kubenswrapper[4874]: I0217 16:17:33.928693 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 17 16:17:33 crc kubenswrapper[4874]: I0217 16:17:33.935662 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-p7nj4"] Feb 17 16:17:33 crc kubenswrapper[4874]: I0217 16:17:33.955590 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/098dd26d-2e61-473f-bbe8-47be863f5b45-ovs-socket\") pod \"nmstate-handler-njd2b\" (UID: \"098dd26d-2e61-473f-bbe8-47be863f5b45\") " pod="openshift-nmstate/nmstate-handler-njd2b" Feb 17 16:17:33 crc kubenswrapper[4874]: I0217 16:17:33.955757 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fltv7\" (UniqueName: \"kubernetes.io/projected/1b31ad9f-374d-495a-85a8-161930a8dc23-kube-api-access-fltv7\") pod \"nmstate-webhook-866bcb46dc-kq2cl\" (UID: \"1b31ad9f-374d-495a-85a8-161930a8dc23\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-kq2cl" Feb 17 16:17:33 crc kubenswrapper[4874]: I0217 16:17:33.955799 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/1b31ad9f-374d-495a-85a8-161930a8dc23-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-kq2cl\" (UID: \"1b31ad9f-374d-495a-85a8-161930a8dc23\") " 
pod="openshift-nmstate/nmstate-webhook-866bcb46dc-kq2cl" Feb 17 16:17:33 crc kubenswrapper[4874]: I0217 16:17:33.955895 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xvk4\" (UniqueName: \"kubernetes.io/projected/098dd26d-2e61-473f-bbe8-47be863f5b45-kube-api-access-9xvk4\") pod \"nmstate-handler-njd2b\" (UID: \"098dd26d-2e61-473f-bbe8-47be863f5b45\") " pod="openshift-nmstate/nmstate-handler-njd2b" Feb 17 16:17:33 crc kubenswrapper[4874]: I0217 16:17:33.955937 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/098dd26d-2e61-473f-bbe8-47be863f5b45-nmstate-lock\") pod \"nmstate-handler-njd2b\" (UID: \"098dd26d-2e61-473f-bbe8-47be863f5b45\") " pod="openshift-nmstate/nmstate-handler-njd2b" Feb 17 16:17:33 crc kubenswrapper[4874]: I0217 16:17:33.955971 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/098dd26d-2e61-473f-bbe8-47be863f5b45-dbus-socket\") pod \"nmstate-handler-njd2b\" (UID: \"098dd26d-2e61-473f-bbe8-47be863f5b45\") " pod="openshift-nmstate/nmstate-handler-njd2b" Feb 17 16:17:33 crc kubenswrapper[4874]: I0217 16:17:33.956033 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsrcf\" (UniqueName: \"kubernetes.io/projected/1dd205b6-4b48-4e5c-8731-d4322d8eba49-kube-api-access-fsrcf\") pod \"nmstate-metrics-58c85c668d-gnnhx\" (UID: \"1dd205b6-4b48-4e5c-8731-d4322d8eba49\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-gnnhx" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.059201 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/1b31ad9f-374d-495a-85a8-161930a8dc23-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-kq2cl\" 
(UID: \"1b31ad9f-374d-495a-85a8-161930a8dc23\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-kq2cl" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.059272 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xvk4\" (UniqueName: \"kubernetes.io/projected/098dd26d-2e61-473f-bbe8-47be863f5b45-kube-api-access-9xvk4\") pod \"nmstate-handler-njd2b\" (UID: \"098dd26d-2e61-473f-bbe8-47be863f5b45\") " pod="openshift-nmstate/nmstate-handler-njd2b" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.059310 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/098dd26d-2e61-473f-bbe8-47be863f5b45-nmstate-lock\") pod \"nmstate-handler-njd2b\" (UID: \"098dd26d-2e61-473f-bbe8-47be863f5b45\") " pod="openshift-nmstate/nmstate-handler-njd2b" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.059341 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/098dd26d-2e61-473f-bbe8-47be863f5b45-dbus-socket\") pod \"nmstate-handler-njd2b\" (UID: \"098dd26d-2e61-473f-bbe8-47be863f5b45\") " pod="openshift-nmstate/nmstate-handler-njd2b" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.059385 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1c6543ed-090e-4099-931a-d82e47304681-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-p7nj4\" (UID: \"1c6543ed-090e-4099-931a-d82e47304681\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-p7nj4" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.059417 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsrcf\" (UniqueName: \"kubernetes.io/projected/1dd205b6-4b48-4e5c-8731-d4322d8eba49-kube-api-access-fsrcf\") pod 
\"nmstate-metrics-58c85c668d-gnnhx\" (UID: \"1dd205b6-4b48-4e5c-8731-d4322d8eba49\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-gnnhx" Feb 17 16:17:34 crc kubenswrapper[4874]: E0217 16:17:34.059435 4874 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.059445 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/098dd26d-2e61-473f-bbe8-47be863f5b45-nmstate-lock\") pod \"nmstate-handler-njd2b\" (UID: \"098dd26d-2e61-473f-bbe8-47be863f5b45\") " pod="openshift-nmstate/nmstate-handler-njd2b" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.059550 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/098dd26d-2e61-473f-bbe8-47be863f5b45-ovs-socket\") pod \"nmstate-handler-njd2b\" (UID: \"098dd26d-2e61-473f-bbe8-47be863f5b45\") " pod="openshift-nmstate/nmstate-handler-njd2b" Feb 17 16:17:34 crc kubenswrapper[4874]: E0217 16:17:34.059561 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b31ad9f-374d-495a-85a8-161930a8dc23-tls-key-pair podName:1b31ad9f-374d-495a-85a8-161930a8dc23 nodeName:}" failed. No retries permitted until 2026-02-17 16:17:34.559540871 +0000 UTC m=+864.853929432 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/1b31ad9f-374d-495a-85a8-161930a8dc23-tls-key-pair") pod "nmstate-webhook-866bcb46dc-kq2cl" (UID: "1b31ad9f-374d-495a-85a8-161930a8dc23") : secret "openshift-nmstate-webhook" not found Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.059606 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1c6543ed-090e-4099-931a-d82e47304681-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-p7nj4\" (UID: \"1c6543ed-090e-4099-931a-d82e47304681\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-p7nj4" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.059651 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/098dd26d-2e61-473f-bbe8-47be863f5b45-ovs-socket\") pod \"nmstate-handler-njd2b\" (UID: \"098dd26d-2e61-473f-bbe8-47be863f5b45\") " pod="openshift-nmstate/nmstate-handler-njd2b" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.059721 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k74l\" (UniqueName: \"kubernetes.io/projected/1c6543ed-090e-4099-931a-d82e47304681-kube-api-access-5k74l\") pod \"nmstate-console-plugin-5c78fc5d65-p7nj4\" (UID: \"1c6543ed-090e-4099-931a-d82e47304681\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-p7nj4" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.059711 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/098dd26d-2e61-473f-bbe8-47be863f5b45-dbus-socket\") pod \"nmstate-handler-njd2b\" (UID: \"098dd26d-2e61-473f-bbe8-47be863f5b45\") " pod="openshift-nmstate/nmstate-handler-njd2b" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.059754 4874 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fltv7\" (UniqueName: \"kubernetes.io/projected/1b31ad9f-374d-495a-85a8-161930a8dc23-kube-api-access-fltv7\") pod \"nmstate-webhook-866bcb46dc-kq2cl\" (UID: \"1b31ad9f-374d-495a-85a8-161930a8dc23\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-kq2cl" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.083317 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xvk4\" (UniqueName: \"kubernetes.io/projected/098dd26d-2e61-473f-bbe8-47be863f5b45-kube-api-access-9xvk4\") pod \"nmstate-handler-njd2b\" (UID: \"098dd26d-2e61-473f-bbe8-47be863f5b45\") " pod="openshift-nmstate/nmstate-handler-njd2b" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.086698 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fltv7\" (UniqueName: \"kubernetes.io/projected/1b31ad9f-374d-495a-85a8-161930a8dc23-kube-api-access-fltv7\") pod \"nmstate-webhook-866bcb46dc-kq2cl\" (UID: \"1b31ad9f-374d-495a-85a8-161930a8dc23\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-kq2cl" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.089706 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsrcf\" (UniqueName: \"kubernetes.io/projected/1dd205b6-4b48-4e5c-8731-d4322d8eba49-kube-api-access-fsrcf\") pod \"nmstate-metrics-58c85c668d-gnnhx\" (UID: \"1dd205b6-4b48-4e5c-8731-d4322d8eba49\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-gnnhx" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.102795 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5c65ff7679-2cmfs"] Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.103686 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5c65ff7679-2cmfs" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.124890 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5c65ff7679-2cmfs"] Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.130442 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-gnnhx" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.161473 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1c6543ed-090e-4099-931a-d82e47304681-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-p7nj4\" (UID: \"1c6543ed-090e-4099-931a-d82e47304681\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-p7nj4" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.161548 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1c6543ed-090e-4099-931a-d82e47304681-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-p7nj4\" (UID: \"1c6543ed-090e-4099-931a-d82e47304681\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-p7nj4" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.161592 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k74l\" (UniqueName: \"kubernetes.io/projected/1c6543ed-090e-4099-931a-d82e47304681-kube-api-access-5k74l\") pod \"nmstate-console-plugin-5c78fc5d65-p7nj4\" (UID: \"1c6543ed-090e-4099-931a-d82e47304681\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-p7nj4" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.163081 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1c6543ed-090e-4099-931a-d82e47304681-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-p7nj4\" 
(UID: \"1c6543ed-090e-4099-931a-d82e47304681\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-p7nj4" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.165146 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-njd2b" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.169685 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1c6543ed-090e-4099-931a-d82e47304681-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-p7nj4\" (UID: \"1c6543ed-090e-4099-931a-d82e47304681\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-p7nj4" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.194832 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k74l\" (UniqueName: \"kubernetes.io/projected/1c6543ed-090e-4099-931a-d82e47304681-kube-api-access-5k74l\") pod \"nmstate-console-plugin-5c78fc5d65-p7nj4\" (UID: \"1c6543ed-090e-4099-931a-d82e47304681\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-p7nj4" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.244102 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-p7nj4" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.268868 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d7336f40-57d5-4171-98ad-aeee272451ae-console-config\") pod \"console-5c65ff7679-2cmfs\" (UID: \"d7336f40-57d5-4171-98ad-aeee272451ae\") " pod="openshift-console/console-5c65ff7679-2cmfs" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.272410 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7336f40-57d5-4171-98ad-aeee272451ae-trusted-ca-bundle\") pod \"console-5c65ff7679-2cmfs\" (UID: \"d7336f40-57d5-4171-98ad-aeee272451ae\") " pod="openshift-console/console-5c65ff7679-2cmfs" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.272634 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d7336f40-57d5-4171-98ad-aeee272451ae-oauth-serving-cert\") pod \"console-5c65ff7679-2cmfs\" (UID: \"d7336f40-57d5-4171-98ad-aeee272451ae\") " pod="openshift-console/console-5c65ff7679-2cmfs" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.272677 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d7336f40-57d5-4171-98ad-aeee272451ae-console-oauth-config\") pod \"console-5c65ff7679-2cmfs\" (UID: \"d7336f40-57d5-4171-98ad-aeee272451ae\") " pod="openshift-console/console-5c65ff7679-2cmfs" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.273515 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg5w9\" (UniqueName: 
\"kubernetes.io/projected/d7336f40-57d5-4171-98ad-aeee272451ae-kube-api-access-vg5w9\") pod \"console-5c65ff7679-2cmfs\" (UID: \"d7336f40-57d5-4171-98ad-aeee272451ae\") " pod="openshift-console/console-5c65ff7679-2cmfs" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.273608 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7336f40-57d5-4171-98ad-aeee272451ae-console-serving-cert\") pod \"console-5c65ff7679-2cmfs\" (UID: \"d7336f40-57d5-4171-98ad-aeee272451ae\") " pod="openshift-console/console-5c65ff7679-2cmfs" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.273635 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7336f40-57d5-4171-98ad-aeee272451ae-service-ca\") pod \"console-5c65ff7679-2cmfs\" (UID: \"d7336f40-57d5-4171-98ad-aeee272451ae\") " pod="openshift-console/console-5c65ff7679-2cmfs" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.376954 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7336f40-57d5-4171-98ad-aeee272451ae-trusted-ca-bundle\") pod \"console-5c65ff7679-2cmfs\" (UID: \"d7336f40-57d5-4171-98ad-aeee272451ae\") " pod="openshift-console/console-5c65ff7679-2cmfs" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.377029 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d7336f40-57d5-4171-98ad-aeee272451ae-oauth-serving-cert\") pod \"console-5c65ff7679-2cmfs\" (UID: \"d7336f40-57d5-4171-98ad-aeee272451ae\") " pod="openshift-console/console-5c65ff7679-2cmfs" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.377061 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" 
(UniqueName: \"kubernetes.io/secret/d7336f40-57d5-4171-98ad-aeee272451ae-console-oauth-config\") pod \"console-5c65ff7679-2cmfs\" (UID: \"d7336f40-57d5-4171-98ad-aeee272451ae\") " pod="openshift-console/console-5c65ff7679-2cmfs" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.377167 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg5w9\" (UniqueName: \"kubernetes.io/projected/d7336f40-57d5-4171-98ad-aeee272451ae-kube-api-access-vg5w9\") pod \"console-5c65ff7679-2cmfs\" (UID: \"d7336f40-57d5-4171-98ad-aeee272451ae\") " pod="openshift-console/console-5c65ff7679-2cmfs" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.377223 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7336f40-57d5-4171-98ad-aeee272451ae-console-serving-cert\") pod \"console-5c65ff7679-2cmfs\" (UID: \"d7336f40-57d5-4171-98ad-aeee272451ae\") " pod="openshift-console/console-5c65ff7679-2cmfs" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.377249 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7336f40-57d5-4171-98ad-aeee272451ae-service-ca\") pod \"console-5c65ff7679-2cmfs\" (UID: \"d7336f40-57d5-4171-98ad-aeee272451ae\") " pod="openshift-console/console-5c65ff7679-2cmfs" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.377313 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d7336f40-57d5-4171-98ad-aeee272451ae-console-config\") pod \"console-5c65ff7679-2cmfs\" (UID: \"d7336f40-57d5-4171-98ad-aeee272451ae\") " pod="openshift-console/console-5c65ff7679-2cmfs" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.379198 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/d7336f40-57d5-4171-98ad-aeee272451ae-trusted-ca-bundle\") pod \"console-5c65ff7679-2cmfs\" (UID: \"d7336f40-57d5-4171-98ad-aeee272451ae\") " pod="openshift-console/console-5c65ff7679-2cmfs" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.380192 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d7336f40-57d5-4171-98ad-aeee272451ae-oauth-serving-cert\") pod \"console-5c65ff7679-2cmfs\" (UID: \"d7336f40-57d5-4171-98ad-aeee272451ae\") " pod="openshift-console/console-5c65ff7679-2cmfs" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.380755 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d7336f40-57d5-4171-98ad-aeee272451ae-console-config\") pod \"console-5c65ff7679-2cmfs\" (UID: \"d7336f40-57d5-4171-98ad-aeee272451ae\") " pod="openshift-console/console-5c65ff7679-2cmfs" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.387289 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7336f40-57d5-4171-98ad-aeee272451ae-service-ca\") pod \"console-5c65ff7679-2cmfs\" (UID: \"d7336f40-57d5-4171-98ad-aeee272451ae\") " pod="openshift-console/console-5c65ff7679-2cmfs" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.404521 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7336f40-57d5-4171-98ad-aeee272451ae-console-serving-cert\") pod \"console-5c65ff7679-2cmfs\" (UID: \"d7336f40-57d5-4171-98ad-aeee272451ae\") " pod="openshift-console/console-5c65ff7679-2cmfs" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.405004 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/d7336f40-57d5-4171-98ad-aeee272451ae-console-oauth-config\") pod \"console-5c65ff7679-2cmfs\" (UID: \"d7336f40-57d5-4171-98ad-aeee272451ae\") " pod="openshift-console/console-5c65ff7679-2cmfs" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.427766 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg5w9\" (UniqueName: \"kubernetes.io/projected/d7336f40-57d5-4171-98ad-aeee272451ae-kube-api-access-vg5w9\") pod \"console-5c65ff7679-2cmfs\" (UID: \"d7336f40-57d5-4171-98ad-aeee272451ae\") " pod="openshift-console/console-5c65ff7679-2cmfs" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.548461 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5c65ff7679-2cmfs" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.581109 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/1b31ad9f-374d-495a-85a8-161930a8dc23-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-kq2cl\" (UID: \"1b31ad9f-374d-495a-85a8-161930a8dc23\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-kq2cl" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.584603 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/1b31ad9f-374d-495a-85a8-161930a8dc23-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-kq2cl\" (UID: \"1b31ad9f-374d-495a-85a8-161930a8dc23\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-kq2cl" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.627391 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-gnnhx"] Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.751303 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-kq2cl" Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.753812 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-njd2b" event={"ID":"098dd26d-2e61-473f-bbe8-47be863f5b45","Type":"ContainerStarted","Data":"fdfa1dddade3989c31b1c5d11f2c85153594870466b9801e4febf29d56a49204"} Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.760256 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jcxqv" podUID="59de09e8-8e33-4a5d-b243-7a749402cef1" containerName="registry-server" containerID="cri-o://6773a3b86f5e281e40710226e0e7022a20627165f3801736e1f093810a656158" gracePeriod=2 Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.760349 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-gnnhx" event={"ID":"1dd205b6-4b48-4e5c-8731-d4322d8eba49","Type":"ContainerStarted","Data":"f7f910cb80cae4d7ac799e3db5394817a1a8b3e70ff1730c7ecd485cde47f24f"} Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.764392 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-p7nj4"] Feb 17 16:17:34 crc kubenswrapper[4874]: W0217 16:17:34.780353 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c6543ed_090e_4099_931a_d82e47304681.slice/crio-c73f9731d2e2897c7374fa920693e376fb30faad58427c1d126a129c7b21410d WatchSource:0}: Error finding container c73f9731d2e2897c7374fa920693e376fb30faad58427c1d126a129c7b21410d: Status 404 returned error can't find the container with id c73f9731d2e2897c7374fa920693e376fb30faad58427c1d126a129c7b21410d Feb 17 16:17:34 crc kubenswrapper[4874]: I0217 16:17:34.991448 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5c65ff7679-2cmfs"] Feb 17 16:17:35 
crc kubenswrapper[4874]: I0217 16:17:35.163943 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-kq2cl"] Feb 17 16:17:35 crc kubenswrapper[4874]: W0217 16:17:35.172325 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b31ad9f_374d_495a_85a8_161930a8dc23.slice/crio-559f9b75c2da58ba4dfb228071825d4e382bef00bca632418592081b92b07b5a WatchSource:0}: Error finding container 559f9b75c2da58ba4dfb228071825d4e382bef00bca632418592081b92b07b5a: Status 404 returned error can't find the container with id 559f9b75c2da58ba4dfb228071825d4e382bef00bca632418592081b92b07b5a Feb 17 16:17:35 crc kubenswrapper[4874]: I0217 16:17:35.214448 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jcxqv" Feb 17 16:17:35 crc kubenswrapper[4874]: I0217 16:17:35.392402 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kktb\" (UniqueName: \"kubernetes.io/projected/59de09e8-8e33-4a5d-b243-7a749402cef1-kube-api-access-2kktb\") pod \"59de09e8-8e33-4a5d-b243-7a749402cef1\" (UID: \"59de09e8-8e33-4a5d-b243-7a749402cef1\") " Feb 17 16:17:35 crc kubenswrapper[4874]: I0217 16:17:35.392916 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59de09e8-8e33-4a5d-b243-7a749402cef1-catalog-content\") pod \"59de09e8-8e33-4a5d-b243-7a749402cef1\" (UID: \"59de09e8-8e33-4a5d-b243-7a749402cef1\") " Feb 17 16:17:35 crc kubenswrapper[4874]: I0217 16:17:35.392976 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59de09e8-8e33-4a5d-b243-7a749402cef1-utilities\") pod \"59de09e8-8e33-4a5d-b243-7a749402cef1\" (UID: \"59de09e8-8e33-4a5d-b243-7a749402cef1\") " Feb 17 16:17:35 crc 
kubenswrapper[4874]: I0217 16:17:35.394056 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59de09e8-8e33-4a5d-b243-7a749402cef1-utilities" (OuterVolumeSpecName: "utilities") pod "59de09e8-8e33-4a5d-b243-7a749402cef1" (UID: "59de09e8-8e33-4a5d-b243-7a749402cef1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:17:35 crc kubenswrapper[4874]: I0217 16:17:35.398190 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59de09e8-8e33-4a5d-b243-7a749402cef1-kube-api-access-2kktb" (OuterVolumeSpecName: "kube-api-access-2kktb") pod "59de09e8-8e33-4a5d-b243-7a749402cef1" (UID: "59de09e8-8e33-4a5d-b243-7a749402cef1"). InnerVolumeSpecName "kube-api-access-2kktb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:35 crc kubenswrapper[4874]: I0217 16:17:35.494262 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kktb\" (UniqueName: \"kubernetes.io/projected/59de09e8-8e33-4a5d-b243-7a749402cef1-kube-api-access-2kktb\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:35 crc kubenswrapper[4874]: I0217 16:17:35.494294 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59de09e8-8e33-4a5d-b243-7a749402cef1-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:35 crc kubenswrapper[4874]: I0217 16:17:35.557651 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59de09e8-8e33-4a5d-b243-7a749402cef1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "59de09e8-8e33-4a5d-b243-7a749402cef1" (UID: "59de09e8-8e33-4a5d-b243-7a749402cef1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:17:35 crc kubenswrapper[4874]: I0217 16:17:35.596694 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59de09e8-8e33-4a5d-b243-7a749402cef1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:35 crc kubenswrapper[4874]: I0217 16:17:35.776078 4874 generic.go:334] "Generic (PLEG): container finished" podID="59de09e8-8e33-4a5d-b243-7a749402cef1" containerID="6773a3b86f5e281e40710226e0e7022a20627165f3801736e1f093810a656158" exitCode=0 Feb 17 16:17:35 crc kubenswrapper[4874]: I0217 16:17:35.776149 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jcxqv" event={"ID":"59de09e8-8e33-4a5d-b243-7a749402cef1","Type":"ContainerDied","Data":"6773a3b86f5e281e40710226e0e7022a20627165f3801736e1f093810a656158"} Feb 17 16:17:35 crc kubenswrapper[4874]: I0217 16:17:35.776161 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jcxqv" Feb 17 16:17:35 crc kubenswrapper[4874]: I0217 16:17:35.776232 4874 scope.go:117] "RemoveContainer" containerID="6773a3b86f5e281e40710226e0e7022a20627165f3801736e1f093810a656158" Feb 17 16:17:35 crc kubenswrapper[4874]: I0217 16:17:35.776220 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jcxqv" event={"ID":"59de09e8-8e33-4a5d-b243-7a749402cef1","Type":"ContainerDied","Data":"bb6d47923442fbc9f08b0ebb2e29464de76477623e860f93d4ae8c529fb222b1"} Feb 17 16:17:35 crc kubenswrapper[4874]: I0217 16:17:35.778648 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-kq2cl" event={"ID":"1b31ad9f-374d-495a-85a8-161930a8dc23","Type":"ContainerStarted","Data":"559f9b75c2da58ba4dfb228071825d4e382bef00bca632418592081b92b07b5a"} Feb 17 16:17:35 crc kubenswrapper[4874]: I0217 16:17:35.779638 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-p7nj4" event={"ID":"1c6543ed-090e-4099-931a-d82e47304681","Type":"ContainerStarted","Data":"c73f9731d2e2897c7374fa920693e376fb30faad58427c1d126a129c7b21410d"} Feb 17 16:17:35 crc kubenswrapper[4874]: I0217 16:17:35.781433 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5c65ff7679-2cmfs" event={"ID":"d7336f40-57d5-4171-98ad-aeee272451ae","Type":"ContainerStarted","Data":"9eb33c0d9d7a1d5e1c496608f5c31d21e83eb99d5eeefbfd6cf3bbe554232b6d"} Feb 17 16:17:35 crc kubenswrapper[4874]: I0217 16:17:35.781474 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5c65ff7679-2cmfs" event={"ID":"d7336f40-57d5-4171-98ad-aeee272451ae","Type":"ContainerStarted","Data":"0c15f7def32914b5ccaf617330241249ba391f1417f997cd05e637b68d2ee2a7"} Feb 17 16:17:35 crc kubenswrapper[4874]: I0217 16:17:35.793659 4874 scope.go:117] "RemoveContainer" 
containerID="351cd5dba863a0ac7ed1750cf699dc2f395a03441de0d4f0dbf569b7abdb7557" Feb 17 16:17:35 crc kubenswrapper[4874]: I0217 16:17:35.805080 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5c65ff7679-2cmfs" podStartSLOduration=1.805060025 podStartE2EDuration="1.805060025s" podCreationTimestamp="2026-02-17 16:17:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:17:35.803162538 +0000 UTC m=+866.097551129" watchObservedRunningTime="2026-02-17 16:17:35.805060025 +0000 UTC m=+866.099448586" Feb 17 16:17:35 crc kubenswrapper[4874]: I0217 16:17:35.827433 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jcxqv"] Feb 17 16:17:35 crc kubenswrapper[4874]: I0217 16:17:35.846790 4874 scope.go:117] "RemoveContainer" containerID="5feb736aaef13bdd692868a4debfb42e7a7f1996ff39137ce0c301e6c1856f33" Feb 17 16:17:35 crc kubenswrapper[4874]: I0217 16:17:35.848841 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jcxqv"] Feb 17 16:17:35 crc kubenswrapper[4874]: I0217 16:17:35.866699 4874 scope.go:117] "RemoveContainer" containerID="6773a3b86f5e281e40710226e0e7022a20627165f3801736e1f093810a656158" Feb 17 16:17:35 crc kubenswrapper[4874]: E0217 16:17:35.868240 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6773a3b86f5e281e40710226e0e7022a20627165f3801736e1f093810a656158\": container with ID starting with 6773a3b86f5e281e40710226e0e7022a20627165f3801736e1f093810a656158 not found: ID does not exist" containerID="6773a3b86f5e281e40710226e0e7022a20627165f3801736e1f093810a656158" Feb 17 16:17:35 crc kubenswrapper[4874]: I0217 16:17:35.868409 4874 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6773a3b86f5e281e40710226e0e7022a20627165f3801736e1f093810a656158"} err="failed to get container status \"6773a3b86f5e281e40710226e0e7022a20627165f3801736e1f093810a656158\": rpc error: code = NotFound desc = could not find container \"6773a3b86f5e281e40710226e0e7022a20627165f3801736e1f093810a656158\": container with ID starting with 6773a3b86f5e281e40710226e0e7022a20627165f3801736e1f093810a656158 not found: ID does not exist" Feb 17 16:17:35 crc kubenswrapper[4874]: I0217 16:17:35.868532 4874 scope.go:117] "RemoveContainer" containerID="351cd5dba863a0ac7ed1750cf699dc2f395a03441de0d4f0dbf569b7abdb7557" Feb 17 16:17:35 crc kubenswrapper[4874]: E0217 16:17:35.868946 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"351cd5dba863a0ac7ed1750cf699dc2f395a03441de0d4f0dbf569b7abdb7557\": container with ID starting with 351cd5dba863a0ac7ed1750cf699dc2f395a03441de0d4f0dbf569b7abdb7557 not found: ID does not exist" containerID="351cd5dba863a0ac7ed1750cf699dc2f395a03441de0d4f0dbf569b7abdb7557" Feb 17 16:17:35 crc kubenswrapper[4874]: I0217 16:17:35.869105 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"351cd5dba863a0ac7ed1750cf699dc2f395a03441de0d4f0dbf569b7abdb7557"} err="failed to get container status \"351cd5dba863a0ac7ed1750cf699dc2f395a03441de0d4f0dbf569b7abdb7557\": rpc error: code = NotFound desc = could not find container \"351cd5dba863a0ac7ed1750cf699dc2f395a03441de0d4f0dbf569b7abdb7557\": container with ID starting with 351cd5dba863a0ac7ed1750cf699dc2f395a03441de0d4f0dbf569b7abdb7557 not found: ID does not exist" Feb 17 16:17:35 crc kubenswrapper[4874]: I0217 16:17:35.869214 4874 scope.go:117] "RemoveContainer" containerID="5feb736aaef13bdd692868a4debfb42e7a7f1996ff39137ce0c301e6c1856f33" Feb 17 16:17:35 crc kubenswrapper[4874]: E0217 16:17:35.869670 4874 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5feb736aaef13bdd692868a4debfb42e7a7f1996ff39137ce0c301e6c1856f33\": container with ID starting with 5feb736aaef13bdd692868a4debfb42e7a7f1996ff39137ce0c301e6c1856f33 not found: ID does not exist" containerID="5feb736aaef13bdd692868a4debfb42e7a7f1996ff39137ce0c301e6c1856f33" Feb 17 16:17:35 crc kubenswrapper[4874]: I0217 16:17:35.869793 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5feb736aaef13bdd692868a4debfb42e7a7f1996ff39137ce0c301e6c1856f33"} err="failed to get container status \"5feb736aaef13bdd692868a4debfb42e7a7f1996ff39137ce0c301e6c1856f33\": rpc error: code = NotFound desc = could not find container \"5feb736aaef13bdd692868a4debfb42e7a7f1996ff39137ce0c301e6c1856f33\": container with ID starting with 5feb736aaef13bdd692868a4debfb42e7a7f1996ff39137ce0c301e6c1856f33 not found: ID does not exist" Feb 17 16:17:36 crc kubenswrapper[4874]: I0217 16:17:36.466390 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59de09e8-8e33-4a5d-b243-7a749402cef1" path="/var/lib/kubelet/pods/59de09e8-8e33-4a5d-b243-7a749402cef1/volumes" Feb 17 16:17:37 crc kubenswrapper[4874]: I0217 16:17:37.811292 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-gnnhx" event={"ID":"1dd205b6-4b48-4e5c-8731-d4322d8eba49","Type":"ContainerStarted","Data":"37d20661b142b5f12d40126d1326904804ca839f73af4768becba94ec50d8ce8"} Feb 17 16:17:37 crc kubenswrapper[4874]: I0217 16:17:37.813869 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-p7nj4" event={"ID":"1c6543ed-090e-4099-931a-d82e47304681","Type":"ContainerStarted","Data":"c7d51b9a5dd1099f9daa07a94f23f58d0e27c049fb7cabf5262d8eae1a3ff09a"} Feb 17 16:17:37 crc kubenswrapper[4874]: I0217 16:17:37.816652 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-handler-njd2b" event={"ID":"098dd26d-2e61-473f-bbe8-47be863f5b45","Type":"ContainerStarted","Data":"23e711ec7ae36e6d0db3040486523bd68df6fd3809b259f05f23bcae17a6f999"} Feb 17 16:17:37 crc kubenswrapper[4874]: I0217 16:17:37.816741 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-njd2b" Feb 17 16:17:37 crc kubenswrapper[4874]: I0217 16:17:37.818435 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-kq2cl" event={"ID":"1b31ad9f-374d-495a-85a8-161930a8dc23","Type":"ContainerStarted","Data":"c5edfbe4031321d17bd92dcd42a994c0fe9d188ed10789947f0a13fceeea5e51"} Feb 17 16:17:37 crc kubenswrapper[4874]: I0217 16:17:37.818834 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-kq2cl" Feb 17 16:17:37 crc kubenswrapper[4874]: I0217 16:17:37.871023 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-p7nj4" podStartSLOduration=2.384650304 podStartE2EDuration="4.871005334s" podCreationTimestamp="2026-02-17 16:17:33 +0000 UTC" firstStartedPulling="2026-02-17 16:17:34.782040453 +0000 UTC m=+865.076429014" lastFinishedPulling="2026-02-17 16:17:37.268395453 +0000 UTC m=+867.562784044" observedRunningTime="2026-02-17 16:17:37.837344987 +0000 UTC m=+868.131733548" watchObservedRunningTime="2026-02-17 16:17:37.871005334 +0000 UTC m=+868.165393905" Feb 17 16:17:37 crc kubenswrapper[4874]: I0217 16:17:37.874568 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-kq2cl" podStartSLOduration=2.781321425 podStartE2EDuration="4.874558893s" podCreationTimestamp="2026-02-17 16:17:33 +0000 UTC" firstStartedPulling="2026-02-17 16:17:35.175258448 +0000 UTC m=+865.469647009" lastFinishedPulling="2026-02-17 16:17:37.268495886 +0000 UTC 
m=+867.562884477" observedRunningTime="2026-02-17 16:17:37.868716327 +0000 UTC m=+868.163104918" watchObservedRunningTime="2026-02-17 16:17:37.874558893 +0000 UTC m=+868.168947474" Feb 17 16:17:37 crc kubenswrapper[4874]: I0217 16:17:37.896529 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-njd2b" podStartSLOduration=1.836997859 podStartE2EDuration="4.896500418s" podCreationTimestamp="2026-02-17 16:17:33 +0000 UTC" firstStartedPulling="2026-02-17 16:17:34.210893574 +0000 UTC m=+864.505282135" lastFinishedPulling="2026-02-17 16:17:37.270396133 +0000 UTC m=+867.564784694" observedRunningTime="2026-02-17 16:17:37.882335866 +0000 UTC m=+868.176724437" watchObservedRunningTime="2026-02-17 16:17:37.896500418 +0000 UTC m=+868.190889009" Feb 17 16:17:39 crc kubenswrapper[4874]: I0217 16:17:39.840564 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-gnnhx" event={"ID":"1dd205b6-4b48-4e5c-8731-d4322d8eba49","Type":"ContainerStarted","Data":"a5d5327b38587f6d14dca2a70daa23a0fc9e15c96d6b016255b6a03788c5a9fb"} Feb 17 16:17:39 crc kubenswrapper[4874]: I0217 16:17:39.857532 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-gnnhx" podStartSLOduration=1.8728585899999999 podStartE2EDuration="6.857511239s" podCreationTimestamp="2026-02-17 16:17:33 +0000 UTC" firstStartedPulling="2026-02-17 16:17:34.639726245 +0000 UTC m=+864.934114806" lastFinishedPulling="2026-02-17 16:17:39.624378894 +0000 UTC m=+869.918767455" observedRunningTime="2026-02-17 16:17:39.854578897 +0000 UTC m=+870.148967488" watchObservedRunningTime="2026-02-17 16:17:39.857511239 +0000 UTC m=+870.151899820" Feb 17 16:17:44 crc kubenswrapper[4874]: I0217 16:17:44.189668 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-njd2b" Feb 17 16:17:44 crc kubenswrapper[4874]: 
I0217 16:17:44.549288 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5c65ff7679-2cmfs" Feb 17 16:17:44 crc kubenswrapper[4874]: I0217 16:17:44.549424 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5c65ff7679-2cmfs" Feb 17 16:17:44 crc kubenswrapper[4874]: I0217 16:17:44.556815 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5c65ff7679-2cmfs" Feb 17 16:17:44 crc kubenswrapper[4874]: I0217 16:17:44.904184 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5c65ff7679-2cmfs" Feb 17 16:17:44 crc kubenswrapper[4874]: I0217 16:17:44.970886 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-65c4f977c4-rpvsb"] Feb 17 16:17:54 crc kubenswrapper[4874]: I0217 16:17:54.759639 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-kq2cl" Feb 17 16:18:01 crc kubenswrapper[4874]: I0217 16:18:01.497602 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kn5sx"] Feb 17 16:18:01 crc kubenswrapper[4874]: E0217 16:18:01.498983 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59de09e8-8e33-4a5d-b243-7a749402cef1" containerName="extract-content" Feb 17 16:18:01 crc kubenswrapper[4874]: I0217 16:18:01.499008 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="59de09e8-8e33-4a5d-b243-7a749402cef1" containerName="extract-content" Feb 17 16:18:01 crc kubenswrapper[4874]: E0217 16:18:01.499038 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59de09e8-8e33-4a5d-b243-7a749402cef1" containerName="registry-server" Feb 17 16:18:01 crc kubenswrapper[4874]: I0217 16:18:01.499050 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="59de09e8-8e33-4a5d-b243-7a749402cef1" 
containerName="registry-server" Feb 17 16:18:01 crc kubenswrapper[4874]: E0217 16:18:01.499109 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59de09e8-8e33-4a5d-b243-7a749402cef1" containerName="extract-utilities" Feb 17 16:18:01 crc kubenswrapper[4874]: I0217 16:18:01.499121 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="59de09e8-8e33-4a5d-b243-7a749402cef1" containerName="extract-utilities" Feb 17 16:18:01 crc kubenswrapper[4874]: I0217 16:18:01.499358 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="59de09e8-8e33-4a5d-b243-7a749402cef1" containerName="registry-server" Feb 17 16:18:01 crc kubenswrapper[4874]: I0217 16:18:01.505050 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kn5sx" Feb 17 16:18:01 crc kubenswrapper[4874]: I0217 16:18:01.520663 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kn5sx"] Feb 17 16:18:01 crc kubenswrapper[4874]: I0217 16:18:01.597599 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0c653ba-ebfc-4722-b53e-b7e5b89af9b3-utilities\") pod \"redhat-marketplace-kn5sx\" (UID: \"f0c653ba-ebfc-4722-b53e-b7e5b89af9b3\") " pod="openshift-marketplace/redhat-marketplace-kn5sx" Feb 17 16:18:01 crc kubenswrapper[4874]: I0217 16:18:01.597829 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56vc7\" (UniqueName: \"kubernetes.io/projected/f0c653ba-ebfc-4722-b53e-b7e5b89af9b3-kube-api-access-56vc7\") pod \"redhat-marketplace-kn5sx\" (UID: \"f0c653ba-ebfc-4722-b53e-b7e5b89af9b3\") " pod="openshift-marketplace/redhat-marketplace-kn5sx" Feb 17 16:18:01 crc kubenswrapper[4874]: I0217 16:18:01.597869 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0c653ba-ebfc-4722-b53e-b7e5b89af9b3-catalog-content\") pod \"redhat-marketplace-kn5sx\" (UID: \"f0c653ba-ebfc-4722-b53e-b7e5b89af9b3\") " pod="openshift-marketplace/redhat-marketplace-kn5sx" Feb 17 16:18:01 crc kubenswrapper[4874]: I0217 16:18:01.699111 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56vc7\" (UniqueName: \"kubernetes.io/projected/f0c653ba-ebfc-4722-b53e-b7e5b89af9b3-kube-api-access-56vc7\") pod \"redhat-marketplace-kn5sx\" (UID: \"f0c653ba-ebfc-4722-b53e-b7e5b89af9b3\") " pod="openshift-marketplace/redhat-marketplace-kn5sx" Feb 17 16:18:01 crc kubenswrapper[4874]: I0217 16:18:01.699157 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0c653ba-ebfc-4722-b53e-b7e5b89af9b3-catalog-content\") pod \"redhat-marketplace-kn5sx\" (UID: \"f0c653ba-ebfc-4722-b53e-b7e5b89af9b3\") " pod="openshift-marketplace/redhat-marketplace-kn5sx" Feb 17 16:18:01 crc kubenswrapper[4874]: I0217 16:18:01.699207 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0c653ba-ebfc-4722-b53e-b7e5b89af9b3-utilities\") pod \"redhat-marketplace-kn5sx\" (UID: \"f0c653ba-ebfc-4722-b53e-b7e5b89af9b3\") " pod="openshift-marketplace/redhat-marketplace-kn5sx" Feb 17 16:18:01 crc kubenswrapper[4874]: I0217 16:18:01.699605 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0c653ba-ebfc-4722-b53e-b7e5b89af9b3-utilities\") pod \"redhat-marketplace-kn5sx\" (UID: \"f0c653ba-ebfc-4722-b53e-b7e5b89af9b3\") " pod="openshift-marketplace/redhat-marketplace-kn5sx" Feb 17 16:18:01 crc kubenswrapper[4874]: I0217 16:18:01.700062 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f0c653ba-ebfc-4722-b53e-b7e5b89af9b3-catalog-content\") pod \"redhat-marketplace-kn5sx\" (UID: \"f0c653ba-ebfc-4722-b53e-b7e5b89af9b3\") " pod="openshift-marketplace/redhat-marketplace-kn5sx" Feb 17 16:18:01 crc kubenswrapper[4874]: I0217 16:18:01.717902 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56vc7\" (UniqueName: \"kubernetes.io/projected/f0c653ba-ebfc-4722-b53e-b7e5b89af9b3-kube-api-access-56vc7\") pod \"redhat-marketplace-kn5sx\" (UID: \"f0c653ba-ebfc-4722-b53e-b7e5b89af9b3\") " pod="openshift-marketplace/redhat-marketplace-kn5sx" Feb 17 16:18:01 crc kubenswrapper[4874]: I0217 16:18:01.826384 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kn5sx" Feb 17 16:18:02 crc kubenswrapper[4874]: I0217 16:18:02.313923 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kn5sx"] Feb 17 16:18:03 crc kubenswrapper[4874]: I0217 16:18:03.055500 4874 generic.go:334] "Generic (PLEG): container finished" podID="f0c653ba-ebfc-4722-b53e-b7e5b89af9b3" containerID="7c49a934ddd879622b2fd586afde31f4381abb80b2235fa610b3b69ab4fdf7b4" exitCode=0 Feb 17 16:18:03 crc kubenswrapper[4874]: I0217 16:18:03.055558 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kn5sx" event={"ID":"f0c653ba-ebfc-4722-b53e-b7e5b89af9b3","Type":"ContainerDied","Data":"7c49a934ddd879622b2fd586afde31f4381abb80b2235fa610b3b69ab4fdf7b4"} Feb 17 16:18:03 crc kubenswrapper[4874]: I0217 16:18:03.055945 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kn5sx" event={"ID":"f0c653ba-ebfc-4722-b53e-b7e5b89af9b3","Type":"ContainerStarted","Data":"7e3f85de02996540b8fc5103a1ede29b2cca6a00f80ec4790c14127feec0accf"} Feb 17 16:18:03 crc kubenswrapper[4874]: I0217 16:18:03.058701 4874 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Feb 17 16:18:06 crc kubenswrapper[4874]: I0217 16:18:06.084802 4874 generic.go:334] "Generic (PLEG): container finished" podID="f0c653ba-ebfc-4722-b53e-b7e5b89af9b3" containerID="3a0cd73184324c74969956d3c739eb4897aa421e73ebf05adf37b598d51036bf" exitCode=0 Feb 17 16:18:06 crc kubenswrapper[4874]: I0217 16:18:06.084902 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kn5sx" event={"ID":"f0c653ba-ebfc-4722-b53e-b7e5b89af9b3","Type":"ContainerDied","Data":"3a0cd73184324c74969956d3c739eb4897aa421e73ebf05adf37b598d51036bf"} Feb 17 16:18:07 crc kubenswrapper[4874]: I0217 16:18:07.055778 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7ldl9"] Feb 17 16:18:07 crc kubenswrapper[4874]: I0217 16:18:07.059790 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7ldl9" Feb 17 16:18:07 crc kubenswrapper[4874]: I0217 16:18:07.068062 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7ldl9"] Feb 17 16:18:07 crc kubenswrapper[4874]: I0217 16:18:07.088004 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6-catalog-content\") pod \"community-operators-7ldl9\" (UID: \"8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6\") " pod="openshift-marketplace/community-operators-7ldl9" Feb 17 16:18:07 crc kubenswrapper[4874]: I0217 16:18:07.088061 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6-utilities\") pod \"community-operators-7ldl9\" (UID: \"8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6\") " pod="openshift-marketplace/community-operators-7ldl9" Feb 17 16:18:07 crc 
kubenswrapper[4874]: I0217 16:18:07.088139 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n4z8\" (UniqueName: \"kubernetes.io/projected/8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6-kube-api-access-9n4z8\") pod \"community-operators-7ldl9\" (UID: \"8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6\") " pod="openshift-marketplace/community-operators-7ldl9" Feb 17 16:18:07 crc kubenswrapper[4874]: I0217 16:18:07.111371 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kn5sx" event={"ID":"f0c653ba-ebfc-4722-b53e-b7e5b89af9b3","Type":"ContainerStarted","Data":"7ec34baeb21f1d0ab1737038188cb2189e2876bdb7f1cbe32c6d9678e2d5ffa8"} Feb 17 16:18:07 crc kubenswrapper[4874]: I0217 16:18:07.152321 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kn5sx" podStartSLOduration=2.489583013 podStartE2EDuration="6.152297379s" podCreationTimestamp="2026-02-17 16:18:01 +0000 UTC" firstStartedPulling="2026-02-17 16:18:03.058469326 +0000 UTC m=+893.352857887" lastFinishedPulling="2026-02-17 16:18:06.721183692 +0000 UTC m=+897.015572253" observedRunningTime="2026-02-17 16:18:07.139048039 +0000 UTC m=+897.433436600" watchObservedRunningTime="2026-02-17 16:18:07.152297379 +0000 UTC m=+897.446685940" Feb 17 16:18:07 crc kubenswrapper[4874]: I0217 16:18:07.189229 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6-catalog-content\") pod \"community-operators-7ldl9\" (UID: \"8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6\") " pod="openshift-marketplace/community-operators-7ldl9" Feb 17 16:18:07 crc kubenswrapper[4874]: I0217 16:18:07.189307 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6-utilities\") pod \"community-operators-7ldl9\" (UID: \"8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6\") " pod="openshift-marketplace/community-operators-7ldl9" Feb 17 16:18:07 crc kubenswrapper[4874]: I0217 16:18:07.189592 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n4z8\" (UniqueName: \"kubernetes.io/projected/8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6-kube-api-access-9n4z8\") pod \"community-operators-7ldl9\" (UID: \"8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6\") " pod="openshift-marketplace/community-operators-7ldl9" Feb 17 16:18:07 crc kubenswrapper[4874]: I0217 16:18:07.190663 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6-utilities\") pod \"community-operators-7ldl9\" (UID: \"8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6\") " pod="openshift-marketplace/community-operators-7ldl9" Feb 17 16:18:07 crc kubenswrapper[4874]: I0217 16:18:07.190936 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6-catalog-content\") pod \"community-operators-7ldl9\" (UID: \"8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6\") " pod="openshift-marketplace/community-operators-7ldl9" Feb 17 16:18:07 crc kubenswrapper[4874]: I0217 16:18:07.262163 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n4z8\" (UniqueName: \"kubernetes.io/projected/8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6-kube-api-access-9n4z8\") pod \"community-operators-7ldl9\" (UID: \"8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6\") " pod="openshift-marketplace/community-operators-7ldl9" Feb 17 16:18:07 crc kubenswrapper[4874]: I0217 16:18:07.388310 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7ldl9" Feb 17 16:18:07 crc kubenswrapper[4874]: I0217 16:18:07.712261 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7ldl9"] Feb 17 16:18:08 crc kubenswrapper[4874]: I0217 16:18:08.141882 4874 generic.go:334] "Generic (PLEG): container finished" podID="8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6" containerID="0768ac2c5e086f598a26fdd2c5f1c2e00fb9d0a002326a9aba46671924efe161" exitCode=0 Feb 17 16:18:08 crc kubenswrapper[4874]: I0217 16:18:08.143482 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ldl9" event={"ID":"8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6","Type":"ContainerDied","Data":"0768ac2c5e086f598a26fdd2c5f1c2e00fb9d0a002326a9aba46671924efe161"} Feb 17 16:18:08 crc kubenswrapper[4874]: I0217 16:18:08.143508 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ldl9" event={"ID":"8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6","Type":"ContainerStarted","Data":"656f3335a138c60a6542c4ee29035e61cc75a5fe3c8e6e7f721128643d54ccb4"} Feb 17 16:18:09 crc kubenswrapper[4874]: I0217 16:18:09.167377 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ldl9" event={"ID":"8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6","Type":"ContainerStarted","Data":"dc46b3a8eaf038e3362df32848338e1b2b156cb7275d3e9e33d1b006d76747cc"} Feb 17 16:18:10 crc kubenswrapper[4874]: I0217 16:18:10.040221 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-65c4f977c4-rpvsb" podUID="07fc5262-d078-4ff8-aa96-460615fbd47d" containerName="console" containerID="cri-o://eaeb4006cf5dbd34f13e3d518abca994f130b48a5bae2789ca84822437be86ec" gracePeriod=15 Feb 17 16:18:10 crc kubenswrapper[4874]: I0217 16:18:10.181497 4874 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-65c4f977c4-rpvsb_07fc5262-d078-4ff8-aa96-460615fbd47d/console/0.log" Feb 17 16:18:10 crc kubenswrapper[4874]: I0217 16:18:10.181557 4874 generic.go:334] "Generic (PLEG): container finished" podID="07fc5262-d078-4ff8-aa96-460615fbd47d" containerID="eaeb4006cf5dbd34f13e3d518abca994f130b48a5bae2789ca84822437be86ec" exitCode=2 Feb 17 16:18:10 crc kubenswrapper[4874]: I0217 16:18:10.181663 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-65c4f977c4-rpvsb" event={"ID":"07fc5262-d078-4ff8-aa96-460615fbd47d","Type":"ContainerDied","Data":"eaeb4006cf5dbd34f13e3d518abca994f130b48a5bae2789ca84822437be86ec"} Feb 17 16:18:10 crc kubenswrapper[4874]: I0217 16:18:10.184012 4874 generic.go:334] "Generic (PLEG): container finished" podID="8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6" containerID="dc46b3a8eaf038e3362df32848338e1b2b156cb7275d3e9e33d1b006d76747cc" exitCode=0 Feb 17 16:18:10 crc kubenswrapper[4874]: I0217 16:18:10.184060 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ldl9" event={"ID":"8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6","Type":"ContainerDied","Data":"dc46b3a8eaf038e3362df32848338e1b2b156cb7275d3e9e33d1b006d76747cc"} Feb 17 16:18:10 crc kubenswrapper[4874]: I0217 16:18:10.506230 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-65c4f977c4-rpvsb_07fc5262-d078-4ff8-aa96-460615fbd47d/console/0.log" Feb 17 16:18:10 crc kubenswrapper[4874]: I0217 16:18:10.506804 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-65c4f977c4-rpvsb" Feb 17 16:18:10 crc kubenswrapper[4874]: I0217 16:18:10.567648 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/07fc5262-d078-4ff8-aa96-460615fbd47d-oauth-serving-cert\") pod \"07fc5262-d078-4ff8-aa96-460615fbd47d\" (UID: \"07fc5262-d078-4ff8-aa96-460615fbd47d\") " Feb 17 16:18:10 crc kubenswrapper[4874]: I0217 16:18:10.567690 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/07fc5262-d078-4ff8-aa96-460615fbd47d-console-config\") pod \"07fc5262-d078-4ff8-aa96-460615fbd47d\" (UID: \"07fc5262-d078-4ff8-aa96-460615fbd47d\") " Feb 17 16:18:10 crc kubenswrapper[4874]: I0217 16:18:10.567742 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/07fc5262-d078-4ff8-aa96-460615fbd47d-service-ca\") pod \"07fc5262-d078-4ff8-aa96-460615fbd47d\" (UID: \"07fc5262-d078-4ff8-aa96-460615fbd47d\") " Feb 17 16:18:10 crc kubenswrapper[4874]: I0217 16:18:10.567864 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/07fc5262-d078-4ff8-aa96-460615fbd47d-console-oauth-config\") pod \"07fc5262-d078-4ff8-aa96-460615fbd47d\" (UID: \"07fc5262-d078-4ff8-aa96-460615fbd47d\") " Feb 17 16:18:10 crc kubenswrapper[4874]: I0217 16:18:10.567906 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/07fc5262-d078-4ff8-aa96-460615fbd47d-console-serving-cert\") pod \"07fc5262-d078-4ff8-aa96-460615fbd47d\" (UID: \"07fc5262-d078-4ff8-aa96-460615fbd47d\") " Feb 17 16:18:10 crc kubenswrapper[4874]: I0217 16:18:10.567942 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07fc5262-d078-4ff8-aa96-460615fbd47d-trusted-ca-bundle\") pod \"07fc5262-d078-4ff8-aa96-460615fbd47d\" (UID: \"07fc5262-d078-4ff8-aa96-460615fbd47d\") " Feb 17 16:18:10 crc kubenswrapper[4874]: I0217 16:18:10.567991 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2gbs\" (UniqueName: \"kubernetes.io/projected/07fc5262-d078-4ff8-aa96-460615fbd47d-kube-api-access-z2gbs\") pod \"07fc5262-d078-4ff8-aa96-460615fbd47d\" (UID: \"07fc5262-d078-4ff8-aa96-460615fbd47d\") " Feb 17 16:18:10 crc kubenswrapper[4874]: I0217 16:18:10.571456 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07fc5262-d078-4ff8-aa96-460615fbd47d-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "07fc5262-d078-4ff8-aa96-460615fbd47d" (UID: "07fc5262-d078-4ff8-aa96-460615fbd47d"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:10 crc kubenswrapper[4874]: I0217 16:18:10.573138 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07fc5262-d078-4ff8-aa96-460615fbd47d-console-config" (OuterVolumeSpecName: "console-config") pod "07fc5262-d078-4ff8-aa96-460615fbd47d" (UID: "07fc5262-d078-4ff8-aa96-460615fbd47d"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:10 crc kubenswrapper[4874]: I0217 16:18:10.573167 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07fc5262-d078-4ff8-aa96-460615fbd47d-service-ca" (OuterVolumeSpecName: "service-ca") pod "07fc5262-d078-4ff8-aa96-460615fbd47d" (UID: "07fc5262-d078-4ff8-aa96-460615fbd47d"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:10 crc kubenswrapper[4874]: I0217 16:18:10.573404 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07fc5262-d078-4ff8-aa96-460615fbd47d-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "07fc5262-d078-4ff8-aa96-460615fbd47d" (UID: "07fc5262-d078-4ff8-aa96-460615fbd47d"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:10 crc kubenswrapper[4874]: I0217 16:18:10.575268 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07fc5262-d078-4ff8-aa96-460615fbd47d-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "07fc5262-d078-4ff8-aa96-460615fbd47d" (UID: "07fc5262-d078-4ff8-aa96-460615fbd47d"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:10 crc kubenswrapper[4874]: I0217 16:18:10.575486 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07fc5262-d078-4ff8-aa96-460615fbd47d-kube-api-access-z2gbs" (OuterVolumeSpecName: "kube-api-access-z2gbs") pod "07fc5262-d078-4ff8-aa96-460615fbd47d" (UID: "07fc5262-d078-4ff8-aa96-460615fbd47d"). InnerVolumeSpecName "kube-api-access-z2gbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:10 crc kubenswrapper[4874]: I0217 16:18:10.576289 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07fc5262-d078-4ff8-aa96-460615fbd47d-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "07fc5262-d078-4ff8-aa96-460615fbd47d" (UID: "07fc5262-d078-4ff8-aa96-460615fbd47d"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:10 crc kubenswrapper[4874]: I0217 16:18:10.673252 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2gbs\" (UniqueName: \"kubernetes.io/projected/07fc5262-d078-4ff8-aa96-460615fbd47d-kube-api-access-z2gbs\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:10 crc kubenswrapper[4874]: I0217 16:18:10.673328 4874 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/07fc5262-d078-4ff8-aa96-460615fbd47d-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:10 crc kubenswrapper[4874]: I0217 16:18:10.673343 4874 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/07fc5262-d078-4ff8-aa96-460615fbd47d-console-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:10 crc kubenswrapper[4874]: I0217 16:18:10.673355 4874 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/07fc5262-d078-4ff8-aa96-460615fbd47d-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:10 crc kubenswrapper[4874]: I0217 16:18:10.673368 4874 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/07fc5262-d078-4ff8-aa96-460615fbd47d-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:10 crc kubenswrapper[4874]: I0217 16:18:10.673379 4874 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/07fc5262-d078-4ff8-aa96-460615fbd47d-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:10 crc kubenswrapper[4874]: I0217 16:18:10.673391 4874 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07fc5262-d078-4ff8-aa96-460615fbd47d-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:10 crc 
kubenswrapper[4874]: I0217 16:18:10.918767 4874 scope.go:117] "RemoveContainer" containerID="eaeb4006cf5dbd34f13e3d518abca994f130b48a5bae2789ca84822437be86ec" Feb 17 16:18:11 crc kubenswrapper[4874]: I0217 16:18:11.194357 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-65c4f977c4-rpvsb" Feb 17 16:18:11 crc kubenswrapper[4874]: I0217 16:18:11.194416 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ldl9" event={"ID":"8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6","Type":"ContainerStarted","Data":"6b4eb660135da8dfb8f822f65d1110a3f19f57a1c9daffd891e878552048e936"} Feb 17 16:18:11 crc kubenswrapper[4874]: I0217 16:18:11.197168 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-65c4f977c4-rpvsb" event={"ID":"07fc5262-d078-4ff8-aa96-460615fbd47d","Type":"ContainerDied","Data":"540cc54c2a74349e3a5570035ff79d42f582720788f03aea43295f89eb6d03a2"} Feb 17 16:18:11 crc kubenswrapper[4874]: I0217 16:18:11.238498 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7ldl9" podStartSLOduration=1.736971493 podStartE2EDuration="4.23848399s" podCreationTimestamp="2026-02-17 16:18:07 +0000 UTC" firstStartedPulling="2026-02-17 16:18:08.144547626 +0000 UTC m=+898.438936177" lastFinishedPulling="2026-02-17 16:18:10.646060113 +0000 UTC m=+900.940448674" observedRunningTime="2026-02-17 16:18:11.216258728 +0000 UTC m=+901.510647289" watchObservedRunningTime="2026-02-17 16:18:11.23848399 +0000 UTC m=+901.532872551" Feb 17 16:18:11 crc kubenswrapper[4874]: I0217 16:18:11.253630 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-65c4f977c4-rpvsb"] Feb 17 16:18:11 crc kubenswrapper[4874]: I0217 16:18:11.260593 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-65c4f977c4-rpvsb"] Feb 17 16:18:11 crc kubenswrapper[4874]: I0217 
16:18:11.826531 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kn5sx" Feb 17 16:18:11 crc kubenswrapper[4874]: I0217 16:18:11.826877 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kn5sx" Feb 17 16:18:11 crc kubenswrapper[4874]: I0217 16:18:11.882114 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kn5sx" Feb 17 16:18:12 crc kubenswrapper[4874]: I0217 16:18:12.243773 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kn5sx" Feb 17 16:18:12 crc kubenswrapper[4874]: I0217 16:18:12.476801 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07fc5262-d078-4ff8-aa96-460615fbd47d" path="/var/lib/kubelet/pods/07fc5262-d078-4ff8-aa96-460615fbd47d/volumes" Feb 17 16:18:14 crc kubenswrapper[4874]: I0217 16:18:14.062329 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kn5sx"] Feb 17 16:18:14 crc kubenswrapper[4874]: I0217 16:18:14.227427 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kn5sx" podUID="f0c653ba-ebfc-4722-b53e-b7e5b89af9b3" containerName="registry-server" containerID="cri-o://7ec34baeb21f1d0ab1737038188cb2189e2876bdb7f1cbe32c6d9678e2d5ffa8" gracePeriod=2 Feb 17 16:18:14 crc kubenswrapper[4874]: I0217 16:18:14.815584 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kn5sx" Feb 17 16:18:14 crc kubenswrapper[4874]: I0217 16:18:14.851223 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0c653ba-ebfc-4722-b53e-b7e5b89af9b3-catalog-content\") pod \"f0c653ba-ebfc-4722-b53e-b7e5b89af9b3\" (UID: \"f0c653ba-ebfc-4722-b53e-b7e5b89af9b3\") " Feb 17 16:18:14 crc kubenswrapper[4874]: I0217 16:18:14.851335 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56vc7\" (UniqueName: \"kubernetes.io/projected/f0c653ba-ebfc-4722-b53e-b7e5b89af9b3-kube-api-access-56vc7\") pod \"f0c653ba-ebfc-4722-b53e-b7e5b89af9b3\" (UID: \"f0c653ba-ebfc-4722-b53e-b7e5b89af9b3\") " Feb 17 16:18:14 crc kubenswrapper[4874]: I0217 16:18:14.851444 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0c653ba-ebfc-4722-b53e-b7e5b89af9b3-utilities\") pod \"f0c653ba-ebfc-4722-b53e-b7e5b89af9b3\" (UID: \"f0c653ba-ebfc-4722-b53e-b7e5b89af9b3\") " Feb 17 16:18:14 crc kubenswrapper[4874]: I0217 16:18:14.852434 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0c653ba-ebfc-4722-b53e-b7e5b89af9b3-utilities" (OuterVolumeSpecName: "utilities") pod "f0c653ba-ebfc-4722-b53e-b7e5b89af9b3" (UID: "f0c653ba-ebfc-4722-b53e-b7e5b89af9b3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:18:14 crc kubenswrapper[4874]: I0217 16:18:14.862047 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0c653ba-ebfc-4722-b53e-b7e5b89af9b3-kube-api-access-56vc7" (OuterVolumeSpecName: "kube-api-access-56vc7") pod "f0c653ba-ebfc-4722-b53e-b7e5b89af9b3" (UID: "f0c653ba-ebfc-4722-b53e-b7e5b89af9b3"). InnerVolumeSpecName "kube-api-access-56vc7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:14 crc kubenswrapper[4874]: I0217 16:18:14.895369 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0c653ba-ebfc-4722-b53e-b7e5b89af9b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f0c653ba-ebfc-4722-b53e-b7e5b89af9b3" (UID: "f0c653ba-ebfc-4722-b53e-b7e5b89af9b3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:18:14 crc kubenswrapper[4874]: I0217 16:18:14.952909 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0c653ba-ebfc-4722-b53e-b7e5b89af9b3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:14 crc kubenswrapper[4874]: I0217 16:18:14.952943 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56vc7\" (UniqueName: \"kubernetes.io/projected/f0c653ba-ebfc-4722-b53e-b7e5b89af9b3-kube-api-access-56vc7\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:14 crc kubenswrapper[4874]: I0217 16:18:14.952954 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0c653ba-ebfc-4722-b53e-b7e5b89af9b3-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:15 crc kubenswrapper[4874]: I0217 16:18:15.235644 4874 generic.go:334] "Generic (PLEG): container finished" podID="f0c653ba-ebfc-4722-b53e-b7e5b89af9b3" containerID="7ec34baeb21f1d0ab1737038188cb2189e2876bdb7f1cbe32c6d9678e2d5ffa8" exitCode=0 Feb 17 16:18:15 crc kubenswrapper[4874]: I0217 16:18:15.235704 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kn5sx" Feb 17 16:18:15 crc kubenswrapper[4874]: I0217 16:18:15.235697 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kn5sx" event={"ID":"f0c653ba-ebfc-4722-b53e-b7e5b89af9b3","Type":"ContainerDied","Data":"7ec34baeb21f1d0ab1737038188cb2189e2876bdb7f1cbe32c6d9678e2d5ffa8"} Feb 17 16:18:15 crc kubenswrapper[4874]: I0217 16:18:15.235774 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kn5sx" event={"ID":"f0c653ba-ebfc-4722-b53e-b7e5b89af9b3","Type":"ContainerDied","Data":"7e3f85de02996540b8fc5103a1ede29b2cca6a00f80ec4790c14127feec0accf"} Feb 17 16:18:15 crc kubenswrapper[4874]: I0217 16:18:15.235799 4874 scope.go:117] "RemoveContainer" containerID="7ec34baeb21f1d0ab1737038188cb2189e2876bdb7f1cbe32c6d9678e2d5ffa8" Feb 17 16:18:15 crc kubenswrapper[4874]: I0217 16:18:15.261263 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kn5sx"] Feb 17 16:18:15 crc kubenswrapper[4874]: I0217 16:18:15.261413 4874 scope.go:117] "RemoveContainer" containerID="3a0cd73184324c74969956d3c739eb4897aa421e73ebf05adf37b598d51036bf" Feb 17 16:18:15 crc kubenswrapper[4874]: I0217 16:18:15.268514 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kn5sx"] Feb 17 16:18:15 crc kubenswrapper[4874]: I0217 16:18:15.286111 4874 scope.go:117] "RemoveContainer" containerID="7c49a934ddd879622b2fd586afde31f4381abb80b2235fa610b3b69ab4fdf7b4" Feb 17 16:18:15 crc kubenswrapper[4874]: I0217 16:18:15.314110 4874 scope.go:117] "RemoveContainer" containerID="7ec34baeb21f1d0ab1737038188cb2189e2876bdb7f1cbe32c6d9678e2d5ffa8" Feb 17 16:18:15 crc kubenswrapper[4874]: E0217 16:18:15.314512 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7ec34baeb21f1d0ab1737038188cb2189e2876bdb7f1cbe32c6d9678e2d5ffa8\": container with ID starting with 7ec34baeb21f1d0ab1737038188cb2189e2876bdb7f1cbe32c6d9678e2d5ffa8 not found: ID does not exist" containerID="7ec34baeb21f1d0ab1737038188cb2189e2876bdb7f1cbe32c6d9678e2d5ffa8" Feb 17 16:18:15 crc kubenswrapper[4874]: I0217 16:18:15.314562 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ec34baeb21f1d0ab1737038188cb2189e2876bdb7f1cbe32c6d9678e2d5ffa8"} err="failed to get container status \"7ec34baeb21f1d0ab1737038188cb2189e2876bdb7f1cbe32c6d9678e2d5ffa8\": rpc error: code = NotFound desc = could not find container \"7ec34baeb21f1d0ab1737038188cb2189e2876bdb7f1cbe32c6d9678e2d5ffa8\": container with ID starting with 7ec34baeb21f1d0ab1737038188cb2189e2876bdb7f1cbe32c6d9678e2d5ffa8 not found: ID does not exist" Feb 17 16:18:15 crc kubenswrapper[4874]: I0217 16:18:15.314597 4874 scope.go:117] "RemoveContainer" containerID="3a0cd73184324c74969956d3c739eb4897aa421e73ebf05adf37b598d51036bf" Feb 17 16:18:15 crc kubenswrapper[4874]: E0217 16:18:15.314923 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a0cd73184324c74969956d3c739eb4897aa421e73ebf05adf37b598d51036bf\": container with ID starting with 3a0cd73184324c74969956d3c739eb4897aa421e73ebf05adf37b598d51036bf not found: ID does not exist" containerID="3a0cd73184324c74969956d3c739eb4897aa421e73ebf05adf37b598d51036bf" Feb 17 16:18:15 crc kubenswrapper[4874]: I0217 16:18:15.314963 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a0cd73184324c74969956d3c739eb4897aa421e73ebf05adf37b598d51036bf"} err="failed to get container status \"3a0cd73184324c74969956d3c739eb4897aa421e73ebf05adf37b598d51036bf\": rpc error: code = NotFound desc = could not find container \"3a0cd73184324c74969956d3c739eb4897aa421e73ebf05adf37b598d51036bf\": container with ID 
starting with 3a0cd73184324c74969956d3c739eb4897aa421e73ebf05adf37b598d51036bf not found: ID does not exist" Feb 17 16:18:15 crc kubenswrapper[4874]: I0217 16:18:15.314991 4874 scope.go:117] "RemoveContainer" containerID="7c49a934ddd879622b2fd586afde31f4381abb80b2235fa610b3b69ab4fdf7b4" Feb 17 16:18:15 crc kubenswrapper[4874]: E0217 16:18:15.315309 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c49a934ddd879622b2fd586afde31f4381abb80b2235fa610b3b69ab4fdf7b4\": container with ID starting with 7c49a934ddd879622b2fd586afde31f4381abb80b2235fa610b3b69ab4fdf7b4 not found: ID does not exist" containerID="7c49a934ddd879622b2fd586afde31f4381abb80b2235fa610b3b69ab4fdf7b4" Feb 17 16:18:15 crc kubenswrapper[4874]: I0217 16:18:15.315335 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c49a934ddd879622b2fd586afde31f4381abb80b2235fa610b3b69ab4fdf7b4"} err="failed to get container status \"7c49a934ddd879622b2fd586afde31f4381abb80b2235fa610b3b69ab4fdf7b4\": rpc error: code = NotFound desc = could not find container \"7c49a934ddd879622b2fd586afde31f4381abb80b2235fa610b3b69ab4fdf7b4\": container with ID starting with 7c49a934ddd879622b2fd586afde31f4381abb80b2235fa610b3b69ab4fdf7b4 not found: ID does not exist" Feb 17 16:18:16 crc kubenswrapper[4874]: I0217 16:18:16.476223 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0c653ba-ebfc-4722-b53e-b7e5b89af9b3" path="/var/lib/kubelet/pods/f0c653ba-ebfc-4722-b53e-b7e5b89af9b3/volumes" Feb 17 16:18:16 crc kubenswrapper[4874]: I0217 16:18:16.553852 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m52bb"] Feb 17 16:18:16 crc kubenswrapper[4874]: E0217 16:18:16.554443 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0c653ba-ebfc-4722-b53e-b7e5b89af9b3" 
containerName="extract-content" Feb 17 16:18:16 crc kubenswrapper[4874]: I0217 16:18:16.554543 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0c653ba-ebfc-4722-b53e-b7e5b89af9b3" containerName="extract-content" Feb 17 16:18:16 crc kubenswrapper[4874]: E0217 16:18:16.554625 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07fc5262-d078-4ff8-aa96-460615fbd47d" containerName="console" Feb 17 16:18:16 crc kubenswrapper[4874]: I0217 16:18:16.554712 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="07fc5262-d078-4ff8-aa96-460615fbd47d" containerName="console" Feb 17 16:18:16 crc kubenswrapper[4874]: E0217 16:18:16.554805 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0c653ba-ebfc-4722-b53e-b7e5b89af9b3" containerName="registry-server" Feb 17 16:18:16 crc kubenswrapper[4874]: I0217 16:18:16.554878 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0c653ba-ebfc-4722-b53e-b7e5b89af9b3" containerName="registry-server" Feb 17 16:18:16 crc kubenswrapper[4874]: E0217 16:18:16.554967 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0c653ba-ebfc-4722-b53e-b7e5b89af9b3" containerName="extract-utilities" Feb 17 16:18:16 crc kubenswrapper[4874]: I0217 16:18:16.555048 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0c653ba-ebfc-4722-b53e-b7e5b89af9b3" containerName="extract-utilities" Feb 17 16:18:16 crc kubenswrapper[4874]: I0217 16:18:16.555348 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0c653ba-ebfc-4722-b53e-b7e5b89af9b3" containerName="registry-server" Feb 17 16:18:16 crc kubenswrapper[4874]: I0217 16:18:16.555451 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="07fc5262-d078-4ff8-aa96-460615fbd47d" containerName="console" Feb 17 16:18:16 crc kubenswrapper[4874]: I0217 16:18:16.556814 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m52bb" Feb 17 16:18:16 crc kubenswrapper[4874]: I0217 16:18:16.559470 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 17 16:18:16 crc kubenswrapper[4874]: I0217 16:18:16.569861 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m52bb"] Feb 17 16:18:16 crc kubenswrapper[4874]: I0217 16:18:16.582868 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t96fk\" (UniqueName: \"kubernetes.io/projected/14fe6365-4102-4b73-a3ee-c2722b3317e0-kube-api-access-t96fk\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m52bb\" (UID: \"14fe6365-4102-4b73-a3ee-c2722b3317e0\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m52bb" Feb 17 16:18:16 crc kubenswrapper[4874]: I0217 16:18:16.582920 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/14fe6365-4102-4b73-a3ee-c2722b3317e0-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m52bb\" (UID: \"14fe6365-4102-4b73-a3ee-c2722b3317e0\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m52bb" Feb 17 16:18:16 crc kubenswrapper[4874]: I0217 16:18:16.582954 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/14fe6365-4102-4b73-a3ee-c2722b3317e0-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m52bb\" (UID: \"14fe6365-4102-4b73-a3ee-c2722b3317e0\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m52bb" Feb 17 16:18:16 crc kubenswrapper[4874]: 
I0217 16:18:16.684093 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t96fk\" (UniqueName: \"kubernetes.io/projected/14fe6365-4102-4b73-a3ee-c2722b3317e0-kube-api-access-t96fk\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m52bb\" (UID: \"14fe6365-4102-4b73-a3ee-c2722b3317e0\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m52bb" Feb 17 16:18:16 crc kubenswrapper[4874]: I0217 16:18:16.684142 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/14fe6365-4102-4b73-a3ee-c2722b3317e0-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m52bb\" (UID: \"14fe6365-4102-4b73-a3ee-c2722b3317e0\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m52bb" Feb 17 16:18:16 crc kubenswrapper[4874]: I0217 16:18:16.684174 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/14fe6365-4102-4b73-a3ee-c2722b3317e0-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m52bb\" (UID: \"14fe6365-4102-4b73-a3ee-c2722b3317e0\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m52bb" Feb 17 16:18:16 crc kubenswrapper[4874]: I0217 16:18:16.684659 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/14fe6365-4102-4b73-a3ee-c2722b3317e0-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m52bb\" (UID: \"14fe6365-4102-4b73-a3ee-c2722b3317e0\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m52bb" Feb 17 16:18:16 crc kubenswrapper[4874]: I0217 16:18:16.684751 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/14fe6365-4102-4b73-a3ee-c2722b3317e0-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m52bb\" (UID: \"14fe6365-4102-4b73-a3ee-c2722b3317e0\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m52bb" Feb 17 16:18:16 crc kubenswrapper[4874]: I0217 16:18:16.700241 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t96fk\" (UniqueName: \"kubernetes.io/projected/14fe6365-4102-4b73-a3ee-c2722b3317e0-kube-api-access-t96fk\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m52bb\" (UID: \"14fe6365-4102-4b73-a3ee-c2722b3317e0\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m52bb" Feb 17 16:18:16 crc kubenswrapper[4874]: I0217 16:18:16.872436 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m52bb" Feb 17 16:18:17 crc kubenswrapper[4874]: I0217 16:18:17.365167 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m52bb"] Feb 17 16:18:17 crc kubenswrapper[4874]: I0217 16:18:17.389207 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7ldl9" Feb 17 16:18:17 crc kubenswrapper[4874]: I0217 16:18:17.389357 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7ldl9" Feb 17 16:18:17 crc kubenswrapper[4874]: I0217 16:18:17.443547 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7ldl9" Feb 17 16:18:18 crc kubenswrapper[4874]: I0217 16:18:18.264196 4874 generic.go:334] "Generic (PLEG): container finished" podID="14fe6365-4102-4b73-a3ee-c2722b3317e0" 
containerID="ac8e562373b008d12f6f03ced56f94dd32cf2abbc300533dbfe1e694be7ffc2e" exitCode=0 Feb 17 16:18:18 crc kubenswrapper[4874]: I0217 16:18:18.264453 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m52bb" event={"ID":"14fe6365-4102-4b73-a3ee-c2722b3317e0","Type":"ContainerDied","Data":"ac8e562373b008d12f6f03ced56f94dd32cf2abbc300533dbfe1e694be7ffc2e"} Feb 17 16:18:18 crc kubenswrapper[4874]: I0217 16:18:18.264540 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m52bb" event={"ID":"14fe6365-4102-4b73-a3ee-c2722b3317e0","Type":"ContainerStarted","Data":"21e178986ee552f6c2fb8bcb542c7ba033c831d6b45a40f14f82bf96c2dd6cf2"} Feb 17 16:18:18 crc kubenswrapper[4874]: I0217 16:18:18.313900 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7ldl9" Feb 17 16:18:20 crc kubenswrapper[4874]: I0217 16:18:20.280571 4874 generic.go:334] "Generic (PLEG): container finished" podID="14fe6365-4102-4b73-a3ee-c2722b3317e0" containerID="42581c5bd961c3487d71547d3e17c7e20a9ebac0bfe1344628768238a14f597d" exitCode=0 Feb 17 16:18:20 crc kubenswrapper[4874]: I0217 16:18:20.280681 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m52bb" event={"ID":"14fe6365-4102-4b73-a3ee-c2722b3317e0","Type":"ContainerDied","Data":"42581c5bd961c3487d71547d3e17c7e20a9ebac0bfe1344628768238a14f597d"} Feb 17 16:18:21 crc kubenswrapper[4874]: I0217 16:18:21.258768 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7ldl9"] Feb 17 16:18:21 crc kubenswrapper[4874]: I0217 16:18:21.312654 4874 generic.go:334] "Generic (PLEG): container finished" podID="14fe6365-4102-4b73-a3ee-c2722b3317e0" 
containerID="22c6bf6a48b303acfbc665527a286f9f2d5e9e535285c6b6d714c4233d9b2807" exitCode=0 Feb 17 16:18:21 crc kubenswrapper[4874]: I0217 16:18:21.312746 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m52bb" event={"ID":"14fe6365-4102-4b73-a3ee-c2722b3317e0","Type":"ContainerDied","Data":"22c6bf6a48b303acfbc665527a286f9f2d5e9e535285c6b6d714c4233d9b2807"} Feb 17 16:18:21 crc kubenswrapper[4874]: I0217 16:18:21.312902 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7ldl9" podUID="8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6" containerName="registry-server" containerID="cri-o://6b4eb660135da8dfb8f822f65d1110a3f19f57a1c9daffd891e878552048e936" gracePeriod=2 Feb 17 16:18:21 crc kubenswrapper[4874]: I0217 16:18:21.722371 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7ldl9" Feb 17 16:18:21 crc kubenswrapper[4874]: I0217 16:18:21.761156 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6-utilities\") pod \"8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6\" (UID: \"8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6\") " Feb 17 16:18:21 crc kubenswrapper[4874]: I0217 16:18:21.761450 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n4z8\" (UniqueName: \"kubernetes.io/projected/8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6-kube-api-access-9n4z8\") pod \"8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6\" (UID: \"8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6\") " Feb 17 16:18:21 crc kubenswrapper[4874]: I0217 16:18:21.761577 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6-catalog-content\") pod 
\"8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6\" (UID: \"8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6\") " Feb 17 16:18:21 crc kubenswrapper[4874]: I0217 16:18:21.771338 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6-utilities" (OuterVolumeSpecName: "utilities") pod "8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6" (UID: "8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:18:21 crc kubenswrapper[4874]: I0217 16:18:21.772887 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6-kube-api-access-9n4z8" (OuterVolumeSpecName: "kube-api-access-9n4z8") pod "8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6" (UID: "8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6"). InnerVolumeSpecName "kube-api-access-9n4z8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:21 crc kubenswrapper[4874]: I0217 16:18:21.832875 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6" (UID: "8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:18:21 crc kubenswrapper[4874]: I0217 16:18:21.863522 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:21 crc kubenswrapper[4874]: I0217 16:18:21.863554 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n4z8\" (UniqueName: \"kubernetes.io/projected/8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6-kube-api-access-9n4z8\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:21 crc kubenswrapper[4874]: I0217 16:18:21.863563 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:22 crc kubenswrapper[4874]: I0217 16:18:22.325417 4874 generic.go:334] "Generic (PLEG): container finished" podID="8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6" containerID="6b4eb660135da8dfb8f822f65d1110a3f19f57a1c9daffd891e878552048e936" exitCode=0 Feb 17 16:18:22 crc kubenswrapper[4874]: I0217 16:18:22.325473 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ldl9" event={"ID":"8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6","Type":"ContainerDied","Data":"6b4eb660135da8dfb8f822f65d1110a3f19f57a1c9daffd891e878552048e936"} Feb 17 16:18:22 crc kubenswrapper[4874]: I0217 16:18:22.325535 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ldl9" event={"ID":"8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6","Type":"ContainerDied","Data":"656f3335a138c60a6542c4ee29035e61cc75a5fe3c8e6e7f721128643d54ccb4"} Feb 17 16:18:22 crc kubenswrapper[4874]: I0217 16:18:22.325537 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7ldl9" Feb 17 16:18:22 crc kubenswrapper[4874]: I0217 16:18:22.325558 4874 scope.go:117] "RemoveContainer" containerID="6b4eb660135da8dfb8f822f65d1110a3f19f57a1c9daffd891e878552048e936" Feb 17 16:18:22 crc kubenswrapper[4874]: I0217 16:18:22.359379 4874 scope.go:117] "RemoveContainer" containerID="dc46b3a8eaf038e3362df32848338e1b2b156cb7275d3e9e33d1b006d76747cc" Feb 17 16:18:22 crc kubenswrapper[4874]: I0217 16:18:22.387153 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7ldl9"] Feb 17 16:18:22 crc kubenswrapper[4874]: I0217 16:18:22.411414 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7ldl9"] Feb 17 16:18:22 crc kubenswrapper[4874]: I0217 16:18:22.412851 4874 scope.go:117] "RemoveContainer" containerID="0768ac2c5e086f598a26fdd2c5f1c2e00fb9d0a002326a9aba46671924efe161" Feb 17 16:18:22 crc kubenswrapper[4874]: I0217 16:18:22.439723 4874 scope.go:117] "RemoveContainer" containerID="6b4eb660135da8dfb8f822f65d1110a3f19f57a1c9daffd891e878552048e936" Feb 17 16:18:22 crc kubenswrapper[4874]: E0217 16:18:22.441317 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b4eb660135da8dfb8f822f65d1110a3f19f57a1c9daffd891e878552048e936\": container with ID starting with 6b4eb660135da8dfb8f822f65d1110a3f19f57a1c9daffd891e878552048e936 not found: ID does not exist" containerID="6b4eb660135da8dfb8f822f65d1110a3f19f57a1c9daffd891e878552048e936" Feb 17 16:18:22 crc kubenswrapper[4874]: I0217 16:18:22.441355 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b4eb660135da8dfb8f822f65d1110a3f19f57a1c9daffd891e878552048e936"} err="failed to get container status \"6b4eb660135da8dfb8f822f65d1110a3f19f57a1c9daffd891e878552048e936\": rpc error: code = NotFound desc = could not find 
container \"6b4eb660135da8dfb8f822f65d1110a3f19f57a1c9daffd891e878552048e936\": container with ID starting with 6b4eb660135da8dfb8f822f65d1110a3f19f57a1c9daffd891e878552048e936 not found: ID does not exist" Feb 17 16:18:22 crc kubenswrapper[4874]: I0217 16:18:22.441383 4874 scope.go:117] "RemoveContainer" containerID="dc46b3a8eaf038e3362df32848338e1b2b156cb7275d3e9e33d1b006d76747cc" Feb 17 16:18:22 crc kubenswrapper[4874]: E0217 16:18:22.441951 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc46b3a8eaf038e3362df32848338e1b2b156cb7275d3e9e33d1b006d76747cc\": container with ID starting with dc46b3a8eaf038e3362df32848338e1b2b156cb7275d3e9e33d1b006d76747cc not found: ID does not exist" containerID="dc46b3a8eaf038e3362df32848338e1b2b156cb7275d3e9e33d1b006d76747cc" Feb 17 16:18:22 crc kubenswrapper[4874]: I0217 16:18:22.442020 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc46b3a8eaf038e3362df32848338e1b2b156cb7275d3e9e33d1b006d76747cc"} err="failed to get container status \"dc46b3a8eaf038e3362df32848338e1b2b156cb7275d3e9e33d1b006d76747cc\": rpc error: code = NotFound desc = could not find container \"dc46b3a8eaf038e3362df32848338e1b2b156cb7275d3e9e33d1b006d76747cc\": container with ID starting with dc46b3a8eaf038e3362df32848338e1b2b156cb7275d3e9e33d1b006d76747cc not found: ID does not exist" Feb 17 16:18:22 crc kubenswrapper[4874]: I0217 16:18:22.442065 4874 scope.go:117] "RemoveContainer" containerID="0768ac2c5e086f598a26fdd2c5f1c2e00fb9d0a002326a9aba46671924efe161" Feb 17 16:18:22 crc kubenswrapper[4874]: E0217 16:18:22.442680 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0768ac2c5e086f598a26fdd2c5f1c2e00fb9d0a002326a9aba46671924efe161\": container with ID starting with 0768ac2c5e086f598a26fdd2c5f1c2e00fb9d0a002326a9aba46671924efe161 not found: ID does 
not exist" containerID="0768ac2c5e086f598a26fdd2c5f1c2e00fb9d0a002326a9aba46671924efe161" Feb 17 16:18:22 crc kubenswrapper[4874]: I0217 16:18:22.442720 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0768ac2c5e086f598a26fdd2c5f1c2e00fb9d0a002326a9aba46671924efe161"} err="failed to get container status \"0768ac2c5e086f598a26fdd2c5f1c2e00fb9d0a002326a9aba46671924efe161\": rpc error: code = NotFound desc = could not find container \"0768ac2c5e086f598a26fdd2c5f1c2e00fb9d0a002326a9aba46671924efe161\": container with ID starting with 0768ac2c5e086f598a26fdd2c5f1c2e00fb9d0a002326a9aba46671924efe161 not found: ID does not exist" Feb 17 16:18:22 crc kubenswrapper[4874]: I0217 16:18:22.467709 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6" path="/var/lib/kubelet/pods/8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6/volumes" Feb 17 16:18:22 crc kubenswrapper[4874]: I0217 16:18:22.712495 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m52bb" Feb 17 16:18:22 crc kubenswrapper[4874]: I0217 16:18:22.777176 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t96fk\" (UniqueName: \"kubernetes.io/projected/14fe6365-4102-4b73-a3ee-c2722b3317e0-kube-api-access-t96fk\") pod \"14fe6365-4102-4b73-a3ee-c2722b3317e0\" (UID: \"14fe6365-4102-4b73-a3ee-c2722b3317e0\") " Feb 17 16:18:22 crc kubenswrapper[4874]: I0217 16:18:22.777297 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/14fe6365-4102-4b73-a3ee-c2722b3317e0-bundle\") pod \"14fe6365-4102-4b73-a3ee-c2722b3317e0\" (UID: \"14fe6365-4102-4b73-a3ee-c2722b3317e0\") " Feb 17 16:18:22 crc kubenswrapper[4874]: I0217 16:18:22.777469 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/14fe6365-4102-4b73-a3ee-c2722b3317e0-util\") pod \"14fe6365-4102-4b73-a3ee-c2722b3317e0\" (UID: \"14fe6365-4102-4b73-a3ee-c2722b3317e0\") " Feb 17 16:18:22 crc kubenswrapper[4874]: I0217 16:18:22.779813 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14fe6365-4102-4b73-a3ee-c2722b3317e0-bundle" (OuterVolumeSpecName: "bundle") pod "14fe6365-4102-4b73-a3ee-c2722b3317e0" (UID: "14fe6365-4102-4b73-a3ee-c2722b3317e0"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:18:22 crc kubenswrapper[4874]: I0217 16:18:22.781637 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14fe6365-4102-4b73-a3ee-c2722b3317e0-kube-api-access-t96fk" (OuterVolumeSpecName: "kube-api-access-t96fk") pod "14fe6365-4102-4b73-a3ee-c2722b3317e0" (UID: "14fe6365-4102-4b73-a3ee-c2722b3317e0"). InnerVolumeSpecName "kube-api-access-t96fk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:22 crc kubenswrapper[4874]: I0217 16:18:22.880131 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t96fk\" (UniqueName: \"kubernetes.io/projected/14fe6365-4102-4b73-a3ee-c2722b3317e0-kube-api-access-t96fk\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:22 crc kubenswrapper[4874]: I0217 16:18:22.880181 4874 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/14fe6365-4102-4b73-a3ee-c2722b3317e0-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:23 crc kubenswrapper[4874]: I0217 16:18:23.064751 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14fe6365-4102-4b73-a3ee-c2722b3317e0-util" (OuterVolumeSpecName: "util") pod "14fe6365-4102-4b73-a3ee-c2722b3317e0" (UID: "14fe6365-4102-4b73-a3ee-c2722b3317e0"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:18:23 crc kubenswrapper[4874]: I0217 16:18:23.082857 4874 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/14fe6365-4102-4b73-a3ee-c2722b3317e0-util\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:23 crc kubenswrapper[4874]: I0217 16:18:23.336454 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m52bb" event={"ID":"14fe6365-4102-4b73-a3ee-c2722b3317e0","Type":"ContainerDied","Data":"21e178986ee552f6c2fb8bcb542c7ba033c831d6b45a40f14f82bf96c2dd6cf2"} Feb 17 16:18:23 crc kubenswrapper[4874]: I0217 16:18:23.337189 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21e178986ee552f6c2fb8bcb542c7ba033c831d6b45a40f14f82bf96c2dd6cf2" Feb 17 16:18:23 crc kubenswrapper[4874]: I0217 16:18:23.336558 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m52bb" Feb 17 16:18:30 crc kubenswrapper[4874]: I0217 16:18:30.664480 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qsmmj"] Feb 17 16:18:30 crc kubenswrapper[4874]: E0217 16:18:30.665115 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6" containerName="extract-utilities" Feb 17 16:18:30 crc kubenswrapper[4874]: I0217 16:18:30.665125 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6" containerName="extract-utilities" Feb 17 16:18:30 crc kubenswrapper[4874]: E0217 16:18:30.665135 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6" containerName="registry-server" Feb 17 16:18:30 crc kubenswrapper[4874]: I0217 16:18:30.665140 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6" containerName="registry-server" Feb 17 16:18:30 crc kubenswrapper[4874]: E0217 16:18:30.665153 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14fe6365-4102-4b73-a3ee-c2722b3317e0" containerName="extract" Feb 17 16:18:30 crc kubenswrapper[4874]: I0217 16:18:30.665159 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="14fe6365-4102-4b73-a3ee-c2722b3317e0" containerName="extract" Feb 17 16:18:30 crc kubenswrapper[4874]: E0217 16:18:30.665172 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14fe6365-4102-4b73-a3ee-c2722b3317e0" containerName="util" Feb 17 16:18:30 crc kubenswrapper[4874]: I0217 16:18:30.665177 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="14fe6365-4102-4b73-a3ee-c2722b3317e0" containerName="util" Feb 17 16:18:30 crc kubenswrapper[4874]: E0217 16:18:30.665188 4874 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="14fe6365-4102-4b73-a3ee-c2722b3317e0" containerName="pull" Feb 17 16:18:30 crc kubenswrapper[4874]: I0217 16:18:30.665193 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="14fe6365-4102-4b73-a3ee-c2722b3317e0" containerName="pull" Feb 17 16:18:30 crc kubenswrapper[4874]: E0217 16:18:30.665206 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6" containerName="extract-content" Feb 17 16:18:30 crc kubenswrapper[4874]: I0217 16:18:30.665212 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6" containerName="extract-content" Feb 17 16:18:30 crc kubenswrapper[4874]: I0217 16:18:30.665325 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="14fe6365-4102-4b73-a3ee-c2722b3317e0" containerName="extract" Feb 17 16:18:30 crc kubenswrapper[4874]: I0217 16:18:30.665331 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cb7ab43-d6c5-4bc0-9397-7fbbf7a3e7d6" containerName="registry-server" Feb 17 16:18:30 crc kubenswrapper[4874]: I0217 16:18:30.666238 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qsmmj" Feb 17 16:18:30 crc kubenswrapper[4874]: I0217 16:18:30.680699 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qsmmj"] Feb 17 16:18:30 crc kubenswrapper[4874]: I0217 16:18:30.726510 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26ccc14b-6fe5-4e88-85ee-c5080be814e7-utilities\") pod \"certified-operators-qsmmj\" (UID: \"26ccc14b-6fe5-4e88-85ee-c5080be814e7\") " pod="openshift-marketplace/certified-operators-qsmmj" Feb 17 16:18:30 crc kubenswrapper[4874]: I0217 16:18:30.726659 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-288wd\" (UniqueName: \"kubernetes.io/projected/26ccc14b-6fe5-4e88-85ee-c5080be814e7-kube-api-access-288wd\") pod \"certified-operators-qsmmj\" (UID: \"26ccc14b-6fe5-4e88-85ee-c5080be814e7\") " pod="openshift-marketplace/certified-operators-qsmmj" Feb 17 16:18:30 crc kubenswrapper[4874]: I0217 16:18:30.726733 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26ccc14b-6fe5-4e88-85ee-c5080be814e7-catalog-content\") pod \"certified-operators-qsmmj\" (UID: \"26ccc14b-6fe5-4e88-85ee-c5080be814e7\") " pod="openshift-marketplace/certified-operators-qsmmj" Feb 17 16:18:30 crc kubenswrapper[4874]: I0217 16:18:30.828241 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-288wd\" (UniqueName: \"kubernetes.io/projected/26ccc14b-6fe5-4e88-85ee-c5080be814e7-kube-api-access-288wd\") pod \"certified-operators-qsmmj\" (UID: \"26ccc14b-6fe5-4e88-85ee-c5080be814e7\") " pod="openshift-marketplace/certified-operators-qsmmj" Feb 17 16:18:30 crc kubenswrapper[4874]: I0217 16:18:30.828337 4874 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26ccc14b-6fe5-4e88-85ee-c5080be814e7-catalog-content\") pod \"certified-operators-qsmmj\" (UID: \"26ccc14b-6fe5-4e88-85ee-c5080be814e7\") " pod="openshift-marketplace/certified-operators-qsmmj" Feb 17 16:18:30 crc kubenswrapper[4874]: I0217 16:18:30.828400 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26ccc14b-6fe5-4e88-85ee-c5080be814e7-utilities\") pod \"certified-operators-qsmmj\" (UID: \"26ccc14b-6fe5-4e88-85ee-c5080be814e7\") " pod="openshift-marketplace/certified-operators-qsmmj" Feb 17 16:18:30 crc kubenswrapper[4874]: I0217 16:18:30.828872 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26ccc14b-6fe5-4e88-85ee-c5080be814e7-catalog-content\") pod \"certified-operators-qsmmj\" (UID: \"26ccc14b-6fe5-4e88-85ee-c5080be814e7\") " pod="openshift-marketplace/certified-operators-qsmmj" Feb 17 16:18:30 crc kubenswrapper[4874]: I0217 16:18:30.829206 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26ccc14b-6fe5-4e88-85ee-c5080be814e7-utilities\") pod \"certified-operators-qsmmj\" (UID: \"26ccc14b-6fe5-4e88-85ee-c5080be814e7\") " pod="openshift-marketplace/certified-operators-qsmmj" Feb 17 16:18:30 crc kubenswrapper[4874]: I0217 16:18:30.850679 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-288wd\" (UniqueName: \"kubernetes.io/projected/26ccc14b-6fe5-4e88-85ee-c5080be814e7-kube-api-access-288wd\") pod \"certified-operators-qsmmj\" (UID: \"26ccc14b-6fe5-4e88-85ee-c5080be814e7\") " pod="openshift-marketplace/certified-operators-qsmmj" Feb 17 16:18:30 crc kubenswrapper[4874]: I0217 16:18:30.984522 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qsmmj" Feb 17 16:18:31 crc kubenswrapper[4874]: I0217 16:18:31.446921 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qsmmj"] Feb 17 16:18:32 crc kubenswrapper[4874]: I0217 16:18:32.411810 4874 generic.go:334] "Generic (PLEG): container finished" podID="26ccc14b-6fe5-4e88-85ee-c5080be814e7" containerID="ca84c948ad2256d14aa1a90dfb6a943ccfcfbabe7e716e144a80f02373674d2e" exitCode=0 Feb 17 16:18:32 crc kubenswrapper[4874]: I0217 16:18:32.411864 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsmmj" event={"ID":"26ccc14b-6fe5-4e88-85ee-c5080be814e7","Type":"ContainerDied","Data":"ca84c948ad2256d14aa1a90dfb6a943ccfcfbabe7e716e144a80f02373674d2e"} Feb 17 16:18:32 crc kubenswrapper[4874]: I0217 16:18:32.412152 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsmmj" event={"ID":"26ccc14b-6fe5-4e88-85ee-c5080be814e7","Type":"ContainerStarted","Data":"d5f2fea31ac06df9be08cf6e797b1a0efa179c26aa074b17fe6b15a3b374bb59"} Feb 17 16:18:33 crc kubenswrapper[4874]: I0217 16:18:33.419945 4874 generic.go:334] "Generic (PLEG): container finished" podID="26ccc14b-6fe5-4e88-85ee-c5080be814e7" containerID="33c0089b8b586bce0e2274a2365c9e61a72d3e6ab0cd000bbe52f43976ad0124" exitCode=0 Feb 17 16:18:33 crc kubenswrapper[4874]: I0217 16:18:33.420053 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsmmj" event={"ID":"26ccc14b-6fe5-4e88-85ee-c5080be814e7","Type":"ContainerDied","Data":"33c0089b8b586bce0e2274a2365c9e61a72d3e6ab0cd000bbe52f43976ad0124"} Feb 17 16:18:34 crc kubenswrapper[4874]: I0217 16:18:34.443132 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsmmj" 
event={"ID":"26ccc14b-6fe5-4e88-85ee-c5080be814e7","Type":"ContainerStarted","Data":"349f53a0a3da5c6b0c594a6fe5a2eb3a9b190c672fd647f1715dbd97a2360f2e"} Feb 17 16:18:34 crc kubenswrapper[4874]: I0217 16:18:34.487879 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qsmmj" podStartSLOduration=3.006412372 podStartE2EDuration="4.487861321s" podCreationTimestamp="2026-02-17 16:18:30 +0000 UTC" firstStartedPulling="2026-02-17 16:18:32.414020435 +0000 UTC m=+922.708408996" lastFinishedPulling="2026-02-17 16:18:33.895469384 +0000 UTC m=+924.189857945" observedRunningTime="2026-02-17 16:18:34.479967354 +0000 UTC m=+924.774355915" watchObservedRunningTime="2026-02-17 16:18:34.487861321 +0000 UTC m=+924.782249882" Feb 17 16:18:34 crc kubenswrapper[4874]: I0217 16:18:34.702657 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-b5c586d76-ztwj8"] Feb 17 16:18:34 crc kubenswrapper[4874]: I0217 16:18:34.703583 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-b5c586d76-ztwj8" Feb 17 16:18:34 crc kubenswrapper[4874]: I0217 16:18:34.708205 4874 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-lzrcx" Feb 17 16:18:34 crc kubenswrapper[4874]: I0217 16:18:34.708975 4874 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 17 16:18:34 crc kubenswrapper[4874]: I0217 16:18:34.709214 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 17 16:18:34 crc kubenswrapper[4874]: I0217 16:18:34.709235 4874 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 17 16:18:34 crc kubenswrapper[4874]: I0217 16:18:34.709502 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 17 16:18:34 crc kubenswrapper[4874]: I0217 16:18:34.741702 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-b5c586d76-ztwj8"] Feb 17 16:18:34 crc kubenswrapper[4874]: I0217 16:18:34.786672 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1a2dc1cd-626b-4d07-8260-cbfd9dadfa93-apiservice-cert\") pod \"metallb-operator-controller-manager-b5c586d76-ztwj8\" (UID: \"1a2dc1cd-626b-4d07-8260-cbfd9dadfa93\") " pod="metallb-system/metallb-operator-controller-manager-b5c586d76-ztwj8" Feb 17 16:18:34 crc kubenswrapper[4874]: I0217 16:18:34.786715 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcrd5\" (UniqueName: \"kubernetes.io/projected/1a2dc1cd-626b-4d07-8260-cbfd9dadfa93-kube-api-access-wcrd5\") pod 
\"metallb-operator-controller-manager-b5c586d76-ztwj8\" (UID: \"1a2dc1cd-626b-4d07-8260-cbfd9dadfa93\") " pod="metallb-system/metallb-operator-controller-manager-b5c586d76-ztwj8" Feb 17 16:18:34 crc kubenswrapper[4874]: I0217 16:18:34.786785 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1a2dc1cd-626b-4d07-8260-cbfd9dadfa93-webhook-cert\") pod \"metallb-operator-controller-manager-b5c586d76-ztwj8\" (UID: \"1a2dc1cd-626b-4d07-8260-cbfd9dadfa93\") " pod="metallb-system/metallb-operator-controller-manager-b5c586d76-ztwj8" Feb 17 16:18:34 crc kubenswrapper[4874]: I0217 16:18:34.888190 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1a2dc1cd-626b-4d07-8260-cbfd9dadfa93-apiservice-cert\") pod \"metallb-operator-controller-manager-b5c586d76-ztwj8\" (UID: \"1a2dc1cd-626b-4d07-8260-cbfd9dadfa93\") " pod="metallb-system/metallb-operator-controller-manager-b5c586d76-ztwj8" Feb 17 16:18:34 crc kubenswrapper[4874]: I0217 16:18:34.888227 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcrd5\" (UniqueName: \"kubernetes.io/projected/1a2dc1cd-626b-4d07-8260-cbfd9dadfa93-kube-api-access-wcrd5\") pod \"metallb-operator-controller-manager-b5c586d76-ztwj8\" (UID: \"1a2dc1cd-626b-4d07-8260-cbfd9dadfa93\") " pod="metallb-system/metallb-operator-controller-manager-b5c586d76-ztwj8" Feb 17 16:18:34 crc kubenswrapper[4874]: I0217 16:18:34.888283 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1a2dc1cd-626b-4d07-8260-cbfd9dadfa93-webhook-cert\") pod \"metallb-operator-controller-manager-b5c586d76-ztwj8\" (UID: \"1a2dc1cd-626b-4d07-8260-cbfd9dadfa93\") " pod="metallb-system/metallb-operator-controller-manager-b5c586d76-ztwj8" Feb 17 16:18:34 crc kubenswrapper[4874]: 
I0217 16:18:34.896779 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1a2dc1cd-626b-4d07-8260-cbfd9dadfa93-apiservice-cert\") pod \"metallb-operator-controller-manager-b5c586d76-ztwj8\" (UID: \"1a2dc1cd-626b-4d07-8260-cbfd9dadfa93\") " pod="metallb-system/metallb-operator-controller-manager-b5c586d76-ztwj8" Feb 17 16:18:34 crc kubenswrapper[4874]: I0217 16:18:34.905749 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1a2dc1cd-626b-4d07-8260-cbfd9dadfa93-webhook-cert\") pod \"metallb-operator-controller-manager-b5c586d76-ztwj8\" (UID: \"1a2dc1cd-626b-4d07-8260-cbfd9dadfa93\") " pod="metallb-system/metallb-operator-controller-manager-b5c586d76-ztwj8" Feb 17 16:18:34 crc kubenswrapper[4874]: I0217 16:18:34.906334 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcrd5\" (UniqueName: \"kubernetes.io/projected/1a2dc1cd-626b-4d07-8260-cbfd9dadfa93-kube-api-access-wcrd5\") pod \"metallb-operator-controller-manager-b5c586d76-ztwj8\" (UID: \"1a2dc1cd-626b-4d07-8260-cbfd9dadfa93\") " pod="metallb-system/metallb-operator-controller-manager-b5c586d76-ztwj8" Feb 17 16:18:34 crc kubenswrapper[4874]: I0217 16:18:34.988169 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-756c97bbfd-pv5c9"] Feb 17 16:18:34 crc kubenswrapper[4874]: I0217 16:18:34.989620 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-756c97bbfd-pv5c9" Feb 17 16:18:34 crc kubenswrapper[4874]: I0217 16:18:34.991441 4874 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 17 16:18:34 crc kubenswrapper[4874]: I0217 16:18:34.991526 4874 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 17 16:18:34 crc kubenswrapper[4874]: I0217 16:18:34.991541 4874 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-mb9vm" Feb 17 16:18:35 crc kubenswrapper[4874]: I0217 16:18:35.003884 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-756c97bbfd-pv5c9"] Feb 17 16:18:35 crc kubenswrapper[4874]: I0217 16:18:35.020295 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-b5c586d76-ztwj8" Feb 17 16:18:35 crc kubenswrapper[4874]: I0217 16:18:35.091928 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8177791a-4dee-4a43-9868-c06e52c2b536-webhook-cert\") pod \"metallb-operator-webhook-server-756c97bbfd-pv5c9\" (UID: \"8177791a-4dee-4a43-9868-c06e52c2b536\") " pod="metallb-system/metallb-operator-webhook-server-756c97bbfd-pv5c9" Feb 17 16:18:35 crc kubenswrapper[4874]: I0217 16:18:35.092041 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzllg\" (UniqueName: \"kubernetes.io/projected/8177791a-4dee-4a43-9868-c06e52c2b536-kube-api-access-jzllg\") pod \"metallb-operator-webhook-server-756c97bbfd-pv5c9\" (UID: \"8177791a-4dee-4a43-9868-c06e52c2b536\") " pod="metallb-system/metallb-operator-webhook-server-756c97bbfd-pv5c9" Feb 17 16:18:35 crc kubenswrapper[4874]: I0217 
16:18:35.092063 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8177791a-4dee-4a43-9868-c06e52c2b536-apiservice-cert\") pod \"metallb-operator-webhook-server-756c97bbfd-pv5c9\" (UID: \"8177791a-4dee-4a43-9868-c06e52c2b536\") " pod="metallb-system/metallb-operator-webhook-server-756c97bbfd-pv5c9" Feb 17 16:18:35 crc kubenswrapper[4874]: I0217 16:18:35.193488 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzllg\" (UniqueName: \"kubernetes.io/projected/8177791a-4dee-4a43-9868-c06e52c2b536-kube-api-access-jzllg\") pod \"metallb-operator-webhook-server-756c97bbfd-pv5c9\" (UID: \"8177791a-4dee-4a43-9868-c06e52c2b536\") " pod="metallb-system/metallb-operator-webhook-server-756c97bbfd-pv5c9" Feb 17 16:18:35 crc kubenswrapper[4874]: I0217 16:18:35.193538 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8177791a-4dee-4a43-9868-c06e52c2b536-apiservice-cert\") pod \"metallb-operator-webhook-server-756c97bbfd-pv5c9\" (UID: \"8177791a-4dee-4a43-9868-c06e52c2b536\") " pod="metallb-system/metallb-operator-webhook-server-756c97bbfd-pv5c9" Feb 17 16:18:35 crc kubenswrapper[4874]: I0217 16:18:35.193622 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8177791a-4dee-4a43-9868-c06e52c2b536-webhook-cert\") pod \"metallb-operator-webhook-server-756c97bbfd-pv5c9\" (UID: \"8177791a-4dee-4a43-9868-c06e52c2b536\") " pod="metallb-system/metallb-operator-webhook-server-756c97bbfd-pv5c9" Feb 17 16:18:35 crc kubenswrapper[4874]: I0217 16:18:35.200423 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8177791a-4dee-4a43-9868-c06e52c2b536-apiservice-cert\") pod 
\"metallb-operator-webhook-server-756c97bbfd-pv5c9\" (UID: \"8177791a-4dee-4a43-9868-c06e52c2b536\") " pod="metallb-system/metallb-operator-webhook-server-756c97bbfd-pv5c9" Feb 17 16:18:35 crc kubenswrapper[4874]: I0217 16:18:35.211550 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8177791a-4dee-4a43-9868-c06e52c2b536-webhook-cert\") pod \"metallb-operator-webhook-server-756c97bbfd-pv5c9\" (UID: \"8177791a-4dee-4a43-9868-c06e52c2b536\") " pod="metallb-system/metallb-operator-webhook-server-756c97bbfd-pv5c9" Feb 17 16:18:35 crc kubenswrapper[4874]: I0217 16:18:35.216842 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzllg\" (UniqueName: \"kubernetes.io/projected/8177791a-4dee-4a43-9868-c06e52c2b536-kube-api-access-jzllg\") pod \"metallb-operator-webhook-server-756c97bbfd-pv5c9\" (UID: \"8177791a-4dee-4a43-9868-c06e52c2b536\") " pod="metallb-system/metallb-operator-webhook-server-756c97bbfd-pv5c9" Feb 17 16:18:35 crc kubenswrapper[4874]: I0217 16:18:35.306896 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-756c97bbfd-pv5c9" Feb 17 16:18:35 crc kubenswrapper[4874]: I0217 16:18:35.496792 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-b5c586d76-ztwj8"] Feb 17 16:18:35 crc kubenswrapper[4874]: I0217 16:18:35.795490 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-756c97bbfd-pv5c9"] Feb 17 16:18:35 crc kubenswrapper[4874]: W0217 16:18:35.804749 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8177791a_4dee_4a43_9868_c06e52c2b536.slice/crio-ea33f816d277925f8233c8f6f979306da06ed77a0b08ac63cc9dd7e1d95d4553 WatchSource:0}: Error finding container ea33f816d277925f8233c8f6f979306da06ed77a0b08ac63cc9dd7e1d95d4553: Status 404 returned error can't find the container with id ea33f816d277925f8233c8f6f979306da06ed77a0b08ac63cc9dd7e1d95d4553 Feb 17 16:18:36 crc kubenswrapper[4874]: I0217 16:18:36.469738 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-756c97bbfd-pv5c9" event={"ID":"8177791a-4dee-4a43-9868-c06e52c2b536","Type":"ContainerStarted","Data":"ea33f816d277925f8233c8f6f979306da06ed77a0b08ac63cc9dd7e1d95d4553"} Feb 17 16:18:36 crc kubenswrapper[4874]: I0217 16:18:36.470495 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-b5c586d76-ztwj8" event={"ID":"1a2dc1cd-626b-4d07-8260-cbfd9dadfa93","Type":"ContainerStarted","Data":"a7de132c6c45251bc78d9c007b717fa3f887adf93f9342e7b91b8aa0282f01b6"} Feb 17 16:18:38 crc kubenswrapper[4874]: I0217 16:18:38.490663 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-b5c586d76-ztwj8" 
event={"ID":"1a2dc1cd-626b-4d07-8260-cbfd9dadfa93","Type":"ContainerStarted","Data":"d57d7b08a3fb55e2b75c13824387e2c2207952a81e34eefc070c59eea4ccbab5"} Feb 17 16:18:38 crc kubenswrapper[4874]: I0217 16:18:38.490991 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-b5c586d76-ztwj8" Feb 17 16:18:38 crc kubenswrapper[4874]: I0217 16:18:38.512920 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-b5c586d76-ztwj8" podStartSLOduration=1.869959739 podStartE2EDuration="4.512899003s" podCreationTimestamp="2026-02-17 16:18:34 +0000 UTC" firstStartedPulling="2026-02-17 16:18:35.513904768 +0000 UTC m=+925.808293329" lastFinishedPulling="2026-02-17 16:18:38.156844032 +0000 UTC m=+928.451232593" observedRunningTime="2026-02-17 16:18:38.509668603 +0000 UTC m=+928.804057184" watchObservedRunningTime="2026-02-17 16:18:38.512899003 +0000 UTC m=+928.807287584" Feb 17 16:18:40 crc kubenswrapper[4874]: I0217 16:18:40.504786 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-756c97bbfd-pv5c9" event={"ID":"8177791a-4dee-4a43-9868-c06e52c2b536","Type":"ContainerStarted","Data":"4d6aad6b08558b66beae506d119dfb833de8ed529ddcd5a114cd587ce74e510b"} Feb 17 16:18:40 crc kubenswrapper[4874]: I0217 16:18:40.505030 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-756c97bbfd-pv5c9" Feb 17 16:18:40 crc kubenswrapper[4874]: I0217 16:18:40.548827 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-756c97bbfd-pv5c9" podStartSLOduration=2.363561051 podStartE2EDuration="6.548806597s" podCreationTimestamp="2026-02-17 16:18:34 +0000 UTC" firstStartedPulling="2026-02-17 16:18:35.806612625 +0000 UTC m=+926.101001186" lastFinishedPulling="2026-02-17 
16:18:39.991858131 +0000 UTC m=+930.286246732" observedRunningTime="2026-02-17 16:18:40.545850713 +0000 UTC m=+930.840239294" watchObservedRunningTime="2026-02-17 16:18:40.548806597 +0000 UTC m=+930.843195168" Feb 17 16:18:40 crc kubenswrapper[4874]: I0217 16:18:40.984959 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qsmmj" Feb 17 16:18:40 crc kubenswrapper[4874]: I0217 16:18:40.985241 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qsmmj" Feb 17 16:18:41 crc kubenswrapper[4874]: I0217 16:18:41.048088 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qsmmj" Feb 17 16:18:41 crc kubenswrapper[4874]: I0217 16:18:41.586108 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qsmmj" Feb 17 16:18:43 crc kubenswrapper[4874]: I0217 16:18:43.458686 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qsmmj"] Feb 17 16:18:43 crc kubenswrapper[4874]: I0217 16:18:43.526263 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qsmmj" podUID="26ccc14b-6fe5-4e88-85ee-c5080be814e7" containerName="registry-server" containerID="cri-o://349f53a0a3da5c6b0c594a6fe5a2eb3a9b190c672fd647f1715dbd97a2360f2e" gracePeriod=2 Feb 17 16:18:44 crc kubenswrapper[4874]: I0217 16:18:44.495209 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qsmmj" Feb 17 16:18:44 crc kubenswrapper[4874]: I0217 16:18:44.551374 4874 generic.go:334] "Generic (PLEG): container finished" podID="26ccc14b-6fe5-4e88-85ee-c5080be814e7" containerID="349f53a0a3da5c6b0c594a6fe5a2eb3a9b190c672fd647f1715dbd97a2360f2e" exitCode=0 Feb 17 16:18:44 crc kubenswrapper[4874]: I0217 16:18:44.551422 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsmmj" event={"ID":"26ccc14b-6fe5-4e88-85ee-c5080be814e7","Type":"ContainerDied","Data":"349f53a0a3da5c6b0c594a6fe5a2eb3a9b190c672fd647f1715dbd97a2360f2e"} Feb 17 16:18:44 crc kubenswrapper[4874]: I0217 16:18:44.551491 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsmmj" event={"ID":"26ccc14b-6fe5-4e88-85ee-c5080be814e7","Type":"ContainerDied","Data":"d5f2fea31ac06df9be08cf6e797b1a0efa179c26aa074b17fe6b15a3b374bb59"} Feb 17 16:18:44 crc kubenswrapper[4874]: I0217 16:18:44.551515 4874 scope.go:117] "RemoveContainer" containerID="349f53a0a3da5c6b0c594a6fe5a2eb3a9b190c672fd647f1715dbd97a2360f2e" Feb 17 16:18:44 crc kubenswrapper[4874]: I0217 16:18:44.551453 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qsmmj" Feb 17 16:18:44 crc kubenswrapper[4874]: I0217 16:18:44.556191 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26ccc14b-6fe5-4e88-85ee-c5080be814e7-utilities\") pod \"26ccc14b-6fe5-4e88-85ee-c5080be814e7\" (UID: \"26ccc14b-6fe5-4e88-85ee-c5080be814e7\") " Feb 17 16:18:44 crc kubenswrapper[4874]: I0217 16:18:44.556242 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-288wd\" (UniqueName: \"kubernetes.io/projected/26ccc14b-6fe5-4e88-85ee-c5080be814e7-kube-api-access-288wd\") pod \"26ccc14b-6fe5-4e88-85ee-c5080be814e7\" (UID: \"26ccc14b-6fe5-4e88-85ee-c5080be814e7\") " Feb 17 16:18:44 crc kubenswrapper[4874]: I0217 16:18:44.556288 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26ccc14b-6fe5-4e88-85ee-c5080be814e7-catalog-content\") pod \"26ccc14b-6fe5-4e88-85ee-c5080be814e7\" (UID: \"26ccc14b-6fe5-4e88-85ee-c5080be814e7\") " Feb 17 16:18:44 crc kubenswrapper[4874]: I0217 16:18:44.557196 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26ccc14b-6fe5-4e88-85ee-c5080be814e7-utilities" (OuterVolumeSpecName: "utilities") pod "26ccc14b-6fe5-4e88-85ee-c5080be814e7" (UID: "26ccc14b-6fe5-4e88-85ee-c5080be814e7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:18:44 crc kubenswrapper[4874]: I0217 16:18:44.565388 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26ccc14b-6fe5-4e88-85ee-c5080be814e7-kube-api-access-288wd" (OuterVolumeSpecName: "kube-api-access-288wd") pod "26ccc14b-6fe5-4e88-85ee-c5080be814e7" (UID: "26ccc14b-6fe5-4e88-85ee-c5080be814e7"). InnerVolumeSpecName "kube-api-access-288wd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:44 crc kubenswrapper[4874]: I0217 16:18:44.606254 4874 scope.go:117] "RemoveContainer" containerID="33c0089b8b586bce0e2274a2365c9e61a72d3e6ab0cd000bbe52f43976ad0124" Feb 17 16:18:44 crc kubenswrapper[4874]: I0217 16:18:44.630358 4874 scope.go:117] "RemoveContainer" containerID="ca84c948ad2256d14aa1a90dfb6a943ccfcfbabe7e716e144a80f02373674d2e" Feb 17 16:18:44 crc kubenswrapper[4874]: I0217 16:18:44.654296 4874 scope.go:117] "RemoveContainer" containerID="349f53a0a3da5c6b0c594a6fe5a2eb3a9b190c672fd647f1715dbd97a2360f2e" Feb 17 16:18:44 crc kubenswrapper[4874]: E0217 16:18:44.654880 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"349f53a0a3da5c6b0c594a6fe5a2eb3a9b190c672fd647f1715dbd97a2360f2e\": container with ID starting with 349f53a0a3da5c6b0c594a6fe5a2eb3a9b190c672fd647f1715dbd97a2360f2e not found: ID does not exist" containerID="349f53a0a3da5c6b0c594a6fe5a2eb3a9b190c672fd647f1715dbd97a2360f2e" Feb 17 16:18:44 crc kubenswrapper[4874]: I0217 16:18:44.654930 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"349f53a0a3da5c6b0c594a6fe5a2eb3a9b190c672fd647f1715dbd97a2360f2e"} err="failed to get container status \"349f53a0a3da5c6b0c594a6fe5a2eb3a9b190c672fd647f1715dbd97a2360f2e\": rpc error: code = NotFound desc = could not find container \"349f53a0a3da5c6b0c594a6fe5a2eb3a9b190c672fd647f1715dbd97a2360f2e\": container with ID starting with 349f53a0a3da5c6b0c594a6fe5a2eb3a9b190c672fd647f1715dbd97a2360f2e not found: ID does not exist" Feb 17 16:18:44 crc kubenswrapper[4874]: I0217 16:18:44.654958 4874 scope.go:117] "RemoveContainer" containerID="33c0089b8b586bce0e2274a2365c9e61a72d3e6ab0cd000bbe52f43976ad0124" Feb 17 16:18:44 crc kubenswrapper[4874]: E0217 16:18:44.655593 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"33c0089b8b586bce0e2274a2365c9e61a72d3e6ab0cd000bbe52f43976ad0124\": container with ID starting with 33c0089b8b586bce0e2274a2365c9e61a72d3e6ab0cd000bbe52f43976ad0124 not found: ID does not exist" containerID="33c0089b8b586bce0e2274a2365c9e61a72d3e6ab0cd000bbe52f43976ad0124" Feb 17 16:18:44 crc kubenswrapper[4874]: I0217 16:18:44.655644 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33c0089b8b586bce0e2274a2365c9e61a72d3e6ab0cd000bbe52f43976ad0124"} err="failed to get container status \"33c0089b8b586bce0e2274a2365c9e61a72d3e6ab0cd000bbe52f43976ad0124\": rpc error: code = NotFound desc = could not find container \"33c0089b8b586bce0e2274a2365c9e61a72d3e6ab0cd000bbe52f43976ad0124\": container with ID starting with 33c0089b8b586bce0e2274a2365c9e61a72d3e6ab0cd000bbe52f43976ad0124 not found: ID does not exist" Feb 17 16:18:44 crc kubenswrapper[4874]: I0217 16:18:44.655680 4874 scope.go:117] "RemoveContainer" containerID="ca84c948ad2256d14aa1a90dfb6a943ccfcfbabe7e716e144a80f02373674d2e" Feb 17 16:18:44 crc kubenswrapper[4874]: E0217 16:18:44.656103 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca84c948ad2256d14aa1a90dfb6a943ccfcfbabe7e716e144a80f02373674d2e\": container with ID starting with ca84c948ad2256d14aa1a90dfb6a943ccfcfbabe7e716e144a80f02373674d2e not found: ID does not exist" containerID="ca84c948ad2256d14aa1a90dfb6a943ccfcfbabe7e716e144a80f02373674d2e" Feb 17 16:18:44 crc kubenswrapper[4874]: I0217 16:18:44.656142 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca84c948ad2256d14aa1a90dfb6a943ccfcfbabe7e716e144a80f02373674d2e"} err="failed to get container status \"ca84c948ad2256d14aa1a90dfb6a943ccfcfbabe7e716e144a80f02373674d2e\": rpc error: code = NotFound desc = could not find container \"ca84c948ad2256d14aa1a90dfb6a943ccfcfbabe7e716e144a80f02373674d2e\": 
container with ID starting with ca84c948ad2256d14aa1a90dfb6a943ccfcfbabe7e716e144a80f02373674d2e not found: ID does not exist" Feb 17 16:18:44 crc kubenswrapper[4874]: I0217 16:18:44.657959 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26ccc14b-6fe5-4e88-85ee-c5080be814e7-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:44 crc kubenswrapper[4874]: I0217 16:18:44.657986 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-288wd\" (UniqueName: \"kubernetes.io/projected/26ccc14b-6fe5-4e88-85ee-c5080be814e7-kube-api-access-288wd\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:44 crc kubenswrapper[4874]: I0217 16:18:44.905318 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26ccc14b-6fe5-4e88-85ee-c5080be814e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "26ccc14b-6fe5-4e88-85ee-c5080be814e7" (UID: "26ccc14b-6fe5-4e88-85ee-c5080be814e7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:18:44 crc kubenswrapper[4874]: I0217 16:18:44.962894 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26ccc14b-6fe5-4e88-85ee-c5080be814e7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:45 crc kubenswrapper[4874]: I0217 16:18:45.186257 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qsmmj"] Feb 17 16:18:45 crc kubenswrapper[4874]: I0217 16:18:45.192859 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qsmmj"] Feb 17 16:18:46 crc kubenswrapper[4874]: I0217 16:18:46.467222 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26ccc14b-6fe5-4e88-85ee-c5080be814e7" path="/var/lib/kubelet/pods/26ccc14b-6fe5-4e88-85ee-c5080be814e7/volumes" Feb 17 16:18:55 crc kubenswrapper[4874]: I0217 16:18:55.318434 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-756c97bbfd-pv5c9" Feb 17 16:19:15 crc kubenswrapper[4874]: I0217 16:19:15.022856 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-b5c586d76-ztwj8" Feb 17 16:19:15 crc kubenswrapper[4874]: I0217 16:19:15.819091 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-4xgq6"] Feb 17 16:19:15 crc kubenswrapper[4874]: E0217 16:19:15.819443 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26ccc14b-6fe5-4e88-85ee-c5080be814e7" containerName="extract-utilities" Feb 17 16:19:15 crc kubenswrapper[4874]: I0217 16:19:15.819466 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="26ccc14b-6fe5-4e88-85ee-c5080be814e7" containerName="extract-utilities" Feb 17 16:19:15 crc kubenswrapper[4874]: E0217 16:19:15.819480 4874 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="26ccc14b-6fe5-4e88-85ee-c5080be814e7" containerName="registry-server" Feb 17 16:19:15 crc kubenswrapper[4874]: I0217 16:19:15.819488 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="26ccc14b-6fe5-4e88-85ee-c5080be814e7" containerName="registry-server" Feb 17 16:19:15 crc kubenswrapper[4874]: E0217 16:19:15.819518 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26ccc14b-6fe5-4e88-85ee-c5080be814e7" containerName="extract-content" Feb 17 16:19:15 crc kubenswrapper[4874]: I0217 16:19:15.819527 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="26ccc14b-6fe5-4e88-85ee-c5080be814e7" containerName="extract-content" Feb 17 16:19:15 crc kubenswrapper[4874]: I0217 16:19:15.819704 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="26ccc14b-6fe5-4e88-85ee-c5080be814e7" containerName="registry-server" Feb 17 16:19:15 crc kubenswrapper[4874]: I0217 16:19:15.822795 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-4xgq6" Feb 17 16:19:15 crc kubenswrapper[4874]: I0217 16:19:15.825280 4874 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 17 16:19:15 crc kubenswrapper[4874]: I0217 16:19:15.825582 4874 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-d6pj7" Feb 17 16:19:15 crc kubenswrapper[4874]: I0217 16:19:15.826288 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 17 16:19:15 crc kubenswrapper[4874]: I0217 16:19:15.829805 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-6pq7d"] Feb 17 16:19:15 crc kubenswrapper[4874]: I0217 16:19:15.832243 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-6pq7d" Feb 17 16:19:15 crc kubenswrapper[4874]: I0217 16:19:15.834283 4874 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 17 16:19:15 crc kubenswrapper[4874]: I0217 16:19:15.851458 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-6pq7d"] Feb 17 16:19:15 crc kubenswrapper[4874]: I0217 16:19:15.908284 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-bbthf"] Feb 17 16:19:15 crc kubenswrapper[4874]: I0217 16:19:15.909501 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-bbthf" Feb 17 16:19:15 crc kubenswrapper[4874]: I0217 16:19:15.911948 4874 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 17 16:19:15 crc kubenswrapper[4874]: I0217 16:19:15.911948 4874 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 17 16:19:15 crc kubenswrapper[4874]: I0217 16:19:15.912017 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 17 16:19:15 crc kubenswrapper[4874]: I0217 16:19:15.912075 4874 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-nkzgk" Feb 17 16:19:15 crc kubenswrapper[4874]: I0217 16:19:15.919785 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/feb8be07-358f-49c3-a27c-53054e353a5d-frr-sockets\") pod \"frr-k8s-4xgq6\" (UID: \"feb8be07-358f-49c3-a27c-53054e353a5d\") " pod="metallb-system/frr-k8s-4xgq6" Feb 17 16:19:15 crc kubenswrapper[4874]: I0217 16:19:15.919837 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/abf374ec-8d79-48ac-ac9b-9cf5c81d0adf-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-6pq7d\" (UID: \"abf374ec-8d79-48ac-ac9b-9cf5c81d0adf\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-6pq7d" Feb 17 16:19:15 crc kubenswrapper[4874]: I0217 16:19:15.919888 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xktt8\" (UniqueName: \"kubernetes.io/projected/abf374ec-8d79-48ac-ac9b-9cf5c81d0adf-kube-api-access-xktt8\") pod \"frr-k8s-webhook-server-78b44bf5bb-6pq7d\" (UID: \"abf374ec-8d79-48ac-ac9b-9cf5c81d0adf\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-6pq7d" Feb 17 16:19:15 crc kubenswrapper[4874]: I0217 16:19:15.919927 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/feb8be07-358f-49c3-a27c-53054e353a5d-metrics\") pod \"frr-k8s-4xgq6\" (UID: \"feb8be07-358f-49c3-a27c-53054e353a5d\") " pod="metallb-system/frr-k8s-4xgq6" Feb 17 16:19:15 crc kubenswrapper[4874]: I0217 16:19:15.919988 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/feb8be07-358f-49c3-a27c-53054e353a5d-metrics-certs\") pod \"frr-k8s-4xgq6\" (UID: \"feb8be07-358f-49c3-a27c-53054e353a5d\") " pod="metallb-system/frr-k8s-4xgq6" Feb 17 16:19:15 crc kubenswrapper[4874]: I0217 16:19:15.920019 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/feb8be07-358f-49c3-a27c-53054e353a5d-reloader\") pod \"frr-k8s-4xgq6\" (UID: \"feb8be07-358f-49c3-a27c-53054e353a5d\") " pod="metallb-system/frr-k8s-4xgq6" Feb 17 16:19:15 crc kubenswrapper[4874]: I0217 16:19:15.920038 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hpgp\" 
(UniqueName: \"kubernetes.io/projected/feb8be07-358f-49c3-a27c-53054e353a5d-kube-api-access-9hpgp\") pod \"frr-k8s-4xgq6\" (UID: \"feb8be07-358f-49c3-a27c-53054e353a5d\") " pod="metallb-system/frr-k8s-4xgq6" Feb 17 16:19:15 crc kubenswrapper[4874]: I0217 16:19:15.920071 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/feb8be07-358f-49c3-a27c-53054e353a5d-frr-conf\") pod \"frr-k8s-4xgq6\" (UID: \"feb8be07-358f-49c3-a27c-53054e353a5d\") " pod="metallb-system/frr-k8s-4xgq6" Feb 17 16:19:15 crc kubenswrapper[4874]: I0217 16:19:15.920112 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/feb8be07-358f-49c3-a27c-53054e353a5d-frr-startup\") pod \"frr-k8s-4xgq6\" (UID: \"feb8be07-358f-49c3-a27c-53054e353a5d\") " pod="metallb-system/frr-k8s-4xgq6" Feb 17 16:19:15 crc kubenswrapper[4874]: I0217 16:19:15.933758 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-n9vxs"] Feb 17 16:19:15 crc kubenswrapper[4874]: I0217 16:19:15.935754 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-n9vxs" Feb 17 16:19:15 crc kubenswrapper[4874]: W0217 16:19:15.937757 4874 reflector.go:561] object-"metallb-system"/"controller-certs-secret": failed to list *v1.Secret: secrets "controller-certs-secret" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Feb 17 16:19:15 crc kubenswrapper[4874]: E0217 16:19:15.937810 4874 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"controller-certs-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"controller-certs-secret\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 16:19:15 crc kubenswrapper[4874]: I0217 16:19:15.949300 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-n9vxs"] Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.021795 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/73488a2d-521a-4ccd-a9ea-aa905b51e302-metrics-certs\") pod \"controller-69bbfbf88f-n9vxs\" (UID: \"73488a2d-521a-4ccd-a9ea-aa905b51e302\") " pod="metallb-system/controller-69bbfbf88f-n9vxs" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.021851 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xktt8\" (UniqueName: \"kubernetes.io/projected/abf374ec-8d79-48ac-ac9b-9cf5c81d0adf-kube-api-access-xktt8\") pod \"frr-k8s-webhook-server-78b44bf5bb-6pq7d\" (UID: \"abf374ec-8d79-48ac-ac9b-9cf5c81d0adf\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-6pq7d" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.022154 4874 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zwd4\" (UniqueName: \"kubernetes.io/projected/73488a2d-521a-4ccd-a9ea-aa905b51e302-kube-api-access-2zwd4\") pod \"controller-69bbfbf88f-n9vxs\" (UID: \"73488a2d-521a-4ccd-a9ea-aa905b51e302\") " pod="metallb-system/controller-69bbfbf88f-n9vxs" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.022212 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/feb8be07-358f-49c3-a27c-53054e353a5d-metrics\") pod \"frr-k8s-4xgq6\" (UID: \"feb8be07-358f-49c3-a27c-53054e353a5d\") " pod="metallb-system/frr-k8s-4xgq6" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.022277 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1b81504e-be8e-4fbd-a5c6-c48ee4dea72b-memberlist\") pod \"speaker-bbthf\" (UID: \"1b81504e-be8e-4fbd-a5c6-c48ee4dea72b\") " pod="metallb-system/speaker-bbthf" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.022301 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/1b81504e-be8e-4fbd-a5c6-c48ee4dea72b-metallb-excludel2\") pod \"speaker-bbthf\" (UID: \"1b81504e-be8e-4fbd-a5c6-c48ee4dea72b\") " pod="metallb-system/speaker-bbthf" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.022332 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/feb8be07-358f-49c3-a27c-53054e353a5d-metrics-certs\") pod \"frr-k8s-4xgq6\" (UID: \"feb8be07-358f-49c3-a27c-53054e353a5d\") " pod="metallb-system/frr-k8s-4xgq6" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.022396 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: 
\"kubernetes.io/empty-dir/feb8be07-358f-49c3-a27c-53054e353a5d-reloader\") pod \"frr-k8s-4xgq6\" (UID: \"feb8be07-358f-49c3-a27c-53054e353a5d\") " pod="metallb-system/frr-k8s-4xgq6" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.022421 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hpgp\" (UniqueName: \"kubernetes.io/projected/feb8be07-358f-49c3-a27c-53054e353a5d-kube-api-access-9hpgp\") pod \"frr-k8s-4xgq6\" (UID: \"feb8be07-358f-49c3-a27c-53054e353a5d\") " pod="metallb-system/frr-k8s-4xgq6" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.022466 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1b81504e-be8e-4fbd-a5c6-c48ee4dea72b-metrics-certs\") pod \"speaker-bbthf\" (UID: \"1b81504e-be8e-4fbd-a5c6-c48ee4dea72b\") " pod="metallb-system/speaker-bbthf" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.022507 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/feb8be07-358f-49c3-a27c-53054e353a5d-frr-conf\") pod \"frr-k8s-4xgq6\" (UID: \"feb8be07-358f-49c3-a27c-53054e353a5d\") " pod="metallb-system/frr-k8s-4xgq6" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.022529 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/feb8be07-358f-49c3-a27c-53054e353a5d-frr-startup\") pod \"frr-k8s-4xgq6\" (UID: \"feb8be07-358f-49c3-a27c-53054e353a5d\") " pod="metallb-system/frr-k8s-4xgq6" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.022548 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/feb8be07-358f-49c3-a27c-53054e353a5d-frr-sockets\") pod \"frr-k8s-4xgq6\" (UID: \"feb8be07-358f-49c3-a27c-53054e353a5d\") " 
pod="metallb-system/frr-k8s-4xgq6" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.022587 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/73488a2d-521a-4ccd-a9ea-aa905b51e302-cert\") pod \"controller-69bbfbf88f-n9vxs\" (UID: \"73488a2d-521a-4ccd-a9ea-aa905b51e302\") " pod="metallb-system/controller-69bbfbf88f-n9vxs" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.022616 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsv7k\" (UniqueName: \"kubernetes.io/projected/1b81504e-be8e-4fbd-a5c6-c48ee4dea72b-kube-api-access-gsv7k\") pod \"speaker-bbthf\" (UID: \"1b81504e-be8e-4fbd-a5c6-c48ee4dea72b\") " pod="metallb-system/speaker-bbthf" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.022639 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/abf374ec-8d79-48ac-ac9b-9cf5c81d0adf-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-6pq7d\" (UID: \"abf374ec-8d79-48ac-ac9b-9cf5c81d0adf\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-6pq7d" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.022704 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/feb8be07-358f-49c3-a27c-53054e353a5d-metrics\") pod \"frr-k8s-4xgq6\" (UID: \"feb8be07-358f-49c3-a27c-53054e353a5d\") " pod="metallb-system/frr-k8s-4xgq6" Feb 17 16:19:16 crc kubenswrapper[4874]: E0217 16:19:16.022810 4874 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Feb 17 16:19:16 crc kubenswrapper[4874]: E0217 16:19:16.022828 4874 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Feb 17 16:19:16 crc kubenswrapper[4874]: E0217 16:19:16.022848 4874 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/feb8be07-358f-49c3-a27c-53054e353a5d-metrics-certs podName:feb8be07-358f-49c3-a27c-53054e353a5d nodeName:}" failed. No retries permitted until 2026-02-17 16:19:16.522833582 +0000 UTC m=+966.817222133 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/feb8be07-358f-49c3-a27c-53054e353a5d-metrics-certs") pod "frr-k8s-4xgq6" (UID: "feb8be07-358f-49c3-a27c-53054e353a5d") : secret "frr-k8s-certs-secret" not found Feb 17 16:19:16 crc kubenswrapper[4874]: E0217 16:19:16.022880 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/abf374ec-8d79-48ac-ac9b-9cf5c81d0adf-cert podName:abf374ec-8d79-48ac-ac9b-9cf5c81d0adf nodeName:}" failed. No retries permitted until 2026-02-17 16:19:16.522862802 +0000 UTC m=+966.817251363 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/abf374ec-8d79-48ac-ac9b-9cf5c81d0adf-cert") pod "frr-k8s-webhook-server-78b44bf5bb-6pq7d" (UID: "abf374ec-8d79-48ac-ac9b-9cf5c81d0adf") : secret "frr-k8s-webhook-server-cert" not found Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.023151 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/feb8be07-358f-49c3-a27c-53054e353a5d-frr-conf\") pod \"frr-k8s-4xgq6\" (UID: \"feb8be07-358f-49c3-a27c-53054e353a5d\") " pod="metallb-system/frr-k8s-4xgq6" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.023206 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/feb8be07-358f-49c3-a27c-53054e353a5d-frr-sockets\") pod \"frr-k8s-4xgq6\" (UID: \"feb8be07-358f-49c3-a27c-53054e353a5d\") " pod="metallb-system/frr-k8s-4xgq6" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.023409 4874 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/feb8be07-358f-49c3-a27c-53054e353a5d-reloader\") pod \"frr-k8s-4xgq6\" (UID: \"feb8be07-358f-49c3-a27c-53054e353a5d\") " pod="metallb-system/frr-k8s-4xgq6" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.023974 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/feb8be07-358f-49c3-a27c-53054e353a5d-frr-startup\") pod \"frr-k8s-4xgq6\" (UID: \"feb8be07-358f-49c3-a27c-53054e353a5d\") " pod="metallb-system/frr-k8s-4xgq6" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.042344 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xktt8\" (UniqueName: \"kubernetes.io/projected/abf374ec-8d79-48ac-ac9b-9cf5c81d0adf-kube-api-access-xktt8\") pod \"frr-k8s-webhook-server-78b44bf5bb-6pq7d\" (UID: \"abf374ec-8d79-48ac-ac9b-9cf5c81d0adf\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-6pq7d" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.053816 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hpgp\" (UniqueName: \"kubernetes.io/projected/feb8be07-358f-49c3-a27c-53054e353a5d-kube-api-access-9hpgp\") pod \"frr-k8s-4xgq6\" (UID: \"feb8be07-358f-49c3-a27c-53054e353a5d\") " pod="metallb-system/frr-k8s-4xgq6" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.124594 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsv7k\" (UniqueName: \"kubernetes.io/projected/1b81504e-be8e-4fbd-a5c6-c48ee4dea72b-kube-api-access-gsv7k\") pod \"speaker-bbthf\" (UID: \"1b81504e-be8e-4fbd-a5c6-c48ee4dea72b\") " pod="metallb-system/speaker-bbthf" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.124963 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/73488a2d-521a-4ccd-a9ea-aa905b51e302-metrics-certs\") pod \"controller-69bbfbf88f-n9vxs\" (UID: \"73488a2d-521a-4ccd-a9ea-aa905b51e302\") " pod="metallb-system/controller-69bbfbf88f-n9vxs" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.125007 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zwd4\" (UniqueName: \"kubernetes.io/projected/73488a2d-521a-4ccd-a9ea-aa905b51e302-kube-api-access-2zwd4\") pod \"controller-69bbfbf88f-n9vxs\" (UID: \"73488a2d-521a-4ccd-a9ea-aa905b51e302\") " pod="metallb-system/controller-69bbfbf88f-n9vxs" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.125052 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1b81504e-be8e-4fbd-a5c6-c48ee4dea72b-memberlist\") pod \"speaker-bbthf\" (UID: \"1b81504e-be8e-4fbd-a5c6-c48ee4dea72b\") " pod="metallb-system/speaker-bbthf" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.125080 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/1b81504e-be8e-4fbd-a5c6-c48ee4dea72b-metallb-excludel2\") pod \"speaker-bbthf\" (UID: \"1b81504e-be8e-4fbd-a5c6-c48ee4dea72b\") " pod="metallb-system/speaker-bbthf" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.125170 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1b81504e-be8e-4fbd-a5c6-c48ee4dea72b-metrics-certs\") pod \"speaker-bbthf\" (UID: \"1b81504e-be8e-4fbd-a5c6-c48ee4dea72b\") " pod="metallb-system/speaker-bbthf" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.125220 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/73488a2d-521a-4ccd-a9ea-aa905b51e302-cert\") pod \"controller-69bbfbf88f-n9vxs\" (UID: 
\"73488a2d-521a-4ccd-a9ea-aa905b51e302\") " pod="metallb-system/controller-69bbfbf88f-n9vxs" Feb 17 16:19:16 crc kubenswrapper[4874]: E0217 16:19:16.125285 4874 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 17 16:19:16 crc kubenswrapper[4874]: E0217 16:19:16.125361 4874 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Feb 17 16:19:16 crc kubenswrapper[4874]: E0217 16:19:16.125378 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b81504e-be8e-4fbd-a5c6-c48ee4dea72b-memberlist podName:1b81504e-be8e-4fbd-a5c6-c48ee4dea72b nodeName:}" failed. No retries permitted until 2026-02-17 16:19:16.62535186 +0000 UTC m=+966.919740441 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/1b81504e-be8e-4fbd-a5c6-c48ee4dea72b-memberlist") pod "speaker-bbthf" (UID: "1b81504e-be8e-4fbd-a5c6-c48ee4dea72b") : secret "metallb-memberlist" not found Feb 17 16:19:16 crc kubenswrapper[4874]: E0217 16:19:16.125457 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b81504e-be8e-4fbd-a5c6-c48ee4dea72b-metrics-certs podName:1b81504e-be8e-4fbd-a5c6-c48ee4dea72b nodeName:}" failed. No retries permitted until 2026-02-17 16:19:16.625435292 +0000 UTC m=+966.919823843 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1b81504e-be8e-4fbd-a5c6-c48ee4dea72b-metrics-certs") pod "speaker-bbthf" (UID: "1b81504e-be8e-4fbd-a5c6-c48ee4dea72b") : secret "speaker-certs-secret" not found Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.125745 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/1b81504e-be8e-4fbd-a5c6-c48ee4dea72b-metallb-excludel2\") pod \"speaker-bbthf\" (UID: \"1b81504e-be8e-4fbd-a5c6-c48ee4dea72b\") " pod="metallb-system/speaker-bbthf" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.126595 4874 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.139806 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/73488a2d-521a-4ccd-a9ea-aa905b51e302-cert\") pod \"controller-69bbfbf88f-n9vxs\" (UID: \"73488a2d-521a-4ccd-a9ea-aa905b51e302\") " pod="metallb-system/controller-69bbfbf88f-n9vxs" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.147957 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsv7k\" (UniqueName: \"kubernetes.io/projected/1b81504e-be8e-4fbd-a5c6-c48ee4dea72b-kube-api-access-gsv7k\") pod \"speaker-bbthf\" (UID: \"1b81504e-be8e-4fbd-a5c6-c48ee4dea72b\") " pod="metallb-system/speaker-bbthf" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.165297 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zwd4\" (UniqueName: \"kubernetes.io/projected/73488a2d-521a-4ccd-a9ea-aa905b51e302-kube-api-access-2zwd4\") pod \"controller-69bbfbf88f-n9vxs\" (UID: \"73488a2d-521a-4ccd-a9ea-aa905b51e302\") " pod="metallb-system/controller-69bbfbf88f-n9vxs" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.532161 4874 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/feb8be07-358f-49c3-a27c-53054e353a5d-metrics-certs\") pod \"frr-k8s-4xgq6\" (UID: \"feb8be07-358f-49c3-a27c-53054e353a5d\") " pod="metallb-system/frr-k8s-4xgq6" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.534420 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/abf374ec-8d79-48ac-ac9b-9cf5c81d0adf-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-6pq7d\" (UID: \"abf374ec-8d79-48ac-ac9b-9cf5c81d0adf\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-6pq7d" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.537030 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/feb8be07-358f-49c3-a27c-53054e353a5d-metrics-certs\") pod \"frr-k8s-4xgq6\" (UID: \"feb8be07-358f-49c3-a27c-53054e353a5d\") " pod="metallb-system/frr-k8s-4xgq6" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.539560 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/abf374ec-8d79-48ac-ac9b-9cf5c81d0adf-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-6pq7d\" (UID: \"abf374ec-8d79-48ac-ac9b-9cf5c81d0adf\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-6pq7d" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.636467 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1b81504e-be8e-4fbd-a5c6-c48ee4dea72b-memberlist\") pod \"speaker-bbthf\" (UID: \"1b81504e-be8e-4fbd-a5c6-c48ee4dea72b\") " pod="metallb-system/speaker-bbthf" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.636572 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/1b81504e-be8e-4fbd-a5c6-c48ee4dea72b-metrics-certs\") pod \"speaker-bbthf\" (UID: \"1b81504e-be8e-4fbd-a5c6-c48ee4dea72b\") " pod="metallb-system/speaker-bbthf" Feb 17 16:19:16 crc kubenswrapper[4874]: E0217 16:19:16.636673 4874 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 17 16:19:16 crc kubenswrapper[4874]: E0217 16:19:16.636755 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b81504e-be8e-4fbd-a5c6-c48ee4dea72b-memberlist podName:1b81504e-be8e-4fbd-a5c6-c48ee4dea72b nodeName:}" failed. No retries permitted until 2026-02-17 16:19:17.636732193 +0000 UTC m=+967.931120774 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/1b81504e-be8e-4fbd-a5c6-c48ee4dea72b-memberlist") pod "speaker-bbthf" (UID: "1b81504e-be8e-4fbd-a5c6-c48ee4dea72b") : secret "metallb-memberlist" not found Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.640045 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1b81504e-be8e-4fbd-a5c6-c48ee4dea72b-metrics-certs\") pod \"speaker-bbthf\" (UID: \"1b81504e-be8e-4fbd-a5c6-c48ee4dea72b\") " pod="metallb-system/speaker-bbthf" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.744895 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-4xgq6" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.754357 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-6pq7d" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.804861 4874 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.811702 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/73488a2d-521a-4ccd-a9ea-aa905b51e302-metrics-certs\") pod \"controller-69bbfbf88f-n9vxs\" (UID: \"73488a2d-521a-4ccd-a9ea-aa905b51e302\") " pod="metallb-system/controller-69bbfbf88f-n9vxs" Feb 17 16:19:16 crc kubenswrapper[4874]: I0217 16:19:16.851432 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-n9vxs" Feb 17 16:19:17 crc kubenswrapper[4874]: I0217 16:19:17.287267 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-6pq7d"] Feb 17 16:19:17 crc kubenswrapper[4874]: I0217 16:19:17.388677 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-n9vxs"] Feb 17 16:19:17 crc kubenswrapper[4874]: I0217 16:19:17.658066 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1b81504e-be8e-4fbd-a5c6-c48ee4dea72b-memberlist\") pod \"speaker-bbthf\" (UID: \"1b81504e-be8e-4fbd-a5c6-c48ee4dea72b\") " pod="metallb-system/speaker-bbthf" Feb 17 16:19:17 crc kubenswrapper[4874]: I0217 16:19:17.663065 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1b81504e-be8e-4fbd-a5c6-c48ee4dea72b-memberlist\") pod \"speaker-bbthf\" (UID: \"1b81504e-be8e-4fbd-a5c6-c48ee4dea72b\") " pod="metallb-system/speaker-bbthf" Feb 17 16:19:17 crc kubenswrapper[4874]: I0217 16:19:17.728527 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-bbthf" Feb 17 16:19:17 crc kubenswrapper[4874]: W0217 16:19:17.753877 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b81504e_be8e_4fbd_a5c6_c48ee4dea72b.slice/crio-ab340337d25fe615c6229b53a1d3f8df71f90fc62bd92888dbaa7559bd29f469 WatchSource:0}: Error finding container ab340337d25fe615c6229b53a1d3f8df71f90fc62bd92888dbaa7559bd29f469: Status 404 returned error can't find the container with id ab340337d25fe615c6229b53a1d3f8df71f90fc62bd92888dbaa7559bd29f469 Feb 17 16:19:17 crc kubenswrapper[4874]: I0217 16:19:17.907850 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-6pq7d" event={"ID":"abf374ec-8d79-48ac-ac9b-9cf5c81d0adf","Type":"ContainerStarted","Data":"84b49a69d8ec1599daf8136ad5b1eaa357ab0f86d5258460479b2b2be9b3c613"} Feb 17 16:19:17 crc kubenswrapper[4874]: I0217 16:19:17.910054 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4xgq6" event={"ID":"feb8be07-358f-49c3-a27c-53054e353a5d","Type":"ContainerStarted","Data":"1eaece5c151f9b0184e3a799e3e1632ea609b3cf3ed7f3258312687908787e80"} Feb 17 16:19:17 crc kubenswrapper[4874]: I0217 16:19:17.911150 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-bbthf" event={"ID":"1b81504e-be8e-4fbd-a5c6-c48ee4dea72b","Type":"ContainerStarted","Data":"ab340337d25fe615c6229b53a1d3f8df71f90fc62bd92888dbaa7559bd29f469"} Feb 17 16:19:17 crc kubenswrapper[4874]: I0217 16:19:17.912793 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-n9vxs" event={"ID":"73488a2d-521a-4ccd-a9ea-aa905b51e302","Type":"ContainerStarted","Data":"694e31c7f1f3903a86d32e351a3d9f893afe895f87fc99bc755024e824781e2f"} Feb 17 16:19:17 crc kubenswrapper[4874]: I0217 16:19:17.912820 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/controller-69bbfbf88f-n9vxs" event={"ID":"73488a2d-521a-4ccd-a9ea-aa905b51e302","Type":"ContainerStarted","Data":"1375b545c748b57a978bc5e849c74cf35c13e9ab01bd107c8bb7fda243cdb5bd"} Feb 17 16:19:17 crc kubenswrapper[4874]: I0217 16:19:17.912857 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-n9vxs" event={"ID":"73488a2d-521a-4ccd-a9ea-aa905b51e302","Type":"ContainerStarted","Data":"3361679c1d4e2935a288ba9c574fa259d0eff3844fa05c4f68d9691e87b8f0d6"} Feb 17 16:19:17 crc kubenswrapper[4874]: I0217 16:19:17.914035 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-n9vxs" Feb 17 16:19:17 crc kubenswrapper[4874]: I0217 16:19:17.935437 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-n9vxs" podStartSLOduration=2.935415928 podStartE2EDuration="2.935415928s" podCreationTimestamp="2026-02-17 16:19:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:19:17.930104976 +0000 UTC m=+968.224493537" watchObservedRunningTime="2026-02-17 16:19:17.935415928 +0000 UTC m=+968.229804489" Feb 17 16:19:18 crc kubenswrapper[4874]: I0217 16:19:18.926644 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-bbthf" event={"ID":"1b81504e-be8e-4fbd-a5c6-c48ee4dea72b","Type":"ContainerStarted","Data":"1ffc02bf10d4fcc0c474b3cbf0428a7a25f60d7a7748886fcd0f1a52d3f8b039"} Feb 17 16:19:18 crc kubenswrapper[4874]: I0217 16:19:18.927019 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-bbthf" event={"ID":"1b81504e-be8e-4fbd-a5c6-c48ee4dea72b","Type":"ContainerStarted","Data":"9605cc216e21dadfd3e11cfda8677f48631b4e9cbe824551ca38eed8954fec05"} Feb 17 16:19:18 crc kubenswrapper[4874]: I0217 16:19:18.945744 4874 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="metallb-system/speaker-bbthf" podStartSLOduration=3.945726165 podStartE2EDuration="3.945726165s" podCreationTimestamp="2026-02-17 16:19:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:19:18.942198547 +0000 UTC m=+969.236587108" watchObservedRunningTime="2026-02-17 16:19:18.945726165 +0000 UTC m=+969.240114716" Feb 17 16:19:19 crc kubenswrapper[4874]: I0217 16:19:19.934623 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-bbthf" Feb 17 16:19:24 crc kubenswrapper[4874]: I0217 16:19:24.979039 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-6pq7d" event={"ID":"abf374ec-8d79-48ac-ac9b-9cf5c81d0adf","Type":"ContainerStarted","Data":"6bd9b909b7fc4f5a07dd80c5e58f2688ee191b17ecfffdb49894cbdf4f6d86f4"} Feb 17 16:19:24 crc kubenswrapper[4874]: I0217 16:19:24.980058 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-6pq7d" Feb 17 16:19:24 crc kubenswrapper[4874]: I0217 16:19:24.981440 4874 generic.go:334] "Generic (PLEG): container finished" podID="feb8be07-358f-49c3-a27c-53054e353a5d" containerID="0684194aaa307a59f0bcc28fb623af2fb601f2f644a3f98ed3e1a4e2293adc8e" exitCode=0 Feb 17 16:19:24 crc kubenswrapper[4874]: I0217 16:19:24.981490 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4xgq6" event={"ID":"feb8be07-358f-49c3-a27c-53054e353a5d","Type":"ContainerDied","Data":"0684194aaa307a59f0bcc28fb623af2fb601f2f644a3f98ed3e1a4e2293adc8e"} Feb 17 16:19:25 crc kubenswrapper[4874]: I0217 16:19:25.003825 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-6pq7d" podStartSLOduration=2.632112769 podStartE2EDuration="10.003805484s" podCreationTimestamp="2026-02-17 16:19:15 
+0000 UTC" firstStartedPulling="2026-02-17 16:19:17.305229551 +0000 UTC m=+967.599618112" lastFinishedPulling="2026-02-17 16:19:24.676922266 +0000 UTC m=+974.971310827" observedRunningTime="2026-02-17 16:19:24.993929898 +0000 UTC m=+975.288318459" watchObservedRunningTime="2026-02-17 16:19:25.003805484 +0000 UTC m=+975.298194045" Feb 17 16:19:25 crc kubenswrapper[4874]: I0217 16:19:25.990864 4874 generic.go:334] "Generic (PLEG): container finished" podID="feb8be07-358f-49c3-a27c-53054e353a5d" containerID="ee74520dfa9cabb39333a6f90a1959f1a997ebd7e2294bbb8e94ee82ba0cd43b" exitCode=0 Feb 17 16:19:25 crc kubenswrapper[4874]: I0217 16:19:25.990928 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4xgq6" event={"ID":"feb8be07-358f-49c3-a27c-53054e353a5d","Type":"ContainerDied","Data":"ee74520dfa9cabb39333a6f90a1959f1a997ebd7e2294bbb8e94ee82ba0cd43b"} Feb 17 16:19:27 crc kubenswrapper[4874]: I0217 16:19:27.011793 4874 generic.go:334] "Generic (PLEG): container finished" podID="feb8be07-358f-49c3-a27c-53054e353a5d" containerID="f26afb84f79c67a87cef25c7d1a0d9894ae06538f854807fb95c399b8ba2c0b0" exitCode=0 Feb 17 16:19:27 crc kubenswrapper[4874]: I0217 16:19:27.011864 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4xgq6" event={"ID":"feb8be07-358f-49c3-a27c-53054e353a5d","Type":"ContainerDied","Data":"f26afb84f79c67a87cef25c7d1a0d9894ae06538f854807fb95c399b8ba2c0b0"} Feb 17 16:19:27 crc kubenswrapper[4874]: I0217 16:19:27.724988 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:19:27 crc kubenswrapper[4874]: I0217 16:19:27.725296 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" 
podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:19:27 crc kubenswrapper[4874]: I0217 16:19:27.739677 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-bbthf" Feb 17 16:19:28 crc kubenswrapper[4874]: I0217 16:19:28.029152 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4xgq6" event={"ID":"feb8be07-358f-49c3-a27c-53054e353a5d","Type":"ContainerStarted","Data":"9ddb82f12db8938cbbe1bad20860b2269a979728f3f5456786f57bde454171b7"} Feb 17 16:19:28 crc kubenswrapper[4874]: I0217 16:19:28.029206 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4xgq6" event={"ID":"feb8be07-358f-49c3-a27c-53054e353a5d","Type":"ContainerStarted","Data":"c782f44723afe31354098e95e1d6f3aa9ae75c2677d7d3161364d6faa140fc78"} Feb 17 16:19:28 crc kubenswrapper[4874]: I0217 16:19:28.029221 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4xgq6" event={"ID":"feb8be07-358f-49c3-a27c-53054e353a5d","Type":"ContainerStarted","Data":"4aa708dd2bcf880db81f9e065e2b3907d250553d5c79386e5e1c6905c93a2ba1"} Feb 17 16:19:28 crc kubenswrapper[4874]: I0217 16:19:28.029231 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4xgq6" event={"ID":"feb8be07-358f-49c3-a27c-53054e353a5d","Type":"ContainerStarted","Data":"7fe5ea6eef2f44f455cca837ccea63f18f6ab32c891ef5b3395ed7eadbac2f32"} Feb 17 16:19:28 crc kubenswrapper[4874]: I0217 16:19:28.029245 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4xgq6" event={"ID":"feb8be07-358f-49c3-a27c-53054e353a5d","Type":"ContainerStarted","Data":"efd36116a0d7c17a55b1fe55fa15f9fe3d3da1f16f9fce46d7ed65f9c815d6c5"} Feb 17 16:19:29 crc kubenswrapper[4874]: I0217 16:19:29.041252 4874 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="metallb-system/frr-k8s-4xgq6" event={"ID":"feb8be07-358f-49c3-a27c-53054e353a5d","Type":"ContainerStarted","Data":"15ea69b30b632e38a06e0525de56ddb32974a2cd05d245fafcfa981e7f4f4ff2"} Feb 17 16:19:29 crc kubenswrapper[4874]: I0217 16:19:29.041479 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-4xgq6" Feb 17 16:19:29 crc kubenswrapper[4874]: I0217 16:19:29.079136 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-4xgq6" podStartSLOduration=6.41327235 podStartE2EDuration="14.079110207s" podCreationTimestamp="2026-02-17 16:19:15 +0000 UTC" firstStartedPulling="2026-02-17 16:19:16.994490376 +0000 UTC m=+967.288878937" lastFinishedPulling="2026-02-17 16:19:24.660328233 +0000 UTC m=+974.954716794" observedRunningTime="2026-02-17 16:19:29.070752929 +0000 UTC m=+979.365141500" watchObservedRunningTime="2026-02-17 16:19:29.079110207 +0000 UTC m=+979.373498808" Feb 17 16:19:30 crc kubenswrapper[4874]: I0217 16:19:30.444957 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-vpmrw"] Feb 17 16:19:30 crc kubenswrapper[4874]: I0217 16:19:30.446437 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vpmrw" Feb 17 16:19:30 crc kubenswrapper[4874]: I0217 16:19:30.449014 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 17 16:19:30 crc kubenswrapper[4874]: I0217 16:19:30.449125 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-9stnr" Feb 17 16:19:30 crc kubenswrapper[4874]: I0217 16:19:30.450053 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 17 16:19:30 crc kubenswrapper[4874]: I0217 16:19:30.493494 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tmj5\" (UniqueName: \"kubernetes.io/projected/7c434872-a209-4c1e-9803-886ee7d3173b-kube-api-access-8tmj5\") pod \"openstack-operator-index-vpmrw\" (UID: \"7c434872-a209-4c1e-9803-886ee7d3173b\") " pod="openstack-operators/openstack-operator-index-vpmrw" Feb 17 16:19:30 crc kubenswrapper[4874]: I0217 16:19:30.512730 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vpmrw"] Feb 17 16:19:30 crc kubenswrapper[4874]: I0217 16:19:30.595248 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tmj5\" (UniqueName: \"kubernetes.io/projected/7c434872-a209-4c1e-9803-886ee7d3173b-kube-api-access-8tmj5\") pod \"openstack-operator-index-vpmrw\" (UID: \"7c434872-a209-4c1e-9803-886ee7d3173b\") " pod="openstack-operators/openstack-operator-index-vpmrw" Feb 17 16:19:30 crc kubenswrapper[4874]: I0217 16:19:30.612878 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tmj5\" (UniqueName: \"kubernetes.io/projected/7c434872-a209-4c1e-9803-886ee7d3173b-kube-api-access-8tmj5\") pod \"openstack-operator-index-vpmrw\" (UID: 
\"7c434872-a209-4c1e-9803-886ee7d3173b\") " pod="openstack-operators/openstack-operator-index-vpmrw" Feb 17 16:19:30 crc kubenswrapper[4874]: I0217 16:19:30.817825 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vpmrw" Feb 17 16:19:31 crc kubenswrapper[4874]: I0217 16:19:31.279545 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vpmrw"] Feb 17 16:19:31 crc kubenswrapper[4874]: W0217 16:19:31.283113 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c434872_a209_4c1e_9803_886ee7d3173b.slice/crio-2ea43479b420b9ab3f354354a0b06aabac4d25a8b0c5c5503b3816dc555f7104 WatchSource:0}: Error finding container 2ea43479b420b9ab3f354354a0b06aabac4d25a8b0c5c5503b3816dc555f7104: Status 404 returned error can't find the container with id 2ea43479b420b9ab3f354354a0b06aabac4d25a8b0c5c5503b3816dc555f7104 Feb 17 16:19:31 crc kubenswrapper[4874]: I0217 16:19:31.746348 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-4xgq6" Feb 17 16:19:31 crc kubenswrapper[4874]: I0217 16:19:31.811027 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-4xgq6" Feb 17 16:19:32 crc kubenswrapper[4874]: I0217 16:19:32.069897 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vpmrw" event={"ID":"7c434872-a209-4c1e-9803-886ee7d3173b","Type":"ContainerStarted","Data":"2ea43479b420b9ab3f354354a0b06aabac4d25a8b0c5c5503b3816dc555f7104"} Feb 17 16:19:33 crc kubenswrapper[4874]: I0217 16:19:33.816754 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vpmrw"] Feb 17 16:19:34 crc kubenswrapper[4874]: I0217 16:19:34.421569 4874 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/openstack-operator-index-j424d"] Feb 17 16:19:34 crc kubenswrapper[4874]: I0217 16:19:34.422579 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-j424d" Feb 17 16:19:34 crc kubenswrapper[4874]: I0217 16:19:34.462138 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-j424d"] Feb 17 16:19:34 crc kubenswrapper[4874]: I0217 16:19:34.482967 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsvmr\" (UniqueName: \"kubernetes.io/projected/0c982d3a-d8b0-44d9-82c2-d031d9e02af9-kube-api-access-bsvmr\") pod \"openstack-operator-index-j424d\" (UID: \"0c982d3a-d8b0-44d9-82c2-d031d9e02af9\") " pod="openstack-operators/openstack-operator-index-j424d" Feb 17 16:19:34 crc kubenswrapper[4874]: I0217 16:19:34.585461 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsvmr\" (UniqueName: \"kubernetes.io/projected/0c982d3a-d8b0-44d9-82c2-d031d9e02af9-kube-api-access-bsvmr\") pod \"openstack-operator-index-j424d\" (UID: \"0c982d3a-d8b0-44d9-82c2-d031d9e02af9\") " pod="openstack-operators/openstack-operator-index-j424d" Feb 17 16:19:34 crc kubenswrapper[4874]: I0217 16:19:34.602486 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsvmr\" (UniqueName: \"kubernetes.io/projected/0c982d3a-d8b0-44d9-82c2-d031d9e02af9-kube-api-access-bsvmr\") pod \"openstack-operator-index-j424d\" (UID: \"0c982d3a-d8b0-44d9-82c2-d031d9e02af9\") " pod="openstack-operators/openstack-operator-index-j424d" Feb 17 16:19:34 crc kubenswrapper[4874]: I0217 16:19:34.743841 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-j424d" Feb 17 16:19:35 crc kubenswrapper[4874]: I0217 16:19:35.103319 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vpmrw" event={"ID":"7c434872-a209-4c1e-9803-886ee7d3173b","Type":"ContainerStarted","Data":"aebb3af42bd13534f31b33d02de4e44b3ceb8094d8300cfa063f6d44e20ffdbb"} Feb 17 16:19:35 crc kubenswrapper[4874]: I0217 16:19:35.103564 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-vpmrw" podUID="7c434872-a209-4c1e-9803-886ee7d3173b" containerName="registry-server" containerID="cri-o://aebb3af42bd13534f31b33d02de4e44b3ceb8094d8300cfa063f6d44e20ffdbb" gracePeriod=2 Feb 17 16:19:35 crc kubenswrapper[4874]: I0217 16:19:35.120331 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-vpmrw" podStartSLOduration=1.4942542589999999 podStartE2EDuration="5.120317072s" podCreationTimestamp="2026-02-17 16:19:30 +0000 UTC" firstStartedPulling="2026-02-17 16:19:31.286614546 +0000 UTC m=+981.581003117" lastFinishedPulling="2026-02-17 16:19:34.912677369 +0000 UTC m=+985.207065930" observedRunningTime="2026-02-17 16:19:35.116705202 +0000 UTC m=+985.411093763" watchObservedRunningTime="2026-02-17 16:19:35.120317072 +0000 UTC m=+985.414705633" Feb 17 16:19:35 crc kubenswrapper[4874]: I0217 16:19:35.315930 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-j424d"] Feb 17 16:19:35 crc kubenswrapper[4874]: W0217 16:19:35.338237 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c982d3a_d8b0_44d9_82c2_d031d9e02af9.slice/crio-28baaf4402ff8cee62fbfe75a9475212a6261c0e690eb1e286cfac97fa6d0fdb WatchSource:0}: Error finding container 
28baaf4402ff8cee62fbfe75a9475212a6261c0e690eb1e286cfac97fa6d0fdb: Status 404 returned error can't find the container with id 28baaf4402ff8cee62fbfe75a9475212a6261c0e690eb1e286cfac97fa6d0fdb Feb 17 16:19:35 crc kubenswrapper[4874]: I0217 16:19:35.458816 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vpmrw" Feb 17 16:19:35 crc kubenswrapper[4874]: I0217 16:19:35.607258 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tmj5\" (UniqueName: \"kubernetes.io/projected/7c434872-a209-4c1e-9803-886ee7d3173b-kube-api-access-8tmj5\") pod \"7c434872-a209-4c1e-9803-886ee7d3173b\" (UID: \"7c434872-a209-4c1e-9803-886ee7d3173b\") " Feb 17 16:19:35 crc kubenswrapper[4874]: I0217 16:19:35.613196 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c434872-a209-4c1e-9803-886ee7d3173b-kube-api-access-8tmj5" (OuterVolumeSpecName: "kube-api-access-8tmj5") pod "7c434872-a209-4c1e-9803-886ee7d3173b" (UID: "7c434872-a209-4c1e-9803-886ee7d3173b"). InnerVolumeSpecName "kube-api-access-8tmj5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:19:35 crc kubenswrapper[4874]: I0217 16:19:35.709545 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tmj5\" (UniqueName: \"kubernetes.io/projected/7c434872-a209-4c1e-9803-886ee7d3173b-kube-api-access-8tmj5\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:36 crc kubenswrapper[4874]: I0217 16:19:36.118764 4874 generic.go:334] "Generic (PLEG): container finished" podID="7c434872-a209-4c1e-9803-886ee7d3173b" containerID="aebb3af42bd13534f31b33d02de4e44b3ceb8094d8300cfa063f6d44e20ffdbb" exitCode=0 Feb 17 16:19:36 crc kubenswrapper[4874]: I0217 16:19:36.118808 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vpmrw" event={"ID":"7c434872-a209-4c1e-9803-886ee7d3173b","Type":"ContainerDied","Data":"aebb3af42bd13534f31b33d02de4e44b3ceb8094d8300cfa063f6d44e20ffdbb"} Feb 17 16:19:36 crc kubenswrapper[4874]: I0217 16:19:36.118852 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vpmrw" event={"ID":"7c434872-a209-4c1e-9803-886ee7d3173b","Type":"ContainerDied","Data":"2ea43479b420b9ab3f354354a0b06aabac4d25a8b0c5c5503b3816dc555f7104"} Feb 17 16:19:36 crc kubenswrapper[4874]: I0217 16:19:36.118882 4874 scope.go:117] "RemoveContainer" containerID="aebb3af42bd13534f31b33d02de4e44b3ceb8094d8300cfa063f6d44e20ffdbb" Feb 17 16:19:36 crc kubenswrapper[4874]: I0217 16:19:36.119563 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vpmrw" Feb 17 16:19:36 crc kubenswrapper[4874]: I0217 16:19:36.121322 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-j424d" event={"ID":"0c982d3a-d8b0-44d9-82c2-d031d9e02af9","Type":"ContainerStarted","Data":"ced182749b62faf22712fb1bd45ce11a55eca0564c0b0d477c69eb917f4b0f52"} Feb 17 16:19:36 crc kubenswrapper[4874]: I0217 16:19:36.121353 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-j424d" event={"ID":"0c982d3a-d8b0-44d9-82c2-d031d9e02af9","Type":"ContainerStarted","Data":"28baaf4402ff8cee62fbfe75a9475212a6261c0e690eb1e286cfac97fa6d0fdb"} Feb 17 16:19:36 crc kubenswrapper[4874]: I0217 16:19:36.149825 4874 scope.go:117] "RemoveContainer" containerID="aebb3af42bd13534f31b33d02de4e44b3ceb8094d8300cfa063f6d44e20ffdbb" Feb 17 16:19:36 crc kubenswrapper[4874]: E0217 16:19:36.153723 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aebb3af42bd13534f31b33d02de4e44b3ceb8094d8300cfa063f6d44e20ffdbb\": container with ID starting with aebb3af42bd13534f31b33d02de4e44b3ceb8094d8300cfa063f6d44e20ffdbb not found: ID does not exist" containerID="aebb3af42bd13534f31b33d02de4e44b3ceb8094d8300cfa063f6d44e20ffdbb" Feb 17 16:19:36 crc kubenswrapper[4874]: I0217 16:19:36.153754 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aebb3af42bd13534f31b33d02de4e44b3ceb8094d8300cfa063f6d44e20ffdbb"} err="failed to get container status \"aebb3af42bd13534f31b33d02de4e44b3ceb8094d8300cfa063f6d44e20ffdbb\": rpc error: code = NotFound desc = could not find container \"aebb3af42bd13534f31b33d02de4e44b3ceb8094d8300cfa063f6d44e20ffdbb\": container with ID starting with aebb3af42bd13534f31b33d02de4e44b3ceb8094d8300cfa063f6d44e20ffdbb not found: ID does not exist" Feb 17 16:19:36 crc 
kubenswrapper[4874]: I0217 16:19:36.156394 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-j424d" podStartSLOduration=2.076312852 podStartE2EDuration="2.156372323s" podCreationTimestamp="2026-02-17 16:19:34 +0000 UTC" firstStartedPulling="2026-02-17 16:19:35.343290546 +0000 UTC m=+985.637679107" lastFinishedPulling="2026-02-17 16:19:35.423350017 +0000 UTC m=+985.717738578" observedRunningTime="2026-02-17 16:19:36.1429732 +0000 UTC m=+986.437361761" watchObservedRunningTime="2026-02-17 16:19:36.156372323 +0000 UTC m=+986.450760884" Feb 17 16:19:36 crc kubenswrapper[4874]: I0217 16:19:36.170435 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vpmrw"] Feb 17 16:19:36 crc kubenswrapper[4874]: I0217 16:19:36.176692 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-vpmrw"] Feb 17 16:19:36 crc kubenswrapper[4874]: I0217 16:19:36.465971 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c434872-a209-4c1e-9803-886ee7d3173b" path="/var/lib/kubelet/pods/7c434872-a209-4c1e-9803-886ee7d3173b/volumes" Feb 17 16:19:36 crc kubenswrapper[4874]: I0217 16:19:36.763046 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-6pq7d" Feb 17 16:19:36 crc kubenswrapper[4874]: I0217 16:19:36.857050 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-n9vxs" Feb 17 16:19:44 crc kubenswrapper[4874]: I0217 16:19:44.744872 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-j424d" Feb 17 16:19:44 crc kubenswrapper[4874]: I0217 16:19:44.745519 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-j424d" Feb 17 16:19:44 
crc kubenswrapper[4874]: I0217 16:19:44.805006 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-j424d" Feb 17 16:19:45 crc kubenswrapper[4874]: I0217 16:19:45.225764 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-j424d" Feb 17 16:19:46 crc kubenswrapper[4874]: I0217 16:19:46.749278 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-4xgq6" Feb 17 16:19:51 crc kubenswrapper[4874]: I0217 16:19:51.597597 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933d456m"] Feb 17 16:19:51 crc kubenswrapper[4874]: E0217 16:19:51.598591 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c434872-a209-4c1e-9803-886ee7d3173b" containerName="registry-server" Feb 17 16:19:51 crc kubenswrapper[4874]: I0217 16:19:51.598610 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c434872-a209-4c1e-9803-886ee7d3173b" containerName="registry-server" Feb 17 16:19:51 crc kubenswrapper[4874]: I0217 16:19:51.598848 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c434872-a209-4c1e-9803-886ee7d3173b" containerName="registry-server" Feb 17 16:19:51 crc kubenswrapper[4874]: I0217 16:19:51.604142 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933d456m" Feb 17 16:19:51 crc kubenswrapper[4874]: I0217 16:19:51.606095 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933d456m"] Feb 17 16:19:51 crc kubenswrapper[4874]: I0217 16:19:51.606885 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-h4bwv" Feb 17 16:19:51 crc kubenswrapper[4874]: I0217 16:19:51.685183 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d8622c37-b6c8-4b87-a9b6-30e7ee12af20-bundle\") pod \"1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933d456m\" (UID: \"d8622c37-b6c8-4b87-a9b6-30e7ee12af20\") " pod="openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933d456m" Feb 17 16:19:51 crc kubenswrapper[4874]: I0217 16:19:51.685244 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d8622c37-b6c8-4b87-a9b6-30e7ee12af20-util\") pod \"1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933d456m\" (UID: \"d8622c37-b6c8-4b87-a9b6-30e7ee12af20\") " pod="openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933d456m" Feb 17 16:19:51 crc kubenswrapper[4874]: I0217 16:19:51.685322 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm7fk\" (UniqueName: \"kubernetes.io/projected/d8622c37-b6c8-4b87-a9b6-30e7ee12af20-kube-api-access-vm7fk\") pod \"1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933d456m\" (UID: \"d8622c37-b6c8-4b87-a9b6-30e7ee12af20\") " pod="openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933d456m" Feb 17 16:19:51 crc kubenswrapper[4874]: I0217 
16:19:51.786834 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d8622c37-b6c8-4b87-a9b6-30e7ee12af20-bundle\") pod \"1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933d456m\" (UID: \"d8622c37-b6c8-4b87-a9b6-30e7ee12af20\") " pod="openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933d456m" Feb 17 16:19:51 crc kubenswrapper[4874]: I0217 16:19:51.786900 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d8622c37-b6c8-4b87-a9b6-30e7ee12af20-util\") pod \"1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933d456m\" (UID: \"d8622c37-b6c8-4b87-a9b6-30e7ee12af20\") " pod="openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933d456m" Feb 17 16:19:51 crc kubenswrapper[4874]: I0217 16:19:51.786991 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm7fk\" (UniqueName: \"kubernetes.io/projected/d8622c37-b6c8-4b87-a9b6-30e7ee12af20-kube-api-access-vm7fk\") pod \"1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933d456m\" (UID: \"d8622c37-b6c8-4b87-a9b6-30e7ee12af20\") " pod="openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933d456m" Feb 17 16:19:51 crc kubenswrapper[4874]: I0217 16:19:51.787682 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d8622c37-b6c8-4b87-a9b6-30e7ee12af20-bundle\") pod \"1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933d456m\" (UID: \"d8622c37-b6c8-4b87-a9b6-30e7ee12af20\") " pod="openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933d456m" Feb 17 16:19:51 crc kubenswrapper[4874]: I0217 16:19:51.787694 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/d8622c37-b6c8-4b87-a9b6-30e7ee12af20-util\") pod \"1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933d456m\" (UID: \"d8622c37-b6c8-4b87-a9b6-30e7ee12af20\") " pod="openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933d456m" Feb 17 16:19:51 crc kubenswrapper[4874]: I0217 16:19:51.806352 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm7fk\" (UniqueName: \"kubernetes.io/projected/d8622c37-b6c8-4b87-a9b6-30e7ee12af20-kube-api-access-vm7fk\") pod \"1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933d456m\" (UID: \"d8622c37-b6c8-4b87-a9b6-30e7ee12af20\") " pod="openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933d456m" Feb 17 16:19:51 crc kubenswrapper[4874]: I0217 16:19:51.929556 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933d456m" Feb 17 16:19:52 crc kubenswrapper[4874]: I0217 16:19:52.401838 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933d456m"] Feb 17 16:19:52 crc kubenswrapper[4874]: W0217 16:19:52.404670 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8622c37_b6c8_4b87_a9b6_30e7ee12af20.slice/crio-f4abbec6841edcf73936133b131cd841108326e9c02c28d972876776aa30b7cb WatchSource:0}: Error finding container f4abbec6841edcf73936133b131cd841108326e9c02c28d972876776aa30b7cb: Status 404 returned error can't find the container with id f4abbec6841edcf73936133b131cd841108326e9c02c28d972876776aa30b7cb Feb 17 16:19:53 crc kubenswrapper[4874]: I0217 16:19:53.270540 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933d456m" 
event={"ID":"d8622c37-b6c8-4b87-a9b6-30e7ee12af20","Type":"ContainerStarted","Data":"f4abbec6841edcf73936133b131cd841108326e9c02c28d972876776aa30b7cb"} Feb 17 16:19:54 crc kubenswrapper[4874]: I0217 16:19:54.285015 4874 generic.go:334] "Generic (PLEG): container finished" podID="d8622c37-b6c8-4b87-a9b6-30e7ee12af20" containerID="532e50d507bf3e808ae4a1bbf99d3954046fdd11f05e7a01e16502fa1a39c92c" exitCode=0 Feb 17 16:19:54 crc kubenswrapper[4874]: I0217 16:19:54.285128 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933d456m" event={"ID":"d8622c37-b6c8-4b87-a9b6-30e7ee12af20","Type":"ContainerDied","Data":"532e50d507bf3e808ae4a1bbf99d3954046fdd11f05e7a01e16502fa1a39c92c"} Feb 17 16:19:55 crc kubenswrapper[4874]: I0217 16:19:55.297934 4874 generic.go:334] "Generic (PLEG): container finished" podID="d8622c37-b6c8-4b87-a9b6-30e7ee12af20" containerID="cb3dac630d03a713cfe74016b61236669aa50bf2a4b1746320138b30ee4d29fb" exitCode=0 Feb 17 16:19:55 crc kubenswrapper[4874]: I0217 16:19:55.298057 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933d456m" event={"ID":"d8622c37-b6c8-4b87-a9b6-30e7ee12af20","Type":"ContainerDied","Data":"cb3dac630d03a713cfe74016b61236669aa50bf2a4b1746320138b30ee4d29fb"} Feb 17 16:19:56 crc kubenswrapper[4874]: I0217 16:19:56.311351 4874 generic.go:334] "Generic (PLEG): container finished" podID="d8622c37-b6c8-4b87-a9b6-30e7ee12af20" containerID="f29ba50dc347c7790d23565a9fe4f22cb0fe479d295d82e2d8d09bb1ab921717" exitCode=0 Feb 17 16:19:56 crc kubenswrapper[4874]: I0217 16:19:56.311419 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933d456m" event={"ID":"d8622c37-b6c8-4b87-a9b6-30e7ee12af20","Type":"ContainerDied","Data":"f29ba50dc347c7790d23565a9fe4f22cb0fe479d295d82e2d8d09bb1ab921717"} Feb 17 
16:19:57 crc kubenswrapper[4874]: I0217 16:19:57.674035 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933d456m" Feb 17 16:19:57 crc kubenswrapper[4874]: I0217 16:19:57.725605 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:19:57 crc kubenswrapper[4874]: I0217 16:19:57.725652 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:19:57 crc kubenswrapper[4874]: I0217 16:19:57.793030 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d8622c37-b6c8-4b87-a9b6-30e7ee12af20-util\") pod \"d8622c37-b6c8-4b87-a9b6-30e7ee12af20\" (UID: \"d8622c37-b6c8-4b87-a9b6-30e7ee12af20\") " Feb 17 16:19:57 crc kubenswrapper[4874]: I0217 16:19:57.793450 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d8622c37-b6c8-4b87-a9b6-30e7ee12af20-bundle\") pod \"d8622c37-b6c8-4b87-a9b6-30e7ee12af20\" (UID: \"d8622c37-b6c8-4b87-a9b6-30e7ee12af20\") " Feb 17 16:19:57 crc kubenswrapper[4874]: I0217 16:19:57.793683 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm7fk\" (UniqueName: \"kubernetes.io/projected/d8622c37-b6c8-4b87-a9b6-30e7ee12af20-kube-api-access-vm7fk\") pod \"d8622c37-b6c8-4b87-a9b6-30e7ee12af20\" (UID: 
\"d8622c37-b6c8-4b87-a9b6-30e7ee12af20\") " Feb 17 16:19:57 crc kubenswrapper[4874]: I0217 16:19:57.794102 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8622c37-b6c8-4b87-a9b6-30e7ee12af20-bundle" (OuterVolumeSpecName: "bundle") pod "d8622c37-b6c8-4b87-a9b6-30e7ee12af20" (UID: "d8622c37-b6c8-4b87-a9b6-30e7ee12af20"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:19:57 crc kubenswrapper[4874]: I0217 16:19:57.794326 4874 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d8622c37-b6c8-4b87-a9b6-30e7ee12af20-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:57 crc kubenswrapper[4874]: I0217 16:19:57.798913 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8622c37-b6c8-4b87-a9b6-30e7ee12af20-kube-api-access-vm7fk" (OuterVolumeSpecName: "kube-api-access-vm7fk") pod "d8622c37-b6c8-4b87-a9b6-30e7ee12af20" (UID: "d8622c37-b6c8-4b87-a9b6-30e7ee12af20"). InnerVolumeSpecName "kube-api-access-vm7fk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:19:57 crc kubenswrapper[4874]: I0217 16:19:57.817406 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8622c37-b6c8-4b87-a9b6-30e7ee12af20-util" (OuterVolumeSpecName: "util") pod "d8622c37-b6c8-4b87-a9b6-30e7ee12af20" (UID: "d8622c37-b6c8-4b87-a9b6-30e7ee12af20"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:19:57 crc kubenswrapper[4874]: I0217 16:19:57.896200 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vm7fk\" (UniqueName: \"kubernetes.io/projected/d8622c37-b6c8-4b87-a9b6-30e7ee12af20-kube-api-access-vm7fk\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:57 crc kubenswrapper[4874]: I0217 16:19:57.896251 4874 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d8622c37-b6c8-4b87-a9b6-30e7ee12af20-util\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:58 crc kubenswrapper[4874]: I0217 16:19:58.329840 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933d456m" event={"ID":"d8622c37-b6c8-4b87-a9b6-30e7ee12af20","Type":"ContainerDied","Data":"f4abbec6841edcf73936133b131cd841108326e9c02c28d972876776aa30b7cb"} Feb 17 16:19:58 crc kubenswrapper[4874]: I0217 16:19:58.329877 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4abbec6841edcf73936133b131cd841108326e9c02c28d972876776aa30b7cb" Feb 17 16:19:58 crc kubenswrapper[4874]: I0217 16:19:58.329945 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933d456m" Feb 17 16:20:04 crc kubenswrapper[4874]: I0217 16:20:04.001071 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5b4d8b9dd-d9wb8"] Feb 17 16:20:04 crc kubenswrapper[4874]: E0217 16:20:04.001984 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8622c37-b6c8-4b87-a9b6-30e7ee12af20" containerName="util" Feb 17 16:20:04 crc kubenswrapper[4874]: I0217 16:20:04.002000 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8622c37-b6c8-4b87-a9b6-30e7ee12af20" containerName="util" Feb 17 16:20:04 crc kubenswrapper[4874]: E0217 16:20:04.002032 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8622c37-b6c8-4b87-a9b6-30e7ee12af20" containerName="extract" Feb 17 16:20:04 crc kubenswrapper[4874]: I0217 16:20:04.002041 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8622c37-b6c8-4b87-a9b6-30e7ee12af20" containerName="extract" Feb 17 16:20:04 crc kubenswrapper[4874]: E0217 16:20:04.002062 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8622c37-b6c8-4b87-a9b6-30e7ee12af20" containerName="pull" Feb 17 16:20:04 crc kubenswrapper[4874]: I0217 16:20:04.002072 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8622c37-b6c8-4b87-a9b6-30e7ee12af20" containerName="pull" Feb 17 16:20:04 crc kubenswrapper[4874]: I0217 16:20:04.002353 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8622c37-b6c8-4b87-a9b6-30e7ee12af20" containerName="extract" Feb 17 16:20:04 crc kubenswrapper[4874]: I0217 16:20:04.003150 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5b4d8b9dd-d9wb8" Feb 17 16:20:04 crc kubenswrapper[4874]: I0217 16:20:04.006220 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-zqqr2" Feb 17 16:20:04 crc kubenswrapper[4874]: I0217 16:20:04.019795 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5b4d8b9dd-d9wb8"] Feb 17 16:20:04 crc kubenswrapper[4874]: I0217 16:20:04.048693 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4r7g\" (UniqueName: \"kubernetes.io/projected/c19a7a72-ad6e-499e-ba9e-2b58b8ca2241-kube-api-access-g4r7g\") pod \"openstack-operator-controller-init-5b4d8b9dd-d9wb8\" (UID: \"c19a7a72-ad6e-499e-ba9e-2b58b8ca2241\") " pod="openstack-operators/openstack-operator-controller-init-5b4d8b9dd-d9wb8" Feb 17 16:20:04 crc kubenswrapper[4874]: I0217 16:20:04.149620 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4r7g\" (UniqueName: \"kubernetes.io/projected/c19a7a72-ad6e-499e-ba9e-2b58b8ca2241-kube-api-access-g4r7g\") pod \"openstack-operator-controller-init-5b4d8b9dd-d9wb8\" (UID: \"c19a7a72-ad6e-499e-ba9e-2b58b8ca2241\") " pod="openstack-operators/openstack-operator-controller-init-5b4d8b9dd-d9wb8" Feb 17 16:20:04 crc kubenswrapper[4874]: I0217 16:20:04.167127 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4r7g\" (UniqueName: \"kubernetes.io/projected/c19a7a72-ad6e-499e-ba9e-2b58b8ca2241-kube-api-access-g4r7g\") pod \"openstack-operator-controller-init-5b4d8b9dd-d9wb8\" (UID: \"c19a7a72-ad6e-499e-ba9e-2b58b8ca2241\") " pod="openstack-operators/openstack-operator-controller-init-5b4d8b9dd-d9wb8" Feb 17 16:20:04 crc kubenswrapper[4874]: I0217 16:20:04.322995 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5b4d8b9dd-d9wb8" Feb 17 16:20:04 crc kubenswrapper[4874]: I0217 16:20:04.763113 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5b4d8b9dd-d9wb8"] Feb 17 16:20:05 crc kubenswrapper[4874]: I0217 16:20:05.441594 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5b4d8b9dd-d9wb8" event={"ID":"c19a7a72-ad6e-499e-ba9e-2b58b8ca2241","Type":"ContainerStarted","Data":"85de7b4166d6dbdff2f37b3d4da5fadce2a621217705027bbc9e754e2282e06b"} Feb 17 16:20:09 crc kubenswrapper[4874]: I0217 16:20:09.481800 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5b4d8b9dd-d9wb8" event={"ID":"c19a7a72-ad6e-499e-ba9e-2b58b8ca2241","Type":"ContainerStarted","Data":"a1e4f0b227ed0be6cfe8b4b7e46ecd18779c243c83abe7d32cc029283773f1f7"} Feb 17 16:20:09 crc kubenswrapper[4874]: I0217 16:20:09.482596 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5b4d8b9dd-d9wb8" Feb 17 16:20:09 crc kubenswrapper[4874]: I0217 16:20:09.534932 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5b4d8b9dd-d9wb8" podStartSLOduration=2.631183796 podStartE2EDuration="6.534901362s" podCreationTimestamp="2026-02-17 16:20:03 +0000 UTC" firstStartedPulling="2026-02-17 16:20:04.780030782 +0000 UTC m=+1015.074419363" lastFinishedPulling="2026-02-17 16:20:08.683748368 +0000 UTC m=+1018.978136929" observedRunningTime="2026-02-17 16:20:09.522287598 +0000 UTC m=+1019.816676229" watchObservedRunningTime="2026-02-17 16:20:09.534901362 +0000 UTC m=+1019.829289963" Feb 17 16:20:14 crc kubenswrapper[4874]: I0217 16:20:14.326204 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-controller-init-5b4d8b9dd-d9wb8" Feb 17 16:20:27 crc kubenswrapper[4874]: I0217 16:20:27.724604 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:20:27 crc kubenswrapper[4874]: I0217 16:20:27.725199 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:20:27 crc kubenswrapper[4874]: I0217 16:20:27.725248 4874 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" Feb 17 16:20:27 crc kubenswrapper[4874]: I0217 16:20:27.725967 4874 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"14d9fa4df39b7f49ac05cd101e3f5c3bf6c474d638afdc060d235e7fd1103377"} pod="openshift-machine-config-operator/machine-config-daemon-cccdg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:20:27 crc kubenswrapper[4874]: I0217 16:20:27.726157 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" containerID="cri-o://14d9fa4df39b7f49ac05cd101e3f5c3bf6c474d638afdc060d235e7fd1103377" gracePeriod=600 Feb 17 16:20:28 crc kubenswrapper[4874]: I0217 16:20:28.666970 4874 generic.go:334] "Generic (PLEG): container finished" 
podID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerID="14d9fa4df39b7f49ac05cd101e3f5c3bf6c474d638afdc060d235e7fd1103377" exitCode=0 Feb 17 16:20:28 crc kubenswrapper[4874]: I0217 16:20:28.667025 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerDied","Data":"14d9fa4df39b7f49ac05cd101e3f5c3bf6c474d638afdc060d235e7fd1103377"} Feb 17 16:20:28 crc kubenswrapper[4874]: I0217 16:20:28.667055 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerStarted","Data":"e095e9b56aac8ea173cabbfa2b9b7d5f89bdf527eea23b78b0ba4ca194b5eb6c"} Feb 17 16:20:28 crc kubenswrapper[4874]: I0217 16:20:28.667091 4874 scope.go:117] "RemoveContainer" containerID="5054786a168a12e52b8d968ed3ece839ff7c3185d6be0ffc79a31e785b1ebbdf" Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.512048 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-gxjgl"] Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.514152 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-gxjgl" Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.518975 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-xgkkx"] Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.519957 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xgkkx" Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.523539 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-wwsc2" Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.523902 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-dr5zm" Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.529125 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-gxjgl"] Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.603264 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-xgkkx"] Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.611560 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-jn9cr"] Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.612508 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-jn9cr" Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.617184 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt7q2\" (UniqueName: \"kubernetes.io/projected/db6537c6-cc88-4848-a428-ad573290cc02-kube-api-access-pt7q2\") pod \"cinder-operator-controller-manager-5d946d989d-xgkkx\" (UID: \"db6537c6-cc88-4848-a428-ad573290cc02\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xgkkx" Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.617305 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dst66\" (UniqueName: \"kubernetes.io/projected/c4c6b874-8781-4030-a651-54feaeed2634-kube-api-access-dst66\") pod \"barbican-operator-controller-manager-868647ff47-gxjgl\" (UID: \"c4c6b874-8781-4030-a651-54feaeed2634\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-gxjgl" Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.618094 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-6kdwc" Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.637175 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-jn9cr"] Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.667142 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-w7lcj"] Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.668180 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-w7lcj" Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.670573 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-mtp7x" Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.686199 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-w7lcj"] Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.697602 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-qtzmv"] Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.698724 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qtzmv" Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.710259 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-5j9d4" Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.723552 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9d48\" (UniqueName: \"kubernetes.io/projected/aa81f594-f3c2-43d6-ac9b-6a51e36e8d99-kube-api-access-m9d48\") pod \"glance-operator-controller-manager-77987464f4-w7lcj\" (UID: \"aa81f594-f3c2-43d6-ac9b-6a51e36e8d99\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-w7lcj" Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.723632 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt7q2\" (UniqueName: \"kubernetes.io/projected/db6537c6-cc88-4848-a428-ad573290cc02-kube-api-access-pt7q2\") pod \"cinder-operator-controller-manager-5d946d989d-xgkkx\" (UID: \"db6537c6-cc88-4848-a428-ad573290cc02\") " 
pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xgkkx" Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.723762 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjrg2\" (UniqueName: \"kubernetes.io/projected/3f567ee8-98ac-44f3-bba2-4dfd8b514ab2-kube-api-access-cjrg2\") pod \"designate-operator-controller-manager-6d8bf5c495-jn9cr\" (UID: \"3f567ee8-98ac-44f3-bba2-4dfd8b514ab2\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-jn9cr" Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.723805 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dst66\" (UniqueName: \"kubernetes.io/projected/c4c6b874-8781-4030-a651-54feaeed2634-kube-api-access-dst66\") pod \"barbican-operator-controller-manager-868647ff47-gxjgl\" (UID: \"c4c6b874-8781-4030-a651-54feaeed2634\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-gxjgl" Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.728145 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-87l78"] Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.729447 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-87l78" Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.746850 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-qb5p8" Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.765951 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-qtzmv"] Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.774110 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dst66\" (UniqueName: \"kubernetes.io/projected/c4c6b874-8781-4030-a651-54feaeed2634-kube-api-access-dst66\") pod \"barbican-operator-controller-manager-868647ff47-gxjgl\" (UID: \"c4c6b874-8781-4030-a651-54feaeed2634\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-gxjgl" Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.799792 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt7q2\" (UniqueName: \"kubernetes.io/projected/db6537c6-cc88-4848-a428-ad573290cc02-kube-api-access-pt7q2\") pod \"cinder-operator-controller-manager-5d946d989d-xgkkx\" (UID: \"db6537c6-cc88-4848-a428-ad573290cc02\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xgkkx" Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.799871 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-87l78"] Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.829935 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjrg2\" (UniqueName: \"kubernetes.io/projected/3f567ee8-98ac-44f3-bba2-4dfd8b514ab2-kube-api-access-cjrg2\") pod \"designate-operator-controller-manager-6d8bf5c495-jn9cr\" (UID: \"3f567ee8-98ac-44f3-bba2-4dfd8b514ab2\") " 
pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-jn9cr" Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.830004 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h99k\" (UniqueName: \"kubernetes.io/projected/6873354d-473a-4bf1-b8d3-f728e268bd36-kube-api-access-5h99k\") pod \"heat-operator-controller-manager-69f49c598c-qtzmv\" (UID: \"6873354d-473a-4bf1-b8d3-f728e268bd36\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qtzmv" Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.830068 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvgnz\" (UniqueName: \"kubernetes.io/projected/fd0e6a7f-7fe4-4790-a3a8-d973386bec13-kube-api-access-gvgnz\") pod \"horizon-operator-controller-manager-5b9b8895d5-87l78\" (UID: \"fd0e6a7f-7fe4-4790-a3a8-d973386bec13\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-87l78" Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.830117 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9d48\" (UniqueName: \"kubernetes.io/projected/aa81f594-f3c2-43d6-ac9b-6a51e36e8d99-kube-api-access-m9d48\") pod \"glance-operator-controller-manager-77987464f4-w7lcj\" (UID: \"aa81f594-f3c2-43d6-ac9b-6a51e36e8d99\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-w7lcj" Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.830590 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-qldzr"] Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.860979 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-qldzr" Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.869234 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.869725 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-gxjgl" Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.871881 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-vmfdv" Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.890139 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-lv9qv"] Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.891168 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-lv9qv" Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.895420 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-mt7jq" Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.898795 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9d48\" (UniqueName: \"kubernetes.io/projected/aa81f594-f3c2-43d6-ac9b-6a51e36e8d99-kube-api-access-m9d48\") pod \"glance-operator-controller-manager-77987464f4-w7lcj\" (UID: \"aa81f594-f3c2-43d6-ac9b-6a51e36e8d99\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-w7lcj" Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.911838 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xgkkx" Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.921971 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-qldzr"] Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.936096 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v9bx\" (UniqueName: \"kubernetes.io/projected/b87e1102-63f8-4f2f-9376-dab7745fb4b2-kube-api-access-2v9bx\") pod \"ironic-operator-controller-manager-554564d7fc-lv9qv\" (UID: \"b87e1102-63f8-4f2f-9376-dab7745fb4b2\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-lv9qv" Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.936184 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h99k\" (UniqueName: \"kubernetes.io/projected/6873354d-473a-4bf1-b8d3-f728e268bd36-kube-api-access-5h99k\") pod \"heat-operator-controller-manager-69f49c598c-qtzmv\" (UID: \"6873354d-473a-4bf1-b8d3-f728e268bd36\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qtzmv" Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.936232 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqzg4\" (UniqueName: \"kubernetes.io/projected/1127a6be-ce6c-498b-bd8c-7a131b575321-kube-api-access-jqzg4\") pod \"infra-operator-controller-manager-79d975b745-qldzr\" (UID: \"1127a6be-ce6c-498b-bd8c-7a131b575321\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qldzr" Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.936286 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1127a6be-ce6c-498b-bd8c-7a131b575321-cert\") pod 
\"infra-operator-controller-manager-79d975b745-qldzr\" (UID: \"1127a6be-ce6c-498b-bd8c-7a131b575321\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qldzr" Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.936328 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvgnz\" (UniqueName: \"kubernetes.io/projected/fd0e6a7f-7fe4-4790-a3a8-d973386bec13-kube-api-access-gvgnz\") pod \"horizon-operator-controller-manager-5b9b8895d5-87l78\" (UID: \"fd0e6a7f-7fe4-4790-a3a8-d973386bec13\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-87l78" Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.955169 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-l28fh"] Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.956157 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-l28fh" Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.966940 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjrg2\" (UniqueName: \"kubernetes.io/projected/3f567ee8-98ac-44f3-bba2-4dfd8b514ab2-kube-api-access-cjrg2\") pod \"designate-operator-controller-manager-6d8bf5c495-jn9cr\" (UID: \"3f567ee8-98ac-44f3-bba2-4dfd8b514ab2\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-jn9cr" Feb 17 16:20:52 crc kubenswrapper[4874]: I0217 16:20:52.974253 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-x6drc" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:52.996871 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h99k\" (UniqueName: \"kubernetes.io/projected/6873354d-473a-4bf1-b8d3-f728e268bd36-kube-api-access-5h99k\") pod 
\"heat-operator-controller-manager-69f49c598c-qtzmv\" (UID: \"6873354d-473a-4bf1-b8d3-f728e268bd36\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qtzmv" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.006412 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-lv9qv"] Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.012793 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-jn9cr" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.014664 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvgnz\" (UniqueName: \"kubernetes.io/projected/fd0e6a7f-7fe4-4790-a3a8-d973386bec13-kube-api-access-gvgnz\") pod \"horizon-operator-controller-manager-5b9b8895d5-87l78\" (UID: \"fd0e6a7f-7fe4-4790-a3a8-d973386bec13\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-87l78" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.026797 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-tzsfx"] Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.027926 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tzsfx" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.033840 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-b2wbs" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.051247 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqzg4\" (UniqueName: \"kubernetes.io/projected/1127a6be-ce6c-498b-bd8c-7a131b575321-kube-api-access-jqzg4\") pod \"infra-operator-controller-manager-79d975b745-qldzr\" (UID: \"1127a6be-ce6c-498b-bd8c-7a131b575321\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qldzr" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.051302 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1127a6be-ce6c-498b-bd8c-7a131b575321-cert\") pod \"infra-operator-controller-manager-79d975b745-qldzr\" (UID: \"1127a6be-ce6c-498b-bd8c-7a131b575321\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qldzr" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.051424 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mjzp\" (UniqueName: \"kubernetes.io/projected/95fa7fde-cb3d-4b2d-ac02-f58440c35c7b-kube-api-access-6mjzp\") pod \"keystone-operator-controller-manager-b4d948c87-l28fh\" (UID: \"95fa7fde-cb3d-4b2d-ac02-f58440c35c7b\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-l28fh" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.051463 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v9bx\" (UniqueName: \"kubernetes.io/projected/b87e1102-63f8-4f2f-9376-dab7745fb4b2-kube-api-access-2v9bx\") pod \"ironic-operator-controller-manager-554564d7fc-lv9qv\" 
(UID: \"b87e1102-63f8-4f2f-9376-dab7745fb4b2\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-lv9qv" Feb 17 16:20:53 crc kubenswrapper[4874]: E0217 16:20:53.051961 4874 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 16:20:53 crc kubenswrapper[4874]: E0217 16:20:53.052007 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1127a6be-ce6c-498b-bd8c-7a131b575321-cert podName:1127a6be-ce6c-498b-bd8c-7a131b575321 nodeName:}" failed. No retries permitted until 2026-02-17 16:20:53.551991046 +0000 UTC m=+1063.846379607 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1127a6be-ce6c-498b-bd8c-7a131b575321-cert") pod "infra-operator-controller-manager-79d975b745-qldzr" (UID: "1127a6be-ce6c-498b-bd8c-7a131b575321") : secret "infra-operator-webhook-server-cert" not found Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.063936 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-dbzhs"] Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.069498 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-87l78" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.070816 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-w7lcj" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.089236 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-l28fh"] Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.089334 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-dbzhs" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.098847 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-pf5rk" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.119164 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-tzsfx"] Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.133688 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqzg4\" (UniqueName: \"kubernetes.io/projected/1127a6be-ce6c-498b-bd8c-7a131b575321-kube-api-access-jqzg4\") pod \"infra-operator-controller-manager-79d975b745-qldzr\" (UID: \"1127a6be-ce6c-498b-bd8c-7a131b575321\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qldzr" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.133717 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v9bx\" (UniqueName: \"kubernetes.io/projected/b87e1102-63f8-4f2f-9376-dab7745fb4b2-kube-api-access-2v9bx\") pod \"ironic-operator-controller-manager-554564d7fc-lv9qv\" (UID: \"b87e1102-63f8-4f2f-9376-dab7745fb4b2\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-lv9qv" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.136441 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-dbzhs"] Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.165699 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-lv9qv" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.166038 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ndm2\" (UniqueName: \"kubernetes.io/projected/25da1eba-df74-4c90-90be-bb79065c4557-kube-api-access-9ndm2\") pod \"mariadb-operator-controller-manager-6994f66f48-dbzhs\" (UID: \"25da1eba-df74-4c90-90be-bb79065c4557\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-dbzhs" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.166071 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mjzp\" (UniqueName: \"kubernetes.io/projected/95fa7fde-cb3d-4b2d-ac02-f58440c35c7b-kube-api-access-6mjzp\") pod \"keystone-operator-controller-manager-b4d948c87-l28fh\" (UID: \"95fa7fde-cb3d-4b2d-ac02-f58440c35c7b\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-l28fh" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.166222 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm2v5\" (UniqueName: \"kubernetes.io/projected/62899d98-d8f9-4669-90f1-d4e9e02280aa-kube-api-access-mm2v5\") pod \"manila-operator-controller-manager-54f6768c69-tzsfx\" (UID: \"62899d98-d8f9-4669-90f1-d4e9e02280aa\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tzsfx" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.172480 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qtzmv" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.196470 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mjzp\" (UniqueName: \"kubernetes.io/projected/95fa7fde-cb3d-4b2d-ac02-f58440c35c7b-kube-api-access-6mjzp\") pod \"keystone-operator-controller-manager-b4d948c87-l28fh\" (UID: \"95fa7fde-cb3d-4b2d-ac02-f58440c35c7b\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-l28fh" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.263355 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-f5m2c"] Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.281337 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-f5m2c" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.287062 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-9zx7m" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.292924 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mm2v5\" (UniqueName: \"kubernetes.io/projected/62899d98-d8f9-4669-90f1-d4e9e02280aa-kube-api-access-mm2v5\") pod \"manila-operator-controller-manager-54f6768c69-tzsfx\" (UID: \"62899d98-d8f9-4669-90f1-d4e9e02280aa\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tzsfx" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.293860 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ndm2\" (UniqueName: \"kubernetes.io/projected/25da1eba-df74-4c90-90be-bb79065c4557-kube-api-access-9ndm2\") pod \"mariadb-operator-controller-manager-6994f66f48-dbzhs\" (UID: 
\"25da1eba-df74-4c90-90be-bb79065c4557\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-dbzhs" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.308808 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-f5m2c"] Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.371250 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-75swn"] Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.372784 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-75swn" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.378440 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-t8vwr" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.385507 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ndm2\" (UniqueName: \"kubernetes.io/projected/25da1eba-df74-4c90-90be-bb79065c4557-kube-api-access-9ndm2\") pod \"mariadb-operator-controller-manager-6994f66f48-dbzhs\" (UID: \"25da1eba-df74-4c90-90be-bb79065c4557\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-dbzhs" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.386341 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm2v5\" (UniqueName: \"kubernetes.io/projected/62899d98-d8f9-4669-90f1-d4e9e02280aa-kube-api-access-mm2v5\") pod \"manila-operator-controller-manager-54f6768c69-tzsfx\" (UID: \"62899d98-d8f9-4669-90f1-d4e9e02280aa\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tzsfx" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.401837 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-7fckd\" (UniqueName: \"kubernetes.io/projected/3603fb35-facf-4a38-8fa1-ce1efa386258-kube-api-access-7fckd\") pod \"neutron-operator-controller-manager-64ddbf8bb-f5m2c\" (UID: \"3603fb35-facf-4a38-8fa1-ce1efa386258\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-f5m2c" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.414160 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvvz57"] Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.415375 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvvz57" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.422510 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.422663 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-j4k49" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.437146 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-jgrrk"] Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.449527 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jgrrk" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.453837 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-75swn"] Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.458822 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-5qxnj" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.481741 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvvz57"] Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.483909 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-l28fh" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.488408 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-jz5hv"] Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.489540 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-jz5hv" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.495925 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-l9f6g" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.501133 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-jgrrk"] Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.518205 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2n6j\" (UniqueName: \"kubernetes.io/projected/f9447f8b-df93-499d-87cd-4ccb1894c291-kube-api-access-m2n6j\") pod \"octavia-operator-controller-manager-69f8888797-75swn\" (UID: \"f9447f8b-df93-499d-87cd-4ccb1894c291\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-75swn" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.518299 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fckd\" (UniqueName: \"kubernetes.io/projected/3603fb35-facf-4a38-8fa1-ce1efa386258-kube-api-access-7fckd\") pod \"neutron-operator-controller-manager-64ddbf8bb-f5m2c\" (UID: \"3603fb35-facf-4a38-8fa1-ce1efa386258\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-f5m2c" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.518377 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/01ab2d32-b155-4460-ace9-60d38242218b-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvvz57\" (UID: \"01ab2d32-b155-4460-ace9-60d38242218b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvvz57" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.519985 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnc6c\" (UniqueName: \"kubernetes.io/projected/9fdb9bed-5948-4441-a15b-34df4351b88c-kube-api-access-pnc6c\") pod \"nova-operator-controller-manager-567668f5cf-jgrrk\" (UID: \"9fdb9bed-5948-4441-a15b-34df4351b88c\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jgrrk" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.520018 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx9z9\" (UniqueName: \"kubernetes.io/projected/01ab2d32-b155-4460-ace9-60d38242218b-kube-api-access-nx9z9\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvvz57\" (UID: \"01ab2d32-b155-4460-ace9-60d38242218b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvvz57" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.525201 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tzsfx" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.527129 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-jz5hv"] Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.537188 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-hk542"] Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.538257 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-hk542" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.540008 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fckd\" (UniqueName: \"kubernetes.io/projected/3603fb35-facf-4a38-8fa1-ce1efa386258-kube-api-access-7fckd\") pod \"neutron-operator-controller-manager-64ddbf8bb-f5m2c\" (UID: \"3603fb35-facf-4a38-8fa1-ce1efa386258\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-f5m2c" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.552734 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-qnzch" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.559123 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-qhh9h"] Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.560283 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-qhh9h" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.564703 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-pclz9" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.576706 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-dbzhs" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.583487 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5d7c6cd576-cm8t8"] Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.586335 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5d7c6cd576-cm8t8" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.589969 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-jhtr6" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.596866 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-hk542"] Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.610012 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-qhh9h"] Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.621800 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjdwf\" (UniqueName: \"kubernetes.io/projected/e8e4298f-581a-4fdf-8347-088b955fb6ba-kube-api-access-sjdwf\") pod \"ovn-operator-controller-manager-d44cf6b75-jz5hv\" (UID: \"e8e4298f-581a-4fdf-8347-088b955fb6ba\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-jz5hv" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.621870 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2n6j\" (UniqueName: \"kubernetes.io/projected/f9447f8b-df93-499d-87cd-4ccb1894c291-kube-api-access-m2n6j\") pod \"octavia-operator-controller-manager-69f8888797-75swn\" (UID: \"f9447f8b-df93-499d-87cd-4ccb1894c291\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-75swn" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.621930 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/01ab2d32-b155-4460-ace9-60d38242218b-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvvz57\" (UID: \"01ab2d32-b155-4460-ace9-60d38242218b\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvvz57" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.621947 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64d2n\" (UniqueName: \"kubernetes.io/projected/2ea7f298-dafe-4448-8ffe-a2194f127c12-kube-api-access-64d2n\") pod \"placement-operator-controller-manager-8497b45c89-qhh9h\" (UID: \"2ea7f298-dafe-4448-8ffe-a2194f127c12\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-qhh9h" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.622034 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1127a6be-ce6c-498b-bd8c-7a131b575321-cert\") pod \"infra-operator-controller-manager-79d975b745-qldzr\" (UID: \"1127a6be-ce6c-498b-bd8c-7a131b575321\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qldzr" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.622068 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnc6c\" (UniqueName: \"kubernetes.io/projected/9fdb9bed-5948-4441-a15b-34df4351b88c-kube-api-access-pnc6c\") pod \"nova-operator-controller-manager-567668f5cf-jgrrk\" (UID: \"9fdb9bed-5948-4441-a15b-34df4351b88c\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jgrrk" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.622103 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx9z9\" (UniqueName: \"kubernetes.io/projected/01ab2d32-b155-4460-ace9-60d38242218b-kube-api-access-nx9z9\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvvz57\" (UID: \"01ab2d32-b155-4460-ace9-60d38242218b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvvz57" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.622138 4874 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b88ll\" (UniqueName: \"kubernetes.io/projected/73bebada-8e5b-4539-b609-2b64e42fdc35-kube-api-access-b88ll\") pod \"swift-operator-controller-manager-68f46476f-hk542\" (UID: \"73bebada-8e5b-4539-b609-2b64e42fdc35\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-hk542" Feb 17 16:20:53 crc kubenswrapper[4874]: E0217 16:20:53.622282 4874 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:20:53 crc kubenswrapper[4874]: E0217 16:20:53.622324 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01ab2d32-b155-4460-ace9-60d38242218b-cert podName:01ab2d32-b155-4460-ace9-60d38242218b nodeName:}" failed. No retries permitted until 2026-02-17 16:20:54.122307666 +0000 UTC m=+1064.416696227 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/01ab2d32-b155-4460-ace9-60d38242218b-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cvvz57" (UID: "01ab2d32-b155-4460-ace9-60d38242218b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:20:53 crc kubenswrapper[4874]: E0217 16:20:53.622707 4874 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 16:20:53 crc kubenswrapper[4874]: E0217 16:20:53.622757 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1127a6be-ce6c-498b-bd8c-7a131b575321-cert podName:1127a6be-ce6c-498b-bd8c-7a131b575321 nodeName:}" failed. No retries permitted until 2026-02-17 16:20:54.622740277 +0000 UTC m=+1064.917128838 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1127a6be-ce6c-498b-bd8c-7a131b575321-cert") pod "infra-operator-controller-manager-79d975b745-qldzr" (UID: "1127a6be-ce6c-498b-bd8c-7a131b575321") : secret "infra-operator-webhook-server-cert" not found Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.622893 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5d7c6cd576-cm8t8"] Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.630555 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-wq4gk"] Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.631963 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-wq4gk" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.638054 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-m7xvs"] Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.640069 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-tsn7n" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.645921 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnc6c\" (UniqueName: \"kubernetes.io/projected/9fdb9bed-5948-4441-a15b-34df4351b88c-kube-api-access-pnc6c\") pod \"nova-operator-controller-manager-567668f5cf-jgrrk\" (UID: \"9fdb9bed-5948-4441-a15b-34df4351b88c\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jgrrk" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.647262 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-m7xvs" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.661031 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-qbn4k" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.665257 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx9z9\" (UniqueName: \"kubernetes.io/projected/01ab2d32-b155-4460-ace9-60d38242218b-kube-api-access-nx9z9\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvvz57\" (UID: \"01ab2d32-b155-4460-ace9-60d38242218b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvvz57" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.667695 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2n6j\" (UniqueName: \"kubernetes.io/projected/f9447f8b-df93-499d-87cd-4ccb1894c291-kube-api-access-m2n6j\") pod \"octavia-operator-controller-manager-69f8888797-75swn\" (UID: \"f9447f8b-df93-499d-87cd-4ccb1894c291\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-75swn" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.683277 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-f5m2c" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.711745 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-wq4gk"] Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.723548 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-75swn" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.725389 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw5tc\" (UniqueName: \"kubernetes.io/projected/bd668570-bbe9-4494-a20d-fd49f91dc656-kube-api-access-lw5tc\") pod \"test-operator-controller-manager-7866795846-wq4gk\" (UID: \"bd668570-bbe9-4494-a20d-fd49f91dc656\") " pod="openstack-operators/test-operator-controller-manager-7866795846-wq4gk" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.725458 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng9tn\" (UniqueName: \"kubernetes.io/projected/e9edd0a5-e9e7-4604-83e9-466212623115-kube-api-access-ng9tn\") pod \"telemetry-operator-controller-manager-5d7c6cd576-cm8t8\" (UID: \"e9edd0a5-e9e7-4604-83e9-466212623115\") " pod="openstack-operators/telemetry-operator-controller-manager-5d7c6cd576-cm8t8" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.725536 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b88ll\" (UniqueName: \"kubernetes.io/projected/73bebada-8e5b-4539-b609-2b64e42fdc35-kube-api-access-b88ll\") pod \"swift-operator-controller-manager-68f46476f-hk542\" (UID: \"73bebada-8e5b-4539-b609-2b64e42fdc35\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-hk542" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.725575 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nkts\" (UniqueName: \"kubernetes.io/projected/005d51f3-7446-454e-81ae-3cc46edc3aec-kube-api-access-4nkts\") pod \"watcher-operator-controller-manager-5db88f68c-m7xvs\" (UID: \"005d51f3-7446-454e-81ae-3cc46edc3aec\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-m7xvs" Feb 
17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.725598 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjdwf\" (UniqueName: \"kubernetes.io/projected/e8e4298f-581a-4fdf-8347-088b955fb6ba-kube-api-access-sjdwf\") pod \"ovn-operator-controller-manager-d44cf6b75-jz5hv\" (UID: \"e8e4298f-581a-4fdf-8347-088b955fb6ba\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-jz5hv" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.725646 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64d2n\" (UniqueName: \"kubernetes.io/projected/2ea7f298-dafe-4448-8ffe-a2194f127c12-kube-api-access-64d2n\") pod \"placement-operator-controller-manager-8497b45c89-qhh9h\" (UID: \"2ea7f298-dafe-4448-8ffe-a2194f127c12\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-qhh9h" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.728201 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-m7xvs"] Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.760909 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64d2n\" (UniqueName: \"kubernetes.io/projected/2ea7f298-dafe-4448-8ffe-a2194f127c12-kube-api-access-64d2n\") pod \"placement-operator-controller-manager-8497b45c89-qhh9h\" (UID: \"2ea7f298-dafe-4448-8ffe-a2194f127c12\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-qhh9h" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.768592 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjdwf\" (UniqueName: \"kubernetes.io/projected/e8e4298f-581a-4fdf-8347-088b955fb6ba-kube-api-access-sjdwf\") pod \"ovn-operator-controller-manager-d44cf6b75-jz5hv\" (UID: \"e8e4298f-581a-4fdf-8347-088b955fb6ba\") " 
pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-jz5hv" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.769219 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b88ll\" (UniqueName: \"kubernetes.io/projected/73bebada-8e5b-4539-b609-2b64e42fdc35-kube-api-access-b88ll\") pod \"swift-operator-controller-manager-68f46476f-hk542\" (UID: \"73bebada-8e5b-4539-b609-2b64e42fdc35\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-hk542" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.781954 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-66554dbdcf-njv9r"] Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.785812 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-njv9r" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.798551 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.798711 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-9cjn5" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.798977 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.827294 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw5tc\" (UniqueName: \"kubernetes.io/projected/bd668570-bbe9-4494-a20d-fd49f91dc656-kube-api-access-lw5tc\") pod \"test-operator-controller-manager-7866795846-wq4gk\" (UID: \"bd668570-bbe9-4494-a20d-fd49f91dc656\") " pod="openstack-operators/test-operator-controller-manager-7866795846-wq4gk" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 
16:20:53.827353 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb7619d6-0f36-44aa-82f3-5375a806ae94-metrics-certs\") pod \"openstack-operator-controller-manager-66554dbdcf-njv9r\" (UID: \"bb7619d6-0f36-44aa-82f3-5375a806ae94\") " pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-njv9r" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.827393 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng9tn\" (UniqueName: \"kubernetes.io/projected/e9edd0a5-e9e7-4604-83e9-466212623115-kube-api-access-ng9tn\") pod \"telemetry-operator-controller-manager-5d7c6cd576-cm8t8\" (UID: \"e9edd0a5-e9e7-4604-83e9-466212623115\") " pod="openstack-operators/telemetry-operator-controller-manager-5d7c6cd576-cm8t8" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.827467 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z894\" (UniqueName: \"kubernetes.io/projected/bb7619d6-0f36-44aa-82f3-5375a806ae94-kube-api-access-9z894\") pod \"openstack-operator-controller-manager-66554dbdcf-njv9r\" (UID: \"bb7619d6-0f36-44aa-82f3-5375a806ae94\") " pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-njv9r" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.827491 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nkts\" (UniqueName: \"kubernetes.io/projected/005d51f3-7446-454e-81ae-3cc46edc3aec-kube-api-access-4nkts\") pod \"watcher-operator-controller-manager-5db88f68c-m7xvs\" (UID: \"005d51f3-7446-454e-81ae-3cc46edc3aec\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-m7xvs" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.827525 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bb7619d6-0f36-44aa-82f3-5375a806ae94-webhook-certs\") pod \"openstack-operator-controller-manager-66554dbdcf-njv9r\" (UID: \"bb7619d6-0f36-44aa-82f3-5375a806ae94\") " pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-njv9r" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.830478 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-66554dbdcf-njv9r"] Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.835891 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jgrrk" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.852194 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-jz5hv" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.854540 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nkts\" (UniqueName: \"kubernetes.io/projected/005d51f3-7446-454e-81ae-3cc46edc3aec-kube-api-access-4nkts\") pod \"watcher-operator-controller-manager-5db88f68c-m7xvs\" (UID: \"005d51f3-7446-454e-81ae-3cc46edc3aec\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-m7xvs" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.855924 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxglc"] Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.857908 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxglc" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.863455 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-skz9d" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.865314 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng9tn\" (UniqueName: \"kubernetes.io/projected/e9edd0a5-e9e7-4604-83e9-466212623115-kube-api-access-ng9tn\") pod \"telemetry-operator-controller-manager-5d7c6cd576-cm8t8\" (UID: \"e9edd0a5-e9e7-4604-83e9-466212623115\") " pod="openstack-operators/telemetry-operator-controller-manager-5d7c6cd576-cm8t8" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.870650 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw5tc\" (UniqueName: \"kubernetes.io/projected/bd668570-bbe9-4494-a20d-fd49f91dc656-kube-api-access-lw5tc\") pod \"test-operator-controller-manager-7866795846-wq4gk\" (UID: \"bd668570-bbe9-4494-a20d-fd49f91dc656\") " pod="openstack-operators/test-operator-controller-manager-7866795846-wq4gk" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.878568 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxglc"] Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.894837 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-hk542" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.936046 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bb7619d6-0f36-44aa-82f3-5375a806ae94-webhook-certs\") pod \"openstack-operator-controller-manager-66554dbdcf-njv9r\" (UID: \"bb7619d6-0f36-44aa-82f3-5375a806ae94\") " pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-njv9r" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.936170 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb7619d6-0f36-44aa-82f3-5375a806ae94-metrics-certs\") pod \"openstack-operator-controller-manager-66554dbdcf-njv9r\" (UID: \"bb7619d6-0f36-44aa-82f3-5375a806ae94\") " pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-njv9r" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.936259 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z894\" (UniqueName: \"kubernetes.io/projected/bb7619d6-0f36-44aa-82f3-5375a806ae94-kube-api-access-9z894\") pod \"openstack-operator-controller-manager-66554dbdcf-njv9r\" (UID: \"bb7619d6-0f36-44aa-82f3-5375a806ae94\") " pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-njv9r" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.936288 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qrqh\" (UniqueName: \"kubernetes.io/projected/91060dec-59cf-4cec-90e3-e14e10456304-kube-api-access-2qrqh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vxglc\" (UID: \"91060dec-59cf-4cec-90e3-e14e10456304\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxglc" Feb 17 16:20:53 crc kubenswrapper[4874]: E0217 16:20:53.936427 
4874 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 16:20:53 crc kubenswrapper[4874]: E0217 16:20:53.936469 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb7619d6-0f36-44aa-82f3-5375a806ae94-webhook-certs podName:bb7619d6-0f36-44aa-82f3-5375a806ae94 nodeName:}" failed. No retries permitted until 2026-02-17 16:20:54.436453448 +0000 UTC m=+1064.730841999 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bb7619d6-0f36-44aa-82f3-5375a806ae94-webhook-certs") pod "openstack-operator-controller-manager-66554dbdcf-njv9r" (UID: "bb7619d6-0f36-44aa-82f3-5375a806ae94") : secret "webhook-server-cert" not found Feb 17 16:20:53 crc kubenswrapper[4874]: E0217 16:20:53.936743 4874 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 16:20:53 crc kubenswrapper[4874]: E0217 16:20:53.936767 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb7619d6-0f36-44aa-82f3-5375a806ae94-metrics-certs podName:bb7619d6-0f36-44aa-82f3-5375a806ae94 nodeName:}" failed. No retries permitted until 2026-02-17 16:20:54.436758775 +0000 UTC m=+1064.731147336 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bb7619d6-0f36-44aa-82f3-5375a806ae94-metrics-certs") pod "openstack-operator-controller-manager-66554dbdcf-njv9r" (UID: "bb7619d6-0f36-44aa-82f3-5375a806ae94") : secret "metrics-server-cert" not found Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.940792 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-qhh9h" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.969549 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z894\" (UniqueName: \"kubernetes.io/projected/bb7619d6-0f36-44aa-82f3-5375a806ae94-kube-api-access-9z894\") pod \"openstack-operator-controller-manager-66554dbdcf-njv9r\" (UID: \"bb7619d6-0f36-44aa-82f3-5375a806ae94\") " pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-njv9r" Feb 17 16:20:53 crc kubenswrapper[4874]: I0217 16:20:53.972384 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5d7c6cd576-cm8t8" Feb 17 16:20:54 crc kubenswrapper[4874]: I0217 16:20:53.994676 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-wq4gk" Feb 17 16:20:54 crc kubenswrapper[4874]: I0217 16:20:54.006803 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-m7xvs" Feb 17 16:20:54 crc kubenswrapper[4874]: I0217 16:20:54.038014 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qrqh\" (UniqueName: \"kubernetes.io/projected/91060dec-59cf-4cec-90e3-e14e10456304-kube-api-access-2qrqh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vxglc\" (UID: \"91060dec-59cf-4cec-90e3-e14e10456304\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxglc" Feb 17 16:20:54 crc kubenswrapper[4874]: I0217 16:20:54.069129 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qrqh\" (UniqueName: \"kubernetes.io/projected/91060dec-59cf-4cec-90e3-e14e10456304-kube-api-access-2qrqh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vxglc\" (UID: \"91060dec-59cf-4cec-90e3-e14e10456304\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxglc" Feb 17 16:20:54 crc kubenswrapper[4874]: I0217 16:20:54.140207 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/01ab2d32-b155-4460-ace9-60d38242218b-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvvz57\" (UID: \"01ab2d32-b155-4460-ace9-60d38242218b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvvz57" Feb 17 16:20:54 crc kubenswrapper[4874]: E0217 16:20:54.140359 4874 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:20:54 crc kubenswrapper[4874]: E0217 16:20:54.140406 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01ab2d32-b155-4460-ace9-60d38242218b-cert podName:01ab2d32-b155-4460-ace9-60d38242218b nodeName:}" failed. 
No retries permitted until 2026-02-17 16:20:55.140390749 +0000 UTC m=+1065.434779310 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/01ab2d32-b155-4460-ace9-60d38242218b-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cvvz57" (UID: "01ab2d32-b155-4460-ace9-60d38242218b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:20:54 crc kubenswrapper[4874]: I0217 16:20:54.338435 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxglc" Feb 17 16:20:54 crc kubenswrapper[4874]: I0217 16:20:54.381639 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-jn9cr"] Feb 17 16:20:54 crc kubenswrapper[4874]: I0217 16:20:54.409814 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-gxjgl"] Feb 17 16:20:54 crc kubenswrapper[4874]: I0217 16:20:54.421578 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-w7lcj"] Feb 17 16:20:54 crc kubenswrapper[4874]: I0217 16:20:54.457189 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb7619d6-0f36-44aa-82f3-5375a806ae94-metrics-certs\") pod \"openstack-operator-controller-manager-66554dbdcf-njv9r\" (UID: \"bb7619d6-0f36-44aa-82f3-5375a806ae94\") " pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-njv9r" Feb 17 16:20:54 crc kubenswrapper[4874]: I0217 16:20:54.457301 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bb7619d6-0f36-44aa-82f3-5375a806ae94-webhook-certs\") pod \"openstack-operator-controller-manager-66554dbdcf-njv9r\" 
(UID: \"bb7619d6-0f36-44aa-82f3-5375a806ae94\") " pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-njv9r" Feb 17 16:20:54 crc kubenswrapper[4874]: E0217 16:20:54.457422 4874 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 16:20:54 crc kubenswrapper[4874]: E0217 16:20:54.457467 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb7619d6-0f36-44aa-82f3-5375a806ae94-webhook-certs podName:bb7619d6-0f36-44aa-82f3-5375a806ae94 nodeName:}" failed. No retries permitted until 2026-02-17 16:20:55.457452553 +0000 UTC m=+1065.751841114 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bb7619d6-0f36-44aa-82f3-5375a806ae94-webhook-certs") pod "openstack-operator-controller-manager-66554dbdcf-njv9r" (UID: "bb7619d6-0f36-44aa-82f3-5375a806ae94") : secret "webhook-server-cert" not found Feb 17 16:20:54 crc kubenswrapper[4874]: E0217 16:20:54.457675 4874 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 16:20:54 crc kubenswrapper[4874]: E0217 16:20:54.457696 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb7619d6-0f36-44aa-82f3-5375a806ae94-metrics-certs podName:bb7619d6-0f36-44aa-82f3-5375a806ae94 nodeName:}" failed. No retries permitted until 2026-02-17 16:20:55.457689478 +0000 UTC m=+1065.752078039 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bb7619d6-0f36-44aa-82f3-5375a806ae94-metrics-certs") pod "openstack-operator-controller-manager-66554dbdcf-njv9r" (UID: "bb7619d6-0f36-44aa-82f3-5375a806ae94") : secret "metrics-server-cert" not found Feb 17 16:20:54 crc kubenswrapper[4874]: I0217 16:20:54.574249 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-87l78"] Feb 17 16:20:54 crc kubenswrapper[4874]: I0217 16:20:54.615512 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-xgkkx"] Feb 17 16:20:54 crc kubenswrapper[4874]: I0217 16:20:54.663448 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1127a6be-ce6c-498b-bd8c-7a131b575321-cert\") pod \"infra-operator-controller-manager-79d975b745-qldzr\" (UID: \"1127a6be-ce6c-498b-bd8c-7a131b575321\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qldzr" Feb 17 16:20:54 crc kubenswrapper[4874]: E0217 16:20:54.663713 4874 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 16:20:54 crc kubenswrapper[4874]: E0217 16:20:54.663768 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1127a6be-ce6c-498b-bd8c-7a131b575321-cert podName:1127a6be-ce6c-498b-bd8c-7a131b575321 nodeName:}" failed. No retries permitted until 2026-02-17 16:20:56.663751142 +0000 UTC m=+1066.958139723 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1127a6be-ce6c-498b-bd8c-7a131b575321-cert") pod "infra-operator-controller-manager-79d975b745-qldzr" (UID: "1127a6be-ce6c-498b-bd8c-7a131b575321") : secret "infra-operator-webhook-server-cert" not found Feb 17 16:20:54 crc kubenswrapper[4874]: I0217 16:20:54.913985 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-jn9cr" event={"ID":"3f567ee8-98ac-44f3-bba2-4dfd8b514ab2","Type":"ContainerStarted","Data":"dd4992001ccc6b4796803ffde45a9e99f2bf152a94f05b684633246553c43fc6"} Feb 17 16:20:54 crc kubenswrapper[4874]: I0217 16:20:54.918104 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xgkkx" event={"ID":"db6537c6-cc88-4848-a428-ad573290cc02","Type":"ContainerStarted","Data":"ba90b721d890ec54f87b097e18b3edfa680e35bf6e2f531cdbcc5cd017c7fcfd"} Feb 17 16:20:54 crc kubenswrapper[4874]: I0217 16:20:54.920942 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-w7lcj" event={"ID":"aa81f594-f3c2-43d6-ac9b-6a51e36e8d99","Type":"ContainerStarted","Data":"72edc88907789d4a2cb10ddc851b4bdd591634847a8cfa4055c20e54eec1fbe7"} Feb 17 16:20:54 crc kubenswrapper[4874]: I0217 16:20:54.926147 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-87l78" event={"ID":"fd0e6a7f-7fe4-4790-a3a8-d973386bec13","Type":"ContainerStarted","Data":"73d7c6701c8689639d51b1411a700e0eead24a6f0b69168702bf92756bf459ef"} Feb 17 16:20:54 crc kubenswrapper[4874]: I0217 16:20:54.929323 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-gxjgl" 
event={"ID":"c4c6b874-8781-4030-a651-54feaeed2634","Type":"ContainerStarted","Data":"94ec716cf4602b5f1c248c77c29ff3fc0abaabc3582c3f840c456d1b43e18c18"} Feb 17 16:20:55 crc kubenswrapper[4874]: I0217 16:20:55.175161 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/01ab2d32-b155-4460-ace9-60d38242218b-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvvz57\" (UID: \"01ab2d32-b155-4460-ace9-60d38242218b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvvz57" Feb 17 16:20:55 crc kubenswrapper[4874]: E0217 16:20:55.175528 4874 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:20:55 crc kubenswrapper[4874]: E0217 16:20:55.175573 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01ab2d32-b155-4460-ace9-60d38242218b-cert podName:01ab2d32-b155-4460-ace9-60d38242218b nodeName:}" failed. No retries permitted until 2026-02-17 16:20:57.175557488 +0000 UTC m=+1067.469946039 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/01ab2d32-b155-4460-ace9-60d38242218b-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cvvz57" (UID: "01ab2d32-b155-4460-ace9-60d38242218b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:20:55 crc kubenswrapper[4874]: I0217 16:20:55.425824 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-lv9qv"] Feb 17 16:20:55 crc kubenswrapper[4874]: I0217 16:20:55.457623 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-l28fh"] Feb 17 16:20:55 crc kubenswrapper[4874]: W0217 16:20:55.461735 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb87e1102_63f8_4f2f_9376_dab7745fb4b2.slice/crio-5afd68dab82d8f71783d6eb01b555a7c60a566f4e46041f18681dc3aff3b526b WatchSource:0}: Error finding container 5afd68dab82d8f71783d6eb01b555a7c60a566f4e46041f18681dc3aff3b526b: Status 404 returned error can't find the container with id 5afd68dab82d8f71783d6eb01b555a7c60a566f4e46041f18681dc3aff3b526b Feb 17 16:20:55 crc kubenswrapper[4874]: I0217 16:20:55.479425 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bb7619d6-0f36-44aa-82f3-5375a806ae94-webhook-certs\") pod \"openstack-operator-controller-manager-66554dbdcf-njv9r\" (UID: \"bb7619d6-0f36-44aa-82f3-5375a806ae94\") " pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-njv9r" Feb 17 16:20:55 crc kubenswrapper[4874]: I0217 16:20:55.479565 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb7619d6-0f36-44aa-82f3-5375a806ae94-metrics-certs\") pod 
\"openstack-operator-controller-manager-66554dbdcf-njv9r\" (UID: \"bb7619d6-0f36-44aa-82f3-5375a806ae94\") " pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-njv9r" Feb 17 16:20:55 crc kubenswrapper[4874]: E0217 16:20:55.479718 4874 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 16:20:55 crc kubenswrapper[4874]: E0217 16:20:55.479775 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb7619d6-0f36-44aa-82f3-5375a806ae94-webhook-certs podName:bb7619d6-0f36-44aa-82f3-5375a806ae94 nodeName:}" failed. No retries permitted until 2026-02-17 16:20:57.479755402 +0000 UTC m=+1067.774143963 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bb7619d6-0f36-44aa-82f3-5375a806ae94-webhook-certs") pod "openstack-operator-controller-manager-66554dbdcf-njv9r" (UID: "bb7619d6-0f36-44aa-82f3-5375a806ae94") : secret "webhook-server-cert" not found Feb 17 16:20:55 crc kubenswrapper[4874]: E0217 16:20:55.480793 4874 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 16:20:55 crc kubenswrapper[4874]: E0217 16:20:55.480966 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb7619d6-0f36-44aa-82f3-5375a806ae94-metrics-certs podName:bb7619d6-0f36-44aa-82f3-5375a806ae94 nodeName:}" failed. No retries permitted until 2026-02-17 16:20:57.480948502 +0000 UTC m=+1067.775337063 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bb7619d6-0f36-44aa-82f3-5375a806ae94-metrics-certs") pod "openstack-operator-controller-manager-66554dbdcf-njv9r" (UID: "bb7619d6-0f36-44aa-82f3-5375a806ae94") : secret "metrics-server-cert" not found Feb 17 16:20:55 crc kubenswrapper[4874]: I0217 16:20:55.511345 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-f5m2c"] Feb 17 16:20:55 crc kubenswrapper[4874]: W0217 16:20:55.525208 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6873354d_473a_4bf1_b8d3_f728e268bd36.slice/crio-2efc9dc10d1a2420b71ebb53f04cb844bd6db1da8b452698eba6d69ffb539660 WatchSource:0}: Error finding container 2efc9dc10d1a2420b71ebb53f04cb844bd6db1da8b452698eba6d69ffb539660: Status 404 returned error can't find the container with id 2efc9dc10d1a2420b71ebb53f04cb844bd6db1da8b452698eba6d69ffb539660 Feb 17 16:20:55 crc kubenswrapper[4874]: I0217 16:20:55.539137 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-qtzmv"] Feb 17 16:20:55 crc kubenswrapper[4874]: I0217 16:20:55.544852 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-qhh9h"] Feb 17 16:20:55 crc kubenswrapper[4874]: I0217 16:20:55.592137 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-jz5hv"] Feb 17 16:20:55 crc kubenswrapper[4874]: I0217 16:20:55.594132 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-tzsfx"] Feb 17 16:20:55 crc kubenswrapper[4874]: I0217 16:20:55.611302 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-jgrrk"] Feb 17 16:20:55 crc kubenswrapper[4874]: I0217 16:20:55.615630 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-75swn"] Feb 17 16:20:55 crc kubenswrapper[4874]: I0217 16:20:55.673045 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5d7c6cd576-cm8t8"] Feb 17 16:20:55 crc kubenswrapper[4874]: I0217 16:20:55.711965 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-hk542"] Feb 17 16:20:55 crc kubenswrapper[4874]: W0217 16:20:55.726044 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73bebada_8e5b_4539_b609_2b64e42fdc35.slice/crio-cf72a95c31a3d4999923ba7b70016b315815baa3333bdaaf5546a7a2e41650db WatchSource:0}: Error finding container cf72a95c31a3d4999923ba7b70016b315815baa3333bdaaf5546a7a2e41650db: Status 404 returned error can't find the container with id cf72a95c31a3d4999923ba7b70016b315815baa3333bdaaf5546a7a2e41650db Feb 17 16:20:55 crc kubenswrapper[4874]: I0217 16:20:55.751803 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-wq4gk"] Feb 17 16:20:55 crc kubenswrapper[4874]: I0217 16:20:55.774752 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-dbzhs"] Feb 17 16:20:55 crc kubenswrapper[4874]: I0217 16:20:55.805511 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-m7xvs"] Feb 17 16:20:55 crc kubenswrapper[4874]: I0217 16:20:55.810161 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxglc"] 
Feb 17 16:20:55 crc kubenswrapper[4874]: E0217 16:20:55.820475 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lw5tc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-wq4gk_openstack-operators(bd668570-bbe9-4494-a20d-fd49f91dc656): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 16:20:55 crc kubenswrapper[4874]: E0217 16:20:55.821690 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7866795846-wq4gk" podUID="bd668570-bbe9-4494-a20d-fd49f91dc656" Feb 17 16:20:55 crc kubenswrapper[4874]: E0217 16:20:55.824696 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m2n6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-69f8888797-75swn_openstack-operators(f9447f8b-df93-499d-87cd-4ccb1894c291): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 16:20:55 crc kubenswrapper[4874]: E0217 16:20:55.826240 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-75swn" podUID="f9447f8b-df93-499d-87cd-4ccb1894c291" Feb 17 16:20:55 crc kubenswrapper[4874]: E0217 16:20:55.828580 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4nkts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-m7xvs_openstack-operators(005d51f3-7446-454e-81ae-3cc46edc3aec): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 16:20:55 crc kubenswrapper[4874]: E0217 16:20:55.829737 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-m7xvs" podUID="005d51f3-7446-454e-81ae-3cc46edc3aec" Feb 17 16:20:55 crc kubenswrapper[4874]: E0217 16:20:55.862640 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2qrqh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-vxglc_openstack-operators(91060dec-59cf-4cec-90e3-e14e10456304): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 16:20:55 crc kubenswrapper[4874]: E0217 16:20:55.864588 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxglc" podUID="91060dec-59cf-4cec-90e3-e14e10456304" Feb 17 16:20:55 crc kubenswrapper[4874]: I0217 16:20:55.957811 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-m7xvs" event={"ID":"005d51f3-7446-454e-81ae-3cc46edc3aec","Type":"ContainerStarted","Data":"9542e77419971a7068ca664c978f31b427cfdc25fb3183e41ba784bcb0ac51f5"} Feb 17 16:20:55 crc kubenswrapper[4874]: E0217 16:20:55.975824 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-m7xvs" podUID="005d51f3-7446-454e-81ae-3cc46edc3aec" Feb 17 16:20:55 crc kubenswrapper[4874]: I0217 16:20:55.979463 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-75swn" event={"ID":"f9447f8b-df93-499d-87cd-4ccb1894c291","Type":"ContainerStarted","Data":"7f471be522b4ea70daba3e3d36948ec6a32c30b0d097fe3e0064b8532307fde7"} Feb 17 16:20:55 crc kubenswrapper[4874]: E0217 16:20:55.981379 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-75swn" 
podUID="f9447f8b-df93-499d-87cd-4ccb1894c291" Feb 17 16:20:55 crc kubenswrapper[4874]: I0217 16:20:55.984304 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5d7c6cd576-cm8t8" event={"ID":"e9edd0a5-e9e7-4604-83e9-466212623115","Type":"ContainerStarted","Data":"a9b521348722f4c930f94c5122ed1c8b366d591a9abec55fcb7e6444fdfd6534"} Feb 17 16:20:55 crc kubenswrapper[4874]: I0217 16:20:55.988070 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-dbzhs" event={"ID":"25da1eba-df74-4c90-90be-bb79065c4557","Type":"ContainerStarted","Data":"96f94fd66b805a5344e471e2b0fc0a48838c4a060afead050a16d055db0767a0"} Feb 17 16:20:55 crc kubenswrapper[4874]: I0217 16:20:55.991499 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-f5m2c" event={"ID":"3603fb35-facf-4a38-8fa1-ce1efa386258","Type":"ContainerStarted","Data":"ed2828d070963ea663a12deeb24641252b7dd481c53f82eacb6c7c66cf685845"} Feb 17 16:20:55 crc kubenswrapper[4874]: I0217 16:20:55.992972 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qtzmv" event={"ID":"6873354d-473a-4bf1-b8d3-f728e268bd36","Type":"ContainerStarted","Data":"2efc9dc10d1a2420b71ebb53f04cb844bd6db1da8b452698eba6d69ffb539660"} Feb 17 16:20:55 crc kubenswrapper[4874]: I0217 16:20:55.999025 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxglc" event={"ID":"91060dec-59cf-4cec-90e3-e14e10456304","Type":"ContainerStarted","Data":"34bf11b7804810166ea0ebd1497f6e30e2291a8bc038d34c4e76ef9a25b516d5"} Feb 17 16:20:56 crc kubenswrapper[4874]: E0217 16:20:56.000842 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxglc" podUID="91060dec-59cf-4cec-90e3-e14e10456304" Feb 17 16:20:56 crc kubenswrapper[4874]: I0217 16:20:56.001093 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-l28fh" event={"ID":"95fa7fde-cb3d-4b2d-ac02-f58440c35c7b","Type":"ContainerStarted","Data":"e0f1e8aa21e24c96de7584ffc2efca5857a105a8bce022405cb3378284de54d9"} Feb 17 16:20:56 crc kubenswrapper[4874]: I0217 16:20:56.007134 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-lv9qv" event={"ID":"b87e1102-63f8-4f2f-9376-dab7745fb4b2","Type":"ContainerStarted","Data":"5afd68dab82d8f71783d6eb01b555a7c60a566f4e46041f18681dc3aff3b526b"} Feb 17 16:20:56 crc kubenswrapper[4874]: I0217 16:20:56.008339 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-jz5hv" event={"ID":"e8e4298f-581a-4fdf-8347-088b955fb6ba","Type":"ContainerStarted","Data":"663a5caa8def00f2203bae6044d875a5d29b61f87e6bc05bfc9873de7739e031"} Feb 17 16:20:56 crc kubenswrapper[4874]: I0217 16:20:56.014034 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-hk542" event={"ID":"73bebada-8e5b-4539-b609-2b64e42fdc35","Type":"ContainerStarted","Data":"cf72a95c31a3d4999923ba7b70016b315815baa3333bdaaf5546a7a2e41650db"} Feb 17 16:20:56 crc kubenswrapper[4874]: I0217 16:20:56.024379 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-qhh9h" 
event={"ID":"2ea7f298-dafe-4448-8ffe-a2194f127c12","Type":"ContainerStarted","Data":"bd0256b2736a4bff6c4ac1769aceb80e7016917318c576940b45c047f3efbe5e"} Feb 17 16:20:56 crc kubenswrapper[4874]: I0217 16:20:56.056321 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-wq4gk" event={"ID":"bd668570-bbe9-4494-a20d-fd49f91dc656","Type":"ContainerStarted","Data":"1a32f676d04a2837139ba993869e6377ff0a1359f1f40c8c53472e63b863a13e"} Feb 17 16:20:56 crc kubenswrapper[4874]: E0217 16:20:56.061275 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-wq4gk" podUID="bd668570-bbe9-4494-a20d-fd49f91dc656" Feb 17 16:20:56 crc kubenswrapper[4874]: I0217 16:20:56.062270 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tzsfx" event={"ID":"62899d98-d8f9-4669-90f1-d4e9e02280aa","Type":"ContainerStarted","Data":"a87220663890e29df534bc9c3a0b78e1817a3d69ed86c42543b6e0638ccf9c18"} Feb 17 16:20:56 crc kubenswrapper[4874]: I0217 16:20:56.063387 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jgrrk" event={"ID":"9fdb9bed-5948-4441-a15b-34df4351b88c","Type":"ContainerStarted","Data":"9b03dab017a3b42d055f6145c86ce2b04c87de39fa9b2c155f2a00e4af21c580"} Feb 17 16:20:56 crc kubenswrapper[4874]: I0217 16:20:56.720057 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1127a6be-ce6c-498b-bd8c-7a131b575321-cert\") pod \"infra-operator-controller-manager-79d975b745-qldzr\" (UID: 
\"1127a6be-ce6c-498b-bd8c-7a131b575321\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qldzr" Feb 17 16:20:56 crc kubenswrapper[4874]: E0217 16:20:56.720428 4874 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 16:20:56 crc kubenswrapper[4874]: E0217 16:20:56.720479 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1127a6be-ce6c-498b-bd8c-7a131b575321-cert podName:1127a6be-ce6c-498b-bd8c-7a131b575321 nodeName:}" failed. No retries permitted until 2026-02-17 16:21:00.720464043 +0000 UTC m=+1071.014852604 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1127a6be-ce6c-498b-bd8c-7a131b575321-cert") pod "infra-operator-controller-manager-79d975b745-qldzr" (UID: "1127a6be-ce6c-498b-bd8c-7a131b575321") : secret "infra-operator-webhook-server-cert" not found Feb 17 16:20:57 crc kubenswrapper[4874]: E0217 16:20:57.080903 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-75swn" podUID="f9447f8b-df93-499d-87cd-4ccb1894c291" Feb 17 16:20:57 crc kubenswrapper[4874]: E0217 16:20:57.081948 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxglc" podUID="91060dec-59cf-4cec-90e3-e14e10456304" Feb 17 16:20:57 crc kubenswrapper[4874]: E0217 
16:20:57.082002 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-wq4gk" podUID="bd668570-bbe9-4494-a20d-fd49f91dc656" Feb 17 16:20:57 crc kubenswrapper[4874]: E0217 16:20:57.083736 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-m7xvs" podUID="005d51f3-7446-454e-81ae-3cc46edc3aec" Feb 17 16:20:57 crc kubenswrapper[4874]: I0217 16:20:57.232379 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/01ab2d32-b155-4460-ace9-60d38242218b-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvvz57\" (UID: \"01ab2d32-b155-4460-ace9-60d38242218b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvvz57" Feb 17 16:20:57 crc kubenswrapper[4874]: E0217 16:20:57.232534 4874 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:20:57 crc kubenswrapper[4874]: E0217 16:20:57.232607 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01ab2d32-b155-4460-ace9-60d38242218b-cert podName:01ab2d32-b155-4460-ace9-60d38242218b nodeName:}" failed. No retries permitted until 2026-02-17 16:21:01.232590136 +0000 UTC m=+1071.526978697 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/01ab2d32-b155-4460-ace9-60d38242218b-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cvvz57" (UID: "01ab2d32-b155-4460-ace9-60d38242218b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:20:57 crc kubenswrapper[4874]: I0217 16:20:57.535980 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bb7619d6-0f36-44aa-82f3-5375a806ae94-webhook-certs\") pod \"openstack-operator-controller-manager-66554dbdcf-njv9r\" (UID: \"bb7619d6-0f36-44aa-82f3-5375a806ae94\") " pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-njv9r" Feb 17 16:20:57 crc kubenswrapper[4874]: E0217 16:20:57.536141 4874 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 16:20:57 crc kubenswrapper[4874]: I0217 16:20:57.536411 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb7619d6-0f36-44aa-82f3-5375a806ae94-metrics-certs\") pod \"openstack-operator-controller-manager-66554dbdcf-njv9r\" (UID: \"bb7619d6-0f36-44aa-82f3-5375a806ae94\") " pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-njv9r" Feb 17 16:20:57 crc kubenswrapper[4874]: E0217 16:20:57.536445 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb7619d6-0f36-44aa-82f3-5375a806ae94-webhook-certs podName:bb7619d6-0f36-44aa-82f3-5375a806ae94 nodeName:}" failed. No retries permitted until 2026-02-17 16:21:01.536427451 +0000 UTC m=+1071.830816012 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bb7619d6-0f36-44aa-82f3-5375a806ae94-webhook-certs") pod "openstack-operator-controller-manager-66554dbdcf-njv9r" (UID: "bb7619d6-0f36-44aa-82f3-5375a806ae94") : secret "webhook-server-cert" not found Feb 17 16:20:57 crc kubenswrapper[4874]: E0217 16:20:57.536574 4874 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 16:20:57 crc kubenswrapper[4874]: E0217 16:20:57.536626 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb7619d6-0f36-44aa-82f3-5375a806ae94-metrics-certs podName:bb7619d6-0f36-44aa-82f3-5375a806ae94 nodeName:}" failed. No retries permitted until 2026-02-17 16:21:01.536610576 +0000 UTC m=+1071.830999137 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bb7619d6-0f36-44aa-82f3-5375a806ae94-metrics-certs") pod "openstack-operator-controller-manager-66554dbdcf-njv9r" (UID: "bb7619d6-0f36-44aa-82f3-5375a806ae94") : secret "metrics-server-cert" not found Feb 17 16:21:00 crc kubenswrapper[4874]: I0217 16:21:00.808071 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1127a6be-ce6c-498b-bd8c-7a131b575321-cert\") pod \"infra-operator-controller-manager-79d975b745-qldzr\" (UID: \"1127a6be-ce6c-498b-bd8c-7a131b575321\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qldzr" Feb 17 16:21:00 crc kubenswrapper[4874]: E0217 16:21:00.808338 4874 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 16:21:00 crc kubenswrapper[4874]: E0217 16:21:00.809739 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1127a6be-ce6c-498b-bd8c-7a131b575321-cert 
podName:1127a6be-ce6c-498b-bd8c-7a131b575321 nodeName:}" failed. No retries permitted until 2026-02-17 16:21:08.809720452 +0000 UTC m=+1079.104109013 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1127a6be-ce6c-498b-bd8c-7a131b575321-cert") pod "infra-operator-controller-manager-79d975b745-qldzr" (UID: "1127a6be-ce6c-498b-bd8c-7a131b575321") : secret "infra-operator-webhook-server-cert" not found Feb 17 16:21:01 crc kubenswrapper[4874]: I0217 16:21:01.317260 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/01ab2d32-b155-4460-ace9-60d38242218b-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvvz57\" (UID: \"01ab2d32-b155-4460-ace9-60d38242218b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvvz57" Feb 17 16:21:01 crc kubenswrapper[4874]: E0217 16:21:01.317431 4874 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:21:01 crc kubenswrapper[4874]: E0217 16:21:01.317510 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01ab2d32-b155-4460-ace9-60d38242218b-cert podName:01ab2d32-b155-4460-ace9-60d38242218b nodeName:}" failed. No retries permitted until 2026-02-17 16:21:09.317489448 +0000 UTC m=+1079.611878009 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/01ab2d32-b155-4460-ace9-60d38242218b-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cvvz57" (UID: "01ab2d32-b155-4460-ace9-60d38242218b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:21:01 crc kubenswrapper[4874]: I0217 16:21:01.622634 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb7619d6-0f36-44aa-82f3-5375a806ae94-metrics-certs\") pod \"openstack-operator-controller-manager-66554dbdcf-njv9r\" (UID: \"bb7619d6-0f36-44aa-82f3-5375a806ae94\") " pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-njv9r" Feb 17 16:21:01 crc kubenswrapper[4874]: E0217 16:21:01.622773 4874 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 16:21:01 crc kubenswrapper[4874]: I0217 16:21:01.622829 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bb7619d6-0f36-44aa-82f3-5375a806ae94-webhook-certs\") pod \"openstack-operator-controller-manager-66554dbdcf-njv9r\" (UID: \"bb7619d6-0f36-44aa-82f3-5375a806ae94\") " pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-njv9r" Feb 17 16:21:01 crc kubenswrapper[4874]: E0217 16:21:01.622857 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb7619d6-0f36-44aa-82f3-5375a806ae94-metrics-certs podName:bb7619d6-0f36-44aa-82f3-5375a806ae94 nodeName:}" failed. No retries permitted until 2026-02-17 16:21:09.62283656 +0000 UTC m=+1079.917225221 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bb7619d6-0f36-44aa-82f3-5375a806ae94-metrics-certs") pod "openstack-operator-controller-manager-66554dbdcf-njv9r" (UID: "bb7619d6-0f36-44aa-82f3-5375a806ae94") : secret "metrics-server-cert" not found Feb 17 16:21:01 crc kubenswrapper[4874]: E0217 16:21:01.623407 4874 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 16:21:01 crc kubenswrapper[4874]: E0217 16:21:01.623486 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb7619d6-0f36-44aa-82f3-5375a806ae94-webhook-certs podName:bb7619d6-0f36-44aa-82f3-5375a806ae94 nodeName:}" failed. No retries permitted until 2026-02-17 16:21:09.623468796 +0000 UTC m=+1079.917857367 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bb7619d6-0f36-44aa-82f3-5375a806ae94-webhook-certs") pod "openstack-operator-controller-manager-66554dbdcf-njv9r" (UID: "bb7619d6-0f36-44aa-82f3-5375a806ae94") : secret "webhook-server-cert" not found Feb 17 16:21:08 crc kubenswrapper[4874]: I0217 16:21:08.862876 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1127a6be-ce6c-498b-bd8c-7a131b575321-cert\") pod \"infra-operator-controller-manager-79d975b745-qldzr\" (UID: \"1127a6be-ce6c-498b-bd8c-7a131b575321\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qldzr" Feb 17 16:21:08 crc kubenswrapper[4874]: I0217 16:21:08.872971 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1127a6be-ce6c-498b-bd8c-7a131b575321-cert\") pod \"infra-operator-controller-manager-79d975b745-qldzr\" (UID: \"1127a6be-ce6c-498b-bd8c-7a131b575321\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-qldzr" Feb 17 16:21:08 crc 
kubenswrapper[4874]: E0217 16:21:08.993487 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04" Feb 17 16:21:08 crc kubenswrapper[4874]: E0217 16:21:08.994004 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b88ll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-hk542_openstack-operators(73bebada-8e5b-4539-b609-2b64e42fdc35): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:21:08 crc kubenswrapper[4874]: E0217 16:21:08.995215 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-hk542" podUID="73bebada-8e5b-4539-b609-2b64e42fdc35" Feb 17 16:21:08 crc kubenswrapper[4874]: I0217 16:21:08.999571 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-qldzr" Feb 17 16:21:09 crc kubenswrapper[4874]: E0217 16:21:09.199659 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-hk542" podUID="73bebada-8e5b-4539-b609-2b64e42fdc35" Feb 17 16:21:09 crc kubenswrapper[4874]: I0217 16:21:09.372372 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/01ab2d32-b155-4460-ace9-60d38242218b-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvvz57\" (UID: \"01ab2d32-b155-4460-ace9-60d38242218b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvvz57" Feb 17 16:21:09 crc kubenswrapper[4874]: I0217 16:21:09.377228 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/01ab2d32-b155-4460-ace9-60d38242218b-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvvz57\" (UID: \"01ab2d32-b155-4460-ace9-60d38242218b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvvz57" Feb 17 16:21:09 crc kubenswrapper[4874]: I0217 16:21:09.403469 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvvz57" Feb 17 16:21:09 crc kubenswrapper[4874]: I0217 16:21:09.676616 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb7619d6-0f36-44aa-82f3-5375a806ae94-metrics-certs\") pod \"openstack-operator-controller-manager-66554dbdcf-njv9r\" (UID: \"bb7619d6-0f36-44aa-82f3-5375a806ae94\") " pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-njv9r" Feb 17 16:21:09 crc kubenswrapper[4874]: I0217 16:21:09.676756 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bb7619d6-0f36-44aa-82f3-5375a806ae94-webhook-certs\") pod \"openstack-operator-controller-manager-66554dbdcf-njv9r\" (UID: \"bb7619d6-0f36-44aa-82f3-5375a806ae94\") " pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-njv9r" Feb 17 16:21:09 crc kubenswrapper[4874]: I0217 16:21:09.681652 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb7619d6-0f36-44aa-82f3-5375a806ae94-metrics-certs\") pod \"openstack-operator-controller-manager-66554dbdcf-njv9r\" (UID: \"bb7619d6-0f36-44aa-82f3-5375a806ae94\") " pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-njv9r" Feb 17 16:21:09 crc kubenswrapper[4874]: I0217 16:21:09.682070 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bb7619d6-0f36-44aa-82f3-5375a806ae94-webhook-certs\") pod \"openstack-operator-controller-manager-66554dbdcf-njv9r\" (UID: \"bb7619d6-0f36-44aa-82f3-5375a806ae94\") " pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-njv9r" Feb 17 16:21:09 crc kubenswrapper[4874]: I0217 16:21:09.871953 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-njv9r" Feb 17 16:21:10 crc kubenswrapper[4874]: E0217 16:21:10.364316 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c" Feb 17 16:21:10 crc kubenswrapper[4874]: E0217 16:21:10.364526 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mm2v5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-54f6768c69-tzsfx_openstack-operators(62899d98-d8f9-4669-90f1-d4e9e02280aa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:21:10 crc kubenswrapper[4874]: E0217 16:21:10.365830 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tzsfx" podUID="62899d98-d8f9-4669-90f1-d4e9e02280aa" Feb 17 16:21:11 crc kubenswrapper[4874]: E0217 16:21:11.130842 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df" Feb 17 16:21:11 crc kubenswrapper[4874]: E0217 16:21:11.131325 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m9d48,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-77987464f4-w7lcj_openstack-operators(aa81f594-f3c2-43d6-ac9b-6a51e36e8d99): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:21:11 crc kubenswrapper[4874]: E0217 16:21:11.132502 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-77987464f4-w7lcj" podUID="aa81f594-f3c2-43d6-ac9b-6a51e36e8d99" Feb 17 16:21:11 crc kubenswrapper[4874]: E0217 16:21:11.215273 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tzsfx" podUID="62899d98-d8f9-4669-90f1-d4e9e02280aa" Feb 17 16:21:11 crc kubenswrapper[4874]: E0217 16:21:11.217060 4874 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df\\\"\"" pod="openstack-operators/glance-operator-controller-manager-77987464f4-w7lcj" podUID="aa81f594-f3c2-43d6-ac9b-6a51e36e8d99" Feb 17 16:21:11 crc kubenswrapper[4874]: E0217 16:21:11.859832 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867" Feb 17 16:21:11 crc kubenswrapper[4874]: E0217 16:21:11.860149 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2v9bx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-554564d7fc-lv9qv_openstack-operators(b87e1102-63f8-4f2f-9376-dab7745fb4b2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:21:11 crc kubenswrapper[4874]: E0217 16:21:11.861681 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-lv9qv" podUID="b87e1102-63f8-4f2f-9376-dab7745fb4b2" Feb 17 16:21:12 crc kubenswrapper[4874]: E0217 16:21:12.229403 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-lv9qv" podUID="b87e1102-63f8-4f2f-9376-dab7745fb4b2" Feb 17 16:21:12 crc kubenswrapper[4874]: E0217 16:21:12.500671 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759" Feb 17 16:21:12 crc kubenswrapper[4874]: E0217 16:21:12.500864 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sjdwf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-d44cf6b75-jz5hv_openstack-operators(e8e4298f-581a-4fdf-8347-088b955fb6ba): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:21:12 crc kubenswrapper[4874]: E0217 16:21:12.502233 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-jz5hv" podUID="e8e4298f-581a-4fdf-8347-088b955fb6ba" Feb 17 16:21:13 crc kubenswrapper[4874]: E0217 16:21:13.236870 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-jz5hv" podUID="e8e4298f-581a-4fdf-8347-088b955fb6ba" Feb 17 16:21:15 crc kubenswrapper[4874]: E0217 16:21:15.260246 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" Feb 17 16:21:15 crc kubenswrapper[4874]: E0217 16:21:15.260725 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9ndm2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6994f66f48-dbzhs_openstack-operators(25da1eba-df74-4c90-90be-bb79065c4557): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:21:15 crc kubenswrapper[4874]: E0217 16:21:15.261967 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-dbzhs" podUID="25da1eba-df74-4c90-90be-bb79065c4557" Feb 17 16:21:16 crc kubenswrapper[4874]: E0217 16:21:16.258953 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-dbzhs" podUID="25da1eba-df74-4c90-90be-bb79065c4557" Feb 17 16:21:17 crc kubenswrapper[4874]: E0217 16:21:17.644367 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2" Feb 17 16:21:17 crc kubenswrapper[4874]: E0217 16:21:17.646171 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5h99k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-69f49c598c-qtzmv_openstack-operators(6873354d-473a-4bf1-b8d3-f728e268bd36): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:21:17 crc kubenswrapper[4874]: E0217 16:21:17.647548 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qtzmv" podUID="6873354d-473a-4bf1-b8d3-f728e268bd36" Feb 17 16:21:18 crc kubenswrapper[4874]: E0217 16:21:18.191900 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd" Feb 17 16:21:18 crc kubenswrapper[4874]: E0217 16:21:18.192417 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-64d2n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-qhh9h_openstack-operators(2ea7f298-dafe-4448-8ffe-a2194f127c12): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:21:18 crc kubenswrapper[4874]: E0217 16:21:18.193576 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-qhh9h" podUID="2ea7f298-dafe-4448-8ffe-a2194f127c12" Feb 17 16:21:18 crc kubenswrapper[4874]: E0217 16:21:18.290190 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-qhh9h" podUID="2ea7f298-dafe-4448-8ffe-a2194f127c12" Feb 17 16:21:18 crc kubenswrapper[4874]: E0217 16:21:18.290195 4874 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2\\\"\"" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qtzmv" podUID="6873354d-473a-4bf1-b8d3-f728e268bd36" Feb 17 16:21:20 crc kubenswrapper[4874]: E0217 16:21:20.420622 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 17 16:21:20 crc kubenswrapper[4874]: E0217 16:21:20.420846 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6mjzp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-l28fh_openstack-operators(95fa7fde-cb3d-4b2d-ac02-f58440c35c7b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:21:20 crc kubenswrapper[4874]: E0217 16:21:20.422271 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-l28fh" podUID="95fa7fde-cb3d-4b2d-ac02-f58440c35c7b" Feb 17 16:21:21 crc kubenswrapper[4874]: E0217 16:21:21.320212 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-l28fh" podUID="95fa7fde-cb3d-4b2d-ac02-f58440c35c7b" Feb 17 16:21:21 crc kubenswrapper[4874]: E0217 16:21:21.698494 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.18:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1" Feb 17 16:21:21 crc kubenswrapper[4874]: E0217 16:21:21.698570 4874 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.18:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1" Feb 17 16:21:21 crc kubenswrapper[4874]: E0217 16:21:21.698812 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.18:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ng9tn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5d7c6cd576-cm8t8_openstack-operators(e9edd0a5-e9e7-4604-83e9-466212623115): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:21:21 crc kubenswrapper[4874]: E0217 16:21:21.700846 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/telemetry-operator-controller-manager-5d7c6cd576-cm8t8" podUID="e9edd0a5-e9e7-4604-83e9-466212623115" Feb 17 16:21:22 crc kubenswrapper[4874]: E0217 16:21:22.330157 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.18:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5d7c6cd576-cm8t8" podUID="e9edd0a5-e9e7-4604-83e9-466212623115" Feb 17 16:21:25 crc kubenswrapper[4874]: E0217 16:21:25.061206 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 17 16:21:25 crc kubenswrapper[4874]: E0217 16:21:25.062019 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pnc6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-jgrrk_openstack-operators(9fdb9bed-5948-4441-a15b-34df4351b88c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:21:25 crc kubenswrapper[4874]: E0217 16:21:25.063302 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jgrrk" podUID="9fdb9bed-5948-4441-a15b-34df4351b88c" Feb 17 16:21:25 crc kubenswrapper[4874]: E0217 16:21:25.354640 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jgrrk" podUID="9fdb9bed-5948-4441-a15b-34df4351b88c" Feb 17 16:21:27 crc kubenswrapper[4874]: E0217 16:21:27.749235 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 17 16:21:27 crc kubenswrapper[4874]: E0217 16:21:27.749629 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m 
DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2qrqh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-vxglc_openstack-operators(91060dec-59cf-4cec-90e3-e14e10456304): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:21:27 crc kubenswrapper[4874]: E0217 16:21:27.751240 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxglc" podUID="91060dec-59cf-4cec-90e3-e14e10456304" Feb 17 16:21:28 crc kubenswrapper[4874]: I0217 16:21:28.262805 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-qldzr"] Feb 17 16:21:28 crc kubenswrapper[4874]: I0217 16:21:28.374765 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-66554dbdcf-njv9r"] Feb 17 16:21:28 crc kubenswrapper[4874]: I0217 16:21:28.391828 4874 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xgkkx" event={"ID":"db6537c6-cc88-4848-a428-ad573290cc02","Type":"ContainerStarted","Data":"17e76cddd4699bdcc306770a34b8a62c8ad09b05c99de2df09990a1c86c13053"} Feb 17 16:21:28 crc kubenswrapper[4874]: I0217 16:21:28.391992 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xgkkx" Feb 17 16:21:28 crc kubenswrapper[4874]: I0217 16:21:28.393089 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvvz57"] Feb 17 16:21:28 crc kubenswrapper[4874]: I0217 16:21:28.394020 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-gxjgl" event={"ID":"c4c6b874-8781-4030-a651-54feaeed2634","Type":"ContainerStarted","Data":"bbcfafbcedda945257a9950bcd01d1414655bf47eec82697ed88e87109e0f250"} Feb 17 16:21:28 crc kubenswrapper[4874]: I0217 16:21:28.394745 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-gxjgl" Feb 17 16:21:28 crc kubenswrapper[4874]: I0217 16:21:28.396473 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-qldzr" event={"ID":"1127a6be-ce6c-498b-bd8c-7a131b575321","Type":"ContainerStarted","Data":"3126175ebf80e86378034568a5469fd632b4473f9243244396f985050e37fba4"} Feb 17 16:21:28 crc kubenswrapper[4874]: I0217 16:21:28.410531 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xgkkx" podStartSLOduration=9.366236409 podStartE2EDuration="36.410511997s" podCreationTimestamp="2026-02-17 16:20:52 +0000 UTC" firstStartedPulling="2026-02-17 16:20:54.632011313 +0000 
UTC m=+1064.926399874" lastFinishedPulling="2026-02-17 16:21:21.676286881 +0000 UTC m=+1091.970675462" observedRunningTime="2026-02-17 16:21:28.408473046 +0000 UTC m=+1098.702861637" watchObservedRunningTime="2026-02-17 16:21:28.410511997 +0000 UTC m=+1098.704900558" Feb 17 16:21:28 crc kubenswrapper[4874]: I0217 16:21:28.445729 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-gxjgl" podStartSLOduration=9.277693717 podStartE2EDuration="36.445714682s" podCreationTimestamp="2026-02-17 16:20:52 +0000 UTC" firstStartedPulling="2026-02-17 16:20:54.430128063 +0000 UTC m=+1064.724516624" lastFinishedPulling="2026-02-17 16:21:21.598148998 +0000 UTC m=+1091.892537589" observedRunningTime="2026-02-17 16:21:28.442680066 +0000 UTC m=+1098.737068637" watchObservedRunningTime="2026-02-17 16:21:28.445714682 +0000 UTC m=+1098.740103243" Feb 17 16:21:29 crc kubenswrapper[4874]: I0217 16:21:29.408472 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-f5m2c" event={"ID":"3603fb35-facf-4a38-8fa1-ce1efa386258","Type":"ContainerStarted","Data":"3a7341300ac5333f15a1f64996ff7a1e754e3476910721302a37c832da0cd30e"} Feb 17 16:21:29 crc kubenswrapper[4874]: I0217 16:21:29.408944 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-f5m2c" Feb 17 16:21:29 crc kubenswrapper[4874]: I0217 16:21:29.410265 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-jz5hv" event={"ID":"e8e4298f-581a-4fdf-8347-088b955fb6ba","Type":"ContainerStarted","Data":"43ffe14dc9d23d83acfa383a96ac81a7710bf744b3d4bfe2f1828f8445d0e491"} Feb 17 16:21:29 crc kubenswrapper[4874]: I0217 16:21:29.410535 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-jz5hv" Feb 17 16:21:29 crc kubenswrapper[4874]: I0217 16:21:29.416710 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tzsfx" event={"ID":"62899d98-d8f9-4669-90f1-d4e9e02280aa","Type":"ContainerStarted","Data":"bc4e3498dbee8d4c867ebaa3468af2547180c76395a774bd8dea6aa055716e3b"} Feb 17 16:21:29 crc kubenswrapper[4874]: I0217 16:21:29.416931 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tzsfx" Feb 17 16:21:29 crc kubenswrapper[4874]: I0217 16:21:29.420653 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-njv9r" event={"ID":"bb7619d6-0f36-44aa-82f3-5375a806ae94","Type":"ContainerStarted","Data":"7f88aa8d0eae839b06b0610db7527148885287cc6b799b7694e1a7753a4d8f7f"} Feb 17 16:21:29 crc kubenswrapper[4874]: I0217 16:21:29.420695 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-njv9r" event={"ID":"bb7619d6-0f36-44aa-82f3-5375a806ae94","Type":"ContainerStarted","Data":"38f6b0ec110cdd6fc189317d3484800c8e6ccd0ba2eea4515539cbf40c824cec"} Feb 17 16:21:29 crc kubenswrapper[4874]: I0217 16:21:29.420768 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-njv9r" Feb 17 16:21:29 crc kubenswrapper[4874]: I0217 16:21:29.422391 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvvz57" event={"ID":"01ab2d32-b155-4460-ace9-60d38242218b","Type":"ContainerStarted","Data":"699bc69874b2db8caaa70a252f538ae3a1e4ea9ee6da3de548566d3bc5648a30"} Feb 17 16:21:29 crc kubenswrapper[4874]: I0217 16:21:29.424475 4874 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-w7lcj" event={"ID":"aa81f594-f3c2-43d6-ac9b-6a51e36e8d99","Type":"ContainerStarted","Data":"aa481ad26bcac16c0772425a1a68a9d38dc3b8d23ee9ec0eb747cf178abf4a6f"} Feb 17 16:21:29 crc kubenswrapper[4874]: I0217 16:21:29.424612 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-w7lcj" Feb 17 16:21:29 crc kubenswrapper[4874]: I0217 16:21:29.433033 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-f5m2c" podStartSLOduration=11.406305626 podStartE2EDuration="37.433015491s" podCreationTimestamp="2026-02-17 16:20:52 +0000 UTC" firstStartedPulling="2026-02-17 16:20:55.649514784 +0000 UTC m=+1065.943903345" lastFinishedPulling="2026-02-17 16:21:21.676224649 +0000 UTC m=+1091.970613210" observedRunningTime="2026-02-17 16:21:29.42735577 +0000 UTC m=+1099.721744331" watchObservedRunningTime="2026-02-17 16:21:29.433015491 +0000 UTC m=+1099.727404052" Feb 17 16:21:29 crc kubenswrapper[4874]: I0217 16:21:29.440400 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-lv9qv" event={"ID":"b87e1102-63f8-4f2f-9376-dab7745fb4b2","Type":"ContainerStarted","Data":"b15884af87592bf5f30ccf6100038cbf2b3bad2ec43fae9c41a307e124755da6"} Feb 17 16:21:29 crc kubenswrapper[4874]: I0217 16:21:29.440926 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-lv9qv" Feb 17 16:21:29 crc kubenswrapper[4874]: I0217 16:21:29.450434 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-wq4gk" 
event={"ID":"bd668570-bbe9-4494-a20d-fd49f91dc656","Type":"ContainerStarted","Data":"e51e380b3d2fe21b908ec52c7ff5ca8d6496734009ad9a48d7f5ca618b82cb87"} Feb 17 16:21:29 crc kubenswrapper[4874]: I0217 16:21:29.450641 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-wq4gk" Feb 17 16:21:29 crc kubenswrapper[4874]: I0217 16:21:29.453434 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-87l78" event={"ID":"fd0e6a7f-7fe4-4790-a3a8-d973386bec13","Type":"ContainerStarted","Data":"1ba04192063f9afc02c5c51dfd87fc1f2303bb62bc1bd82413a267cf89a4921b"} Feb 17 16:21:29 crc kubenswrapper[4874]: I0217 16:21:29.453823 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-87l78" Feb 17 16:21:29 crc kubenswrapper[4874]: I0217 16:21:29.455679 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-m7xvs" event={"ID":"005d51f3-7446-454e-81ae-3cc46edc3aec","Type":"ContainerStarted","Data":"bf870b65b6125935137d31b161e89b58efea9c9e306ebb17e332654b56411f93"} Feb 17 16:21:29 crc kubenswrapper[4874]: I0217 16:21:29.455885 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-m7xvs" Feb 17 16:21:29 crc kubenswrapper[4874]: I0217 16:21:29.459120 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-hk542" event={"ID":"73bebada-8e5b-4539-b609-2b64e42fdc35","Type":"ContainerStarted","Data":"520a28b4717548a6d28fd6b43335362105e23b5d2fedd01d234bd14e8d287956"} Feb 17 16:21:29 crc kubenswrapper[4874]: I0217 16:21:29.459459 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/swift-operator-controller-manager-68f46476f-hk542" Feb 17 16:21:29 crc kubenswrapper[4874]: I0217 16:21:29.466178 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-jn9cr" event={"ID":"3f567ee8-98ac-44f3-bba2-4dfd8b514ab2","Type":"ContainerStarted","Data":"7cdb3e3bf829675a6a26f4806802e7a3d53261dfa061e01d3502e83b7c9de850"} Feb 17 16:21:29 crc kubenswrapper[4874]: I0217 16:21:29.466792 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-jn9cr" Feb 17 16:21:29 crc kubenswrapper[4874]: I0217 16:21:29.474995 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-75swn" event={"ID":"f9447f8b-df93-499d-87cd-4ccb1894c291","Type":"ContainerStarted","Data":"a0580516da9f3b8a1ffa8cad54d26a11a04fabcfefebe3320fa3035a991938fd"} Feb 17 16:21:29 crc kubenswrapper[4874]: I0217 16:21:29.475410 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-75swn" Feb 17 16:21:29 crc kubenswrapper[4874]: I0217 16:21:29.482480 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tzsfx" podStartSLOduration=5.11695478 podStartE2EDuration="37.48246359s" podCreationTimestamp="2026-02-17 16:20:52 +0000 UTC" firstStartedPulling="2026-02-17 16:20:55.656834896 +0000 UTC m=+1065.951223457" lastFinishedPulling="2026-02-17 16:21:28.022343706 +0000 UTC m=+1098.316732267" observedRunningTime="2026-02-17 16:21:29.479289301 +0000 UTC m=+1099.773677862" watchObservedRunningTime="2026-02-17 16:21:29.48246359 +0000 UTC m=+1099.776852151" Feb 17 16:21:29 crc kubenswrapper[4874]: I0217 16:21:29.507431 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/glance-operator-controller-manager-77987464f4-w7lcj" podStartSLOduration=3.968960714 podStartE2EDuration="37.50741474s" podCreationTimestamp="2026-02-17 16:20:52 +0000 UTC" firstStartedPulling="2026-02-17 16:20:54.476808884 +0000 UTC m=+1064.771197445" lastFinishedPulling="2026-02-17 16:21:28.01526291 +0000 UTC m=+1098.309651471" observedRunningTime="2026-02-17 16:21:29.50215887 +0000 UTC m=+1099.796547451" watchObservedRunningTime="2026-02-17 16:21:29.50741474 +0000 UTC m=+1099.801803301" Feb 17 16:21:29 crc kubenswrapper[4874]: I0217 16:21:29.535462 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-jz5hv" podStartSLOduration=4.198982279 podStartE2EDuration="36.535446487s" podCreationTimestamp="2026-02-17 16:20:53 +0000 UTC" firstStartedPulling="2026-02-17 16:20:55.688301698 +0000 UTC m=+1065.982690259" lastFinishedPulling="2026-02-17 16:21:28.024765906 +0000 UTC m=+1098.319154467" observedRunningTime="2026-02-17 16:21:29.53110831 +0000 UTC m=+1099.825496881" watchObservedRunningTime="2026-02-17 16:21:29.535446487 +0000 UTC m=+1099.829835048" Feb 17 16:21:29 crc kubenswrapper[4874]: I0217 16:21:29.567936 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-njv9r" podStartSLOduration=36.567917765 podStartE2EDuration="36.567917765s" podCreationTimestamp="2026-02-17 16:20:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:21:29.562175472 +0000 UTC m=+1099.856564043" watchObservedRunningTime="2026-02-17 16:21:29.567917765 +0000 UTC m=+1099.862306346" Feb 17 16:21:29 crc kubenswrapper[4874]: I0217 16:21:29.604333 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-m7xvs" 
podStartSLOduration=4.491054212 podStartE2EDuration="36.60431622s" podCreationTimestamp="2026-02-17 16:20:53 +0000 UTC" firstStartedPulling="2026-02-17 16:20:55.828366121 +0000 UTC m=+1066.122754682" lastFinishedPulling="2026-02-17 16:21:27.941628129 +0000 UTC m=+1098.236016690" observedRunningTime="2026-02-17 16:21:29.602450464 +0000 UTC m=+1099.896839025" watchObservedRunningTime="2026-02-17 16:21:29.60431622 +0000 UTC m=+1099.898704771" Feb 17 16:21:29 crc kubenswrapper[4874]: I0217 16:21:29.628421 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-wq4gk" podStartSLOduration=4.507084781 podStartE2EDuration="36.628404749s" podCreationTimestamp="2026-02-17 16:20:53 +0000 UTC" firstStartedPulling="2026-02-17 16:20:55.820360732 +0000 UTC m=+1066.114749293" lastFinishedPulling="2026-02-17 16:21:27.9416807 +0000 UTC m=+1098.236069261" observedRunningTime="2026-02-17 16:21:29.622589964 +0000 UTC m=+1099.916978525" watchObservedRunningTime="2026-02-17 16:21:29.628404749 +0000 UTC m=+1099.922793300" Feb 17 16:21:29 crc kubenswrapper[4874]: I0217 16:21:29.645562 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-hk542" podStartSLOduration=4.38638893 podStartE2EDuration="36.645547065s" podCreationTimestamp="2026-02-17 16:20:53 +0000 UTC" firstStartedPulling="2026-02-17 16:20:55.756500164 +0000 UTC m=+1066.050888725" lastFinishedPulling="2026-02-17 16:21:28.015658299 +0000 UTC m=+1098.310046860" observedRunningTime="2026-02-17 16:21:29.63849233 +0000 UTC m=+1099.932880891" watchObservedRunningTime="2026-02-17 16:21:29.645547065 +0000 UTC m=+1099.939935626" Feb 17 16:21:29 crc kubenswrapper[4874]: I0217 16:21:29.668772 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-87l78" 
podStartSLOduration=10.591465053 podStartE2EDuration="37.668756522s" podCreationTimestamp="2026-02-17 16:20:52 +0000 UTC" firstStartedPulling="2026-02-17 16:20:54.59892807 +0000 UTC m=+1064.893316631" lastFinishedPulling="2026-02-17 16:21:21.676219539 +0000 UTC m=+1091.970608100" observedRunningTime="2026-02-17 16:21:29.662185529 +0000 UTC m=+1099.956574090" watchObservedRunningTime="2026-02-17 16:21:29.668756522 +0000 UTC m=+1099.963145083" Feb 17 16:21:29 crc kubenswrapper[4874]: I0217 16:21:29.688838 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-jn9cr" podStartSLOduration=10.420540254 podStartE2EDuration="37.688821221s" podCreationTimestamp="2026-02-17 16:20:52 +0000 UTC" firstStartedPulling="2026-02-17 16:20:54.408390373 +0000 UTC m=+1064.702778924" lastFinishedPulling="2026-02-17 16:21:21.67667131 +0000 UTC m=+1091.971059891" observedRunningTime="2026-02-17 16:21:29.682208497 +0000 UTC m=+1099.976597068" watchObservedRunningTime="2026-02-17 16:21:29.688821221 +0000 UTC m=+1099.983209782" Feb 17 16:21:29 crc kubenswrapper[4874]: I0217 16:21:29.707759 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-75swn" podStartSLOduration=5.620295226 podStartE2EDuration="37.707742032s" podCreationTimestamp="2026-02-17 16:20:52 +0000 UTC" firstStartedPulling="2026-02-17 16:20:55.824600837 +0000 UTC m=+1066.118989398" lastFinishedPulling="2026-02-17 16:21:27.912047643 +0000 UTC m=+1098.206436204" observedRunningTime="2026-02-17 16:21:29.702479841 +0000 UTC m=+1099.996868412" watchObservedRunningTime="2026-02-17 16:21:29.707742032 +0000 UTC m=+1100.002130603" Feb 17 16:21:29 crc kubenswrapper[4874]: I0217 16:21:29.734580 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-lv9qv" 
podStartSLOduration=5.191598146 podStartE2EDuration="37.734562989s" podCreationTimestamp="2026-02-17 16:20:52 +0000 UTC" firstStartedPulling="2026-02-17 16:20:55.4772529 +0000 UTC m=+1065.771641461" lastFinishedPulling="2026-02-17 16:21:28.020217743 +0000 UTC m=+1098.314606304" observedRunningTime="2026-02-17 16:21:29.728373175 +0000 UTC m=+1100.022761736" watchObservedRunningTime="2026-02-17 16:21:29.734562989 +0000 UTC m=+1100.028951550" Feb 17 16:21:31 crc kubenswrapper[4874]: I0217 16:21:31.497156 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-dbzhs" event={"ID":"25da1eba-df74-4c90-90be-bb79065c4557","Type":"ContainerStarted","Data":"aa3f6eff8aedd2704703f09e7bbb911235823b95723b3736eb5fc6403f7e8f69"} Feb 17 16:21:31 crc kubenswrapper[4874]: I0217 16:21:31.497873 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-dbzhs" Feb 17 16:21:31 crc kubenswrapper[4874]: I0217 16:21:31.499780 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qtzmv" event={"ID":"6873354d-473a-4bf1-b8d3-f728e268bd36","Type":"ContainerStarted","Data":"6f768d06ab85ce3c451ff5e1d7af2a7b0a5239f49bfc64cb5cce53576b069082"} Feb 17 16:21:31 crc kubenswrapper[4874]: I0217 16:21:31.517680 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-dbzhs" podStartSLOduration=4.399310137 podStartE2EDuration="39.517665096s" podCreationTimestamp="2026-02-17 16:20:52 +0000 UTC" firstStartedPulling="2026-02-17 16:20:55.798876727 +0000 UTC m=+1066.093265288" lastFinishedPulling="2026-02-17 16:21:30.917231696 +0000 UTC m=+1101.211620247" observedRunningTime="2026-02-17 16:21:31.515903682 +0000 UTC m=+1101.810292253" watchObservedRunningTime="2026-02-17 16:21:31.517665096 +0000 UTC 
m=+1101.812053687" Feb 17 16:21:31 crc kubenswrapper[4874]: I0217 16:21:31.533616 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qtzmv" podStartSLOduration=4.26751361 podStartE2EDuration="39.533597452s" podCreationTimestamp="2026-02-17 16:20:52 +0000 UTC" firstStartedPulling="2026-02-17 16:20:55.6501592 +0000 UTC m=+1065.944547761" lastFinishedPulling="2026-02-17 16:21:30.916243042 +0000 UTC m=+1101.210631603" observedRunningTime="2026-02-17 16:21:31.531185062 +0000 UTC m=+1101.825573623" watchObservedRunningTime="2026-02-17 16:21:31.533597452 +0000 UTC m=+1101.827986013" Feb 17 16:21:33 crc kubenswrapper[4874]: I0217 16:21:33.020296 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-jn9cr" Feb 17 16:21:33 crc kubenswrapper[4874]: I0217 16:21:33.072773 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-87l78" Feb 17 16:21:33 crc kubenswrapper[4874]: I0217 16:21:33.074951 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-w7lcj" Feb 17 16:21:33 crc kubenswrapper[4874]: I0217 16:21:33.170225 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-lv9qv" Feb 17 16:21:33 crc kubenswrapper[4874]: I0217 16:21:33.173537 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qtzmv" Feb 17 16:21:33 crc kubenswrapper[4874]: I0217 16:21:33.534394 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tzsfx" Feb 17 16:21:33 crc 
kubenswrapper[4874]: I0217 16:21:33.686257 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-f5m2c" Feb 17 16:21:33 crc kubenswrapper[4874]: I0217 16:21:33.725951 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-75swn" Feb 17 16:21:33 crc kubenswrapper[4874]: I0217 16:21:33.855678 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-jz5hv" Feb 17 16:21:33 crc kubenswrapper[4874]: I0217 16:21:33.897836 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-hk542" Feb 17 16:21:34 crc kubenswrapper[4874]: I0217 16:21:34.002827 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-wq4gk" Feb 17 16:21:34 crc kubenswrapper[4874]: I0217 16:21:34.009592 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-m7xvs" Feb 17 16:21:34 crc kubenswrapper[4874]: I0217 16:21:34.536295 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvvz57" event={"ID":"01ab2d32-b155-4460-ace9-60d38242218b","Type":"ContainerStarted","Data":"0da211efaec94919526c41bf094e776e7dc3637e9905af1901728808f3478932"} Feb 17 16:21:34 crc kubenswrapper[4874]: I0217 16:21:34.536804 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvvz57" Feb 17 16:21:34 crc kubenswrapper[4874]: I0217 16:21:34.539796 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/placement-operator-controller-manager-8497b45c89-qhh9h" event={"ID":"2ea7f298-dafe-4448-8ffe-a2194f127c12","Type":"ContainerStarted","Data":"adc91b5d35f205b0b63b39d816c6fc081472b5d837b3f36b868e77035da15d4d"} Feb 17 16:21:34 crc kubenswrapper[4874]: I0217 16:21:34.540462 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-qhh9h" Feb 17 16:21:34 crc kubenswrapper[4874]: I0217 16:21:34.543309 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5d7c6cd576-cm8t8" event={"ID":"e9edd0a5-e9e7-4604-83e9-466212623115","Type":"ContainerStarted","Data":"828c8bce63952c092691f581b7df885abd37112945423608b7c9d2222d1d8a4b"} Feb 17 16:21:34 crc kubenswrapper[4874]: I0217 16:21:34.543525 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5d7c6cd576-cm8t8" Feb 17 16:21:34 crc kubenswrapper[4874]: I0217 16:21:34.546465 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-qldzr" event={"ID":"1127a6be-ce6c-498b-bd8c-7a131b575321","Type":"ContainerStarted","Data":"716d3df7c7b1bf82081af41790cf178e9be230fd1204d48ee9921eb5d7d10393"} Feb 17 16:21:34 crc kubenswrapper[4874]: I0217 16:21:34.546983 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-qldzr" Feb 17 16:21:34 crc kubenswrapper[4874]: I0217 16:21:34.577617 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvvz57" podStartSLOduration=37.510985284 podStartE2EDuration="42.577594445s" podCreationTimestamp="2026-02-17 16:20:52 +0000 UTC" firstStartedPulling="2026-02-17 16:21:28.452579033 +0000 UTC m=+1098.746967594" 
lastFinishedPulling="2026-02-17 16:21:33.519188194 +0000 UTC m=+1103.813576755" observedRunningTime="2026-02-17 16:21:34.568744656 +0000 UTC m=+1104.863133227" watchObservedRunningTime="2026-02-17 16:21:34.577594445 +0000 UTC m=+1104.871983016" Feb 17 16:21:34 crc kubenswrapper[4874]: I0217 16:21:34.596354 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-qldzr" podStartSLOduration=37.351129738 podStartE2EDuration="42.596331269s" podCreationTimestamp="2026-02-17 16:20:52 +0000 UTC" firstStartedPulling="2026-02-17 16:21:28.315521305 +0000 UTC m=+1098.609909866" lastFinishedPulling="2026-02-17 16:21:33.560722836 +0000 UTC m=+1103.855111397" observedRunningTime="2026-02-17 16:21:34.588118116 +0000 UTC m=+1104.882506697" watchObservedRunningTime="2026-02-17 16:21:34.596331269 +0000 UTC m=+1104.890719850" Feb 17 16:21:34 crc kubenswrapper[4874]: I0217 16:21:34.608796 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5d7c6cd576-cm8t8" podStartSLOduration=3.901507346 podStartE2EDuration="41.608780538s" podCreationTimestamp="2026-02-17 16:20:53 +0000 UTC" firstStartedPulling="2026-02-17 16:20:55.81869762 +0000 UTC m=+1066.113086181" lastFinishedPulling="2026-02-17 16:21:33.525970812 +0000 UTC m=+1103.820359373" observedRunningTime="2026-02-17 16:21:34.602869291 +0000 UTC m=+1104.897257882" watchObservedRunningTime="2026-02-17 16:21:34.608780538 +0000 UTC m=+1104.903169099" Feb 17 16:21:34 crc kubenswrapper[4874]: I0217 16:21:34.624876 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-qhh9h" podStartSLOduration=3.351465528 podStartE2EDuration="41.624861496s" podCreationTimestamp="2026-02-17 16:20:53 +0000 UTC" firstStartedPulling="2026-02-17 16:20:55.656480767 +0000 UTC m=+1065.950869318" 
lastFinishedPulling="2026-02-17 16:21:33.929876725 +0000 UTC m=+1104.224265286" observedRunningTime="2026-02-17 16:21:34.619954604 +0000 UTC m=+1104.914343195" watchObservedRunningTime="2026-02-17 16:21:34.624861496 +0000 UTC m=+1104.919250057" Feb 17 16:21:38 crc kubenswrapper[4874]: I0217 16:21:38.578091 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-l28fh" event={"ID":"95fa7fde-cb3d-4b2d-ac02-f58440c35c7b","Type":"ContainerStarted","Data":"8289b5b3e88d86eed63222931acb48de568e5e01bdead2ab944128a0432efc23"} Feb 17 16:21:38 crc kubenswrapper[4874]: I0217 16:21:38.578690 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-l28fh" Feb 17 16:21:38 crc kubenswrapper[4874]: I0217 16:21:38.602437 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-l28fh" podStartSLOduration=4.47579195 podStartE2EDuration="46.602416116s" podCreationTimestamp="2026-02-17 16:20:52 +0000 UTC" firstStartedPulling="2026-02-17 16:20:55.652805485 +0000 UTC m=+1065.947194046" lastFinishedPulling="2026-02-17 16:21:37.779429641 +0000 UTC m=+1108.073818212" observedRunningTime="2026-02-17 16:21:38.59693468 +0000 UTC m=+1108.891323251" watchObservedRunningTime="2026-02-17 16:21:38.602416116 +0000 UTC m=+1108.896804677" Feb 17 16:21:39 crc kubenswrapper[4874]: I0217 16:21:39.007305 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-qldzr" Feb 17 16:21:39 crc kubenswrapper[4874]: I0217 16:21:39.410477 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvvz57" Feb 17 16:21:39 crc kubenswrapper[4874]: I0217 16:21:39.587626 4874 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jgrrk" event={"ID":"9fdb9bed-5948-4441-a15b-34df4351b88c","Type":"ContainerStarted","Data":"3e6c39c9a7b8a6f6d5be3b10d378f02a2a886c3f7b9bdd63726c464f01c2bb1c"} Feb 17 16:21:39 crc kubenswrapper[4874]: I0217 16:21:39.588718 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jgrrk" Feb 17 16:21:39 crc kubenswrapper[4874]: I0217 16:21:39.618815 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jgrrk" podStartSLOduration=3.403920747 podStartE2EDuration="46.618792748s" podCreationTimestamp="2026-02-17 16:20:53 +0000 UTC" firstStartedPulling="2026-02-17 16:20:55.714468029 +0000 UTC m=+1066.008856590" lastFinishedPulling="2026-02-17 16:21:38.92934003 +0000 UTC m=+1109.223728591" observedRunningTime="2026-02-17 16:21:39.611497568 +0000 UTC m=+1109.905886129" watchObservedRunningTime="2026-02-17 16:21:39.618792748 +0000 UTC m=+1109.913181319" Feb 17 16:21:39 crc kubenswrapper[4874]: I0217 16:21:39.881139 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-njv9r" Feb 17 16:21:41 crc kubenswrapper[4874]: E0217 16:21:41.458922 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxglc" podUID="91060dec-59cf-4cec-90e3-e14e10456304" Feb 17 16:21:42 crc kubenswrapper[4874]: I0217 16:21:42.873695 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/barbican-operator-controller-manager-868647ff47-gxjgl" Feb 17 16:21:42 crc kubenswrapper[4874]: I0217 16:21:42.925802 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xgkkx" Feb 17 16:21:43 crc kubenswrapper[4874]: I0217 16:21:43.176471 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qtzmv" Feb 17 16:21:43 crc kubenswrapper[4874]: I0217 16:21:43.486849 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-l28fh" Feb 17 16:21:43 crc kubenswrapper[4874]: I0217 16:21:43.581452 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-dbzhs" Feb 17 16:21:43 crc kubenswrapper[4874]: I0217 16:21:43.944980 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-qhh9h" Feb 17 16:21:43 crc kubenswrapper[4874]: I0217 16:21:43.980666 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5d7c6cd576-cm8t8" Feb 17 16:21:53 crc kubenswrapper[4874]: I0217 16:21:53.839680 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jgrrk" Feb 17 16:21:56 crc kubenswrapper[4874]: I0217 16:21:56.788178 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxglc" event={"ID":"91060dec-59cf-4cec-90e3-e14e10456304","Type":"ContainerStarted","Data":"bfc51dbd171bb80d73470e213284f20321e82b0b2ce2b4d3e7450545256578ee"} Feb 17 16:22:19 crc kubenswrapper[4874]: I0217 16:22:19.309309 
4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxglc" podStartSLOduration=26.261560753 podStartE2EDuration="1m26.30929057s" podCreationTimestamp="2026-02-17 16:20:53 +0000 UTC" firstStartedPulling="2026-02-17 16:20:55.862471739 +0000 UTC m=+1066.156860300" lastFinishedPulling="2026-02-17 16:21:55.910201526 +0000 UTC m=+1126.204590117" observedRunningTime="2026-02-17 16:21:56.810514935 +0000 UTC m=+1127.104903536" watchObservedRunningTime="2026-02-17 16:22:19.30929057 +0000 UTC m=+1149.603679131" Feb 17 16:22:19 crc kubenswrapper[4874]: I0217 16:22:19.314035 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-56kpq"] Feb 17 16:22:19 crc kubenswrapper[4874]: I0217 16:22:19.316153 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-56kpq" Feb 17 16:22:19 crc kubenswrapper[4874]: I0217 16:22:19.320623 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 17 16:22:19 crc kubenswrapper[4874]: I0217 16:22:19.320758 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-79ht7" Feb 17 16:22:19 crc kubenswrapper[4874]: I0217 16:22:19.320810 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 17 16:22:19 crc kubenswrapper[4874]: I0217 16:22:19.326875 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 17 16:22:19 crc kubenswrapper[4874]: I0217 16:22:19.340206 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-56kpq"] Feb 17 16:22:19 crc kubenswrapper[4874]: I0217 16:22:19.405476 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b8kg\" (UniqueName: 
\"kubernetes.io/projected/f91e0920-f0d9-4c7d-8d9e-3af9bf0f2096-kube-api-access-8b8kg\") pod \"dnsmasq-dns-675f4bcbfc-56kpq\" (UID: \"f91e0920-f0d9-4c7d-8d9e-3af9bf0f2096\") " pod="openstack/dnsmasq-dns-675f4bcbfc-56kpq" Feb 17 16:22:19 crc kubenswrapper[4874]: I0217 16:22:19.405533 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f91e0920-f0d9-4c7d-8d9e-3af9bf0f2096-config\") pod \"dnsmasq-dns-675f4bcbfc-56kpq\" (UID: \"f91e0920-f0d9-4c7d-8d9e-3af9bf0f2096\") " pod="openstack/dnsmasq-dns-675f4bcbfc-56kpq" Feb 17 16:22:19 crc kubenswrapper[4874]: I0217 16:22:19.420983 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ttkd7"] Feb 17 16:22:19 crc kubenswrapper[4874]: I0217 16:22:19.422424 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ttkd7" Feb 17 16:22:19 crc kubenswrapper[4874]: I0217 16:22:19.436664 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ttkd7"] Feb 17 16:22:19 crc kubenswrapper[4874]: I0217 16:22:19.454244 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 17 16:22:19 crc kubenswrapper[4874]: I0217 16:22:19.506875 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b8kg\" (UniqueName: \"kubernetes.io/projected/f91e0920-f0d9-4c7d-8d9e-3af9bf0f2096-kube-api-access-8b8kg\") pod \"dnsmasq-dns-675f4bcbfc-56kpq\" (UID: \"f91e0920-f0d9-4c7d-8d9e-3af9bf0f2096\") " pod="openstack/dnsmasq-dns-675f4bcbfc-56kpq" Feb 17 16:22:19 crc kubenswrapper[4874]: I0217 16:22:19.506951 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q8w9\" (UniqueName: \"kubernetes.io/projected/e9b427a5-a55c-4cf4-a887-57afebb7b570-kube-api-access-9q8w9\") pod 
\"dnsmasq-dns-78dd6ddcc-ttkd7\" (UID: \"e9b427a5-a55c-4cf4-a887-57afebb7b570\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ttkd7" Feb 17 16:22:19 crc kubenswrapper[4874]: I0217 16:22:19.506990 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f91e0920-f0d9-4c7d-8d9e-3af9bf0f2096-config\") pod \"dnsmasq-dns-675f4bcbfc-56kpq\" (UID: \"f91e0920-f0d9-4c7d-8d9e-3af9bf0f2096\") " pod="openstack/dnsmasq-dns-675f4bcbfc-56kpq" Feb 17 16:22:19 crc kubenswrapper[4874]: I0217 16:22:19.507103 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9b427a5-a55c-4cf4-a887-57afebb7b570-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-ttkd7\" (UID: \"e9b427a5-a55c-4cf4-a887-57afebb7b570\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ttkd7" Feb 17 16:22:19 crc kubenswrapper[4874]: I0217 16:22:19.507141 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9b427a5-a55c-4cf4-a887-57afebb7b570-config\") pod \"dnsmasq-dns-78dd6ddcc-ttkd7\" (UID: \"e9b427a5-a55c-4cf4-a887-57afebb7b570\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ttkd7" Feb 17 16:22:19 crc kubenswrapper[4874]: I0217 16:22:19.507802 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f91e0920-f0d9-4c7d-8d9e-3af9bf0f2096-config\") pod \"dnsmasq-dns-675f4bcbfc-56kpq\" (UID: \"f91e0920-f0d9-4c7d-8d9e-3af9bf0f2096\") " pod="openstack/dnsmasq-dns-675f4bcbfc-56kpq" Feb 17 16:22:19 crc kubenswrapper[4874]: I0217 16:22:19.548290 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b8kg\" (UniqueName: \"kubernetes.io/projected/f91e0920-f0d9-4c7d-8d9e-3af9bf0f2096-kube-api-access-8b8kg\") pod \"dnsmasq-dns-675f4bcbfc-56kpq\" (UID: \"f91e0920-f0d9-4c7d-8d9e-3af9bf0f2096\") " 
pod="openstack/dnsmasq-dns-675f4bcbfc-56kpq" Feb 17 16:22:19 crc kubenswrapper[4874]: I0217 16:22:19.608801 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q8w9\" (UniqueName: \"kubernetes.io/projected/e9b427a5-a55c-4cf4-a887-57afebb7b570-kube-api-access-9q8w9\") pod \"dnsmasq-dns-78dd6ddcc-ttkd7\" (UID: \"e9b427a5-a55c-4cf4-a887-57afebb7b570\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ttkd7" Feb 17 16:22:19 crc kubenswrapper[4874]: I0217 16:22:19.608923 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9b427a5-a55c-4cf4-a887-57afebb7b570-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-ttkd7\" (UID: \"e9b427a5-a55c-4cf4-a887-57afebb7b570\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ttkd7" Feb 17 16:22:19 crc kubenswrapper[4874]: I0217 16:22:19.608951 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9b427a5-a55c-4cf4-a887-57afebb7b570-config\") pod \"dnsmasq-dns-78dd6ddcc-ttkd7\" (UID: \"e9b427a5-a55c-4cf4-a887-57afebb7b570\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ttkd7" Feb 17 16:22:19 crc kubenswrapper[4874]: I0217 16:22:19.609840 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9b427a5-a55c-4cf4-a887-57afebb7b570-config\") pod \"dnsmasq-dns-78dd6ddcc-ttkd7\" (UID: \"e9b427a5-a55c-4cf4-a887-57afebb7b570\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ttkd7" Feb 17 16:22:19 crc kubenswrapper[4874]: I0217 16:22:19.610309 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9b427a5-a55c-4cf4-a887-57afebb7b570-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-ttkd7\" (UID: \"e9b427a5-a55c-4cf4-a887-57afebb7b570\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ttkd7" Feb 17 16:22:19 crc kubenswrapper[4874]: I0217 16:22:19.629868 4874 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q8w9\" (UniqueName: \"kubernetes.io/projected/e9b427a5-a55c-4cf4-a887-57afebb7b570-kube-api-access-9q8w9\") pod \"dnsmasq-dns-78dd6ddcc-ttkd7\" (UID: \"e9b427a5-a55c-4cf4-a887-57afebb7b570\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ttkd7" Feb 17 16:22:19 crc kubenswrapper[4874]: I0217 16:22:19.638942 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-56kpq" Feb 17 16:22:19 crc kubenswrapper[4874]: I0217 16:22:19.746550 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ttkd7" Feb 17 16:22:20 crc kubenswrapper[4874]: I0217 16:22:20.146899 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-56kpq"] Feb 17 16:22:20 crc kubenswrapper[4874]: W0217 16:22:20.150462 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf91e0920_f0d9_4c7d_8d9e_3af9bf0f2096.slice/crio-69b59afd156c7ceb1e929560d5004fca6b52b8835678cb85fde6193e0c910ade WatchSource:0}: Error finding container 69b59afd156c7ceb1e929560d5004fca6b52b8835678cb85fde6193e0c910ade: Status 404 returned error can't find the container with id 69b59afd156c7ceb1e929560d5004fca6b52b8835678cb85fde6193e0c910ade Feb 17 16:22:20 crc kubenswrapper[4874]: I0217 16:22:20.254307 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ttkd7"] Feb 17 16:22:21 crc kubenswrapper[4874]: I0217 16:22:21.022298 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-ttkd7" event={"ID":"e9b427a5-a55c-4cf4-a887-57afebb7b570","Type":"ContainerStarted","Data":"b3c8ec63a6f3cfdfd6ae54de8a8aa89a889ddf7d162a4ba88e1a0bb74e01f930"} Feb 17 16:22:21 crc kubenswrapper[4874]: I0217 16:22:21.023890 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-675f4bcbfc-56kpq" event={"ID":"f91e0920-f0d9-4c7d-8d9e-3af9bf0f2096","Type":"ContainerStarted","Data":"69b59afd156c7ceb1e929560d5004fca6b52b8835678cb85fde6193e0c910ade"} Feb 17 16:22:21 crc kubenswrapper[4874]: I0217 16:22:21.947274 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-56kpq"] Feb 17 16:22:21 crc kubenswrapper[4874]: I0217 16:22:21.970717 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ktlkv"] Feb 17 16:22:21 crc kubenswrapper[4874]: I0217 16:22:21.974065 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-ktlkv" Feb 17 16:22:21 crc kubenswrapper[4874]: I0217 16:22:21.979920 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ktlkv"] Feb 17 16:22:22 crc kubenswrapper[4874]: I0217 16:22:22.069720 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7587\" (UniqueName: \"kubernetes.io/projected/9d38985d-0256-4859-9067-9ab1a3af1055-kube-api-access-j7587\") pod \"dnsmasq-dns-666b6646f7-ktlkv\" (UID: \"9d38985d-0256-4859-9067-9ab1a3af1055\") " pod="openstack/dnsmasq-dns-666b6646f7-ktlkv" Feb 17 16:22:22 crc kubenswrapper[4874]: I0217 16:22:22.069776 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d38985d-0256-4859-9067-9ab1a3af1055-config\") pod \"dnsmasq-dns-666b6646f7-ktlkv\" (UID: \"9d38985d-0256-4859-9067-9ab1a3af1055\") " pod="openstack/dnsmasq-dns-666b6646f7-ktlkv" Feb 17 16:22:22 crc kubenswrapper[4874]: I0217 16:22:22.069798 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d38985d-0256-4859-9067-9ab1a3af1055-dns-svc\") pod \"dnsmasq-dns-666b6646f7-ktlkv\" (UID: 
\"9d38985d-0256-4859-9067-9ab1a3af1055\") " pod="openstack/dnsmasq-dns-666b6646f7-ktlkv" Feb 17 16:22:22 crc kubenswrapper[4874]: I0217 16:22:22.171448 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7587\" (UniqueName: \"kubernetes.io/projected/9d38985d-0256-4859-9067-9ab1a3af1055-kube-api-access-j7587\") pod \"dnsmasq-dns-666b6646f7-ktlkv\" (UID: \"9d38985d-0256-4859-9067-9ab1a3af1055\") " pod="openstack/dnsmasq-dns-666b6646f7-ktlkv" Feb 17 16:22:22 crc kubenswrapper[4874]: I0217 16:22:22.171516 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d38985d-0256-4859-9067-9ab1a3af1055-config\") pod \"dnsmasq-dns-666b6646f7-ktlkv\" (UID: \"9d38985d-0256-4859-9067-9ab1a3af1055\") " pod="openstack/dnsmasq-dns-666b6646f7-ktlkv" Feb 17 16:22:22 crc kubenswrapper[4874]: I0217 16:22:22.171540 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d38985d-0256-4859-9067-9ab1a3af1055-dns-svc\") pod \"dnsmasq-dns-666b6646f7-ktlkv\" (UID: \"9d38985d-0256-4859-9067-9ab1a3af1055\") " pod="openstack/dnsmasq-dns-666b6646f7-ktlkv" Feb 17 16:22:22 crc kubenswrapper[4874]: I0217 16:22:22.172475 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d38985d-0256-4859-9067-9ab1a3af1055-dns-svc\") pod \"dnsmasq-dns-666b6646f7-ktlkv\" (UID: \"9d38985d-0256-4859-9067-9ab1a3af1055\") " pod="openstack/dnsmasq-dns-666b6646f7-ktlkv" Feb 17 16:22:22 crc kubenswrapper[4874]: I0217 16:22:22.172531 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d38985d-0256-4859-9067-9ab1a3af1055-config\") pod \"dnsmasq-dns-666b6646f7-ktlkv\" (UID: \"9d38985d-0256-4859-9067-9ab1a3af1055\") " pod="openstack/dnsmasq-dns-666b6646f7-ktlkv" Feb 17 16:22:22 crc 
kubenswrapper[4874]: I0217 16:22:22.212620 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7587\" (UniqueName: \"kubernetes.io/projected/9d38985d-0256-4859-9067-9ab1a3af1055-kube-api-access-j7587\") pod \"dnsmasq-dns-666b6646f7-ktlkv\" (UID: \"9d38985d-0256-4859-9067-9ab1a3af1055\") " pod="openstack/dnsmasq-dns-666b6646f7-ktlkv" Feb 17 16:22:22 crc kubenswrapper[4874]: I0217 16:22:22.300716 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-ktlkv" Feb 17 16:22:22 crc kubenswrapper[4874]: I0217 16:22:22.331831 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ttkd7"] Feb 17 16:22:22 crc kubenswrapper[4874]: I0217 16:22:22.411915 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4rz47"] Feb 17 16:22:22 crc kubenswrapper[4874]: I0217 16:22:22.413706 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-4rz47" Feb 17 16:22:22 crc kubenswrapper[4874]: I0217 16:22:22.419492 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4rz47"] Feb 17 16:22:22 crc kubenswrapper[4874]: I0217 16:22:22.587148 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c4lk\" (UniqueName: \"kubernetes.io/projected/5c1c05ac-d530-4e31-8b72-64e164aecf85-kube-api-access-7c4lk\") pod \"dnsmasq-dns-57d769cc4f-4rz47\" (UID: \"5c1c05ac-d530-4e31-8b72-64e164aecf85\") " pod="openstack/dnsmasq-dns-57d769cc4f-4rz47" Feb 17 16:22:22 crc kubenswrapper[4874]: I0217 16:22:22.587760 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c1c05ac-d530-4e31-8b72-64e164aecf85-config\") pod \"dnsmasq-dns-57d769cc4f-4rz47\" (UID: \"5c1c05ac-d530-4e31-8b72-64e164aecf85\") " 
pod="openstack/dnsmasq-dns-57d769cc4f-4rz47" Feb 17 16:22:22 crc kubenswrapper[4874]: I0217 16:22:22.587804 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c1c05ac-d530-4e31-8b72-64e164aecf85-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-4rz47\" (UID: \"5c1c05ac-d530-4e31-8b72-64e164aecf85\") " pod="openstack/dnsmasq-dns-57d769cc4f-4rz47" Feb 17 16:22:22 crc kubenswrapper[4874]: I0217 16:22:22.693197 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c4lk\" (UniqueName: \"kubernetes.io/projected/5c1c05ac-d530-4e31-8b72-64e164aecf85-kube-api-access-7c4lk\") pod \"dnsmasq-dns-57d769cc4f-4rz47\" (UID: \"5c1c05ac-d530-4e31-8b72-64e164aecf85\") " pod="openstack/dnsmasq-dns-57d769cc4f-4rz47" Feb 17 16:22:22 crc kubenswrapper[4874]: I0217 16:22:22.693325 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c1c05ac-d530-4e31-8b72-64e164aecf85-config\") pod \"dnsmasq-dns-57d769cc4f-4rz47\" (UID: \"5c1c05ac-d530-4e31-8b72-64e164aecf85\") " pod="openstack/dnsmasq-dns-57d769cc4f-4rz47" Feb 17 16:22:22 crc kubenswrapper[4874]: I0217 16:22:22.693348 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c1c05ac-d530-4e31-8b72-64e164aecf85-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-4rz47\" (UID: \"5c1c05ac-d530-4e31-8b72-64e164aecf85\") " pod="openstack/dnsmasq-dns-57d769cc4f-4rz47" Feb 17 16:22:22 crc kubenswrapper[4874]: I0217 16:22:22.694382 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c1c05ac-d530-4e31-8b72-64e164aecf85-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-4rz47\" (UID: \"5c1c05ac-d530-4e31-8b72-64e164aecf85\") " pod="openstack/dnsmasq-dns-57d769cc4f-4rz47" Feb 17 16:22:22 crc kubenswrapper[4874]: 
I0217 16:22:22.695273 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c1c05ac-d530-4e31-8b72-64e164aecf85-config\") pod \"dnsmasq-dns-57d769cc4f-4rz47\" (UID: \"5c1c05ac-d530-4e31-8b72-64e164aecf85\") " pod="openstack/dnsmasq-dns-57d769cc4f-4rz47" Feb 17 16:22:22 crc kubenswrapper[4874]: I0217 16:22:22.712864 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c4lk\" (UniqueName: \"kubernetes.io/projected/5c1c05ac-d530-4e31-8b72-64e164aecf85-kube-api-access-7c4lk\") pod \"dnsmasq-dns-57d769cc4f-4rz47\" (UID: \"5c1c05ac-d530-4e31-8b72-64e164aecf85\") " pod="openstack/dnsmasq-dns-57d769cc4f-4rz47" Feb 17 16:22:22 crc kubenswrapper[4874]: I0217 16:22:22.768708 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-4rz47" Feb 17 16:22:22 crc kubenswrapper[4874]: I0217 16:22:22.942581 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ktlkv"] Feb 17 16:22:22 crc kubenswrapper[4874]: W0217 16:22:22.945990 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d38985d_0256_4859_9067_9ab1a3af1055.slice/crio-4f48340f8cfeb744c425ebf8297fa3c5da11c09234782016b1676e6c90ffb5fb WatchSource:0}: Error finding container 4f48340f8cfeb744c425ebf8297fa3c5da11c09234782016b1676e6c90ffb5fb: Status 404 returned error can't find the container with id 4f48340f8cfeb744c425ebf8297fa3c5da11c09234782016b1676e6c90ffb5fb Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.077321 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-ktlkv" event={"ID":"9d38985d-0256-4859-9067-9ab1a3af1055","Type":"ContainerStarted","Data":"4f48340f8cfeb744c425ebf8297fa3c5da11c09234782016b1676e6c90ffb5fb"} Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.114827 4874 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.129431 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.138930 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.139865 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.140146 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.141264 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.141396 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-fhrv8" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.141840 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.142046 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.147626 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.173952 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.220040 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.232335 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.245390 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.285801 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.301303 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.307126 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-83b0cf46-3df6-4f4c-aaa1-dad5e35fce43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-83b0cf46-3df6-4f4c-aaa1-dad5e35fce43\") pod \"rabbitmq-server-0\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " pod="openstack/rabbitmq-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.307183 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ed7dc41e-9863-4c74-8675-56fca22db08a-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.307205 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: 
\"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " pod="openstack/rabbitmq-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.307234 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ed7dc41e-9863-4c74-8675-56fca22db08a-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.307247 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " pod="openstack/rabbitmq-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.307265 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ed7dc41e-9863-4c74-8675-56fca22db08a-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.307280 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " pod="openstack/rabbitmq-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.307295 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ed7dc41e-9863-4c74-8675-56fca22db08a-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " 
pod="openstack/rabbitmq-server-2" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.307312 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " pod="openstack/rabbitmq-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.307346 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed7dc41e-9863-4c74-8675-56fca22db08a-config-data\") pod \"rabbitmq-server-2\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.307362 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ed7dc41e-9863-4c74-8675-56fca22db08a-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.307377 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2e4eece0-fa2e-4553-9b0d-b0622841f608\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e4eece0-fa2e-4553-9b0d-b0622841f608\") pod \"rabbitmq-server-2\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.307396 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ed7dc41e-9863-4c74-8675-56fca22db08a-server-conf\") pod \"rabbitmq-server-2\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " 
pod="openstack/rabbitmq-server-2" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.307411 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ed7dc41e-9863-4c74-8675-56fca22db08a-pod-info\") pod \"rabbitmq-server-2\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.307429 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " pod="openstack/rabbitmq-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.307454 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvdzx\" (UniqueName: \"kubernetes.io/projected/ed7dc41e-9863-4c74-8675-56fca22db08a-kube-api-access-fvdzx\") pod \"rabbitmq-server-2\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.307481 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " pod="openstack/rabbitmq-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.307497 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " pod="openstack/rabbitmq-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 
16:22:23.307523 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wgvh\" (UniqueName: \"kubernetes.io/projected/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-kube-api-access-6wgvh\") pod \"rabbitmq-server-0\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " pod="openstack/rabbitmq-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.307540 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-config-data\") pod \"rabbitmq-server-0\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " pod="openstack/rabbitmq-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.307566 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ed7dc41e-9863-4c74-8675-56fca22db08a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.307582 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " pod="openstack/rabbitmq-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: W0217 16:22:23.322095 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c1c05ac_d530_4e31_8b72_64e164aecf85.slice/crio-cabe6884ea10bb7ab7fd7c195af7268350dec27bde0e0b7527fe457652d0a97d WatchSource:0}: Error finding container cabe6884ea10bb7ab7fd7c195af7268350dec27bde0e0b7527fe457652d0a97d: Status 404 returned error can't find the container with 
id cabe6884ea10bb7ab7fd7c195af7268350dec27bde0e0b7527fe457652d0a97d Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.336151 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4rz47"] Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.410520 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2e4eece0-fa2e-4553-9b0d-b0622841f608\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e4eece0-fa2e-4553-9b0d-b0622841f608\") pod \"rabbitmq-server-2\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.410886 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed7dc41e-9863-4c74-8675-56fca22db08a-config-data\") pod \"rabbitmq-server-2\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.410916 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ed7dc41e-9863-4c74-8675-56fca22db08a-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.410945 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ed7dc41e-9863-4c74-8675-56fca22db08a-server-conf\") pod \"rabbitmq-server-2\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.410968 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ed7dc41e-9863-4c74-8675-56fca22db08a-pod-info\") pod 
\"rabbitmq-server-2\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.411002 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " pod="openstack/rabbitmq-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.411045 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvdzx\" (UniqueName: \"kubernetes.io/projected/ed7dc41e-9863-4c74-8675-56fca22db08a-kube-api-access-fvdzx\") pod \"rabbitmq-server-2\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.411107 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/37707c24-e133-484d-955f-57a20ec147b1-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " pod="openstack/rabbitmq-server-1" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.411134 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " pod="openstack/rabbitmq-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.411163 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " pod="openstack/rabbitmq-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 
16:22:23.411209 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wgvh\" (UniqueName: \"kubernetes.io/projected/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-kube-api-access-6wgvh\") pod \"rabbitmq-server-0\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " pod="openstack/rabbitmq-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.411235 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/37707c24-e133-484d-955f-57a20ec147b1-config-data\") pod \"rabbitmq-server-1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " pod="openstack/rabbitmq-server-1" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.411264 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-config-data\") pod \"rabbitmq-server-0\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " pod="openstack/rabbitmq-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.411292 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-baddbb75-df00-4064-b46c-518cd100e31d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-baddbb75-df00-4064-b46c-518cd100e31d\") pod \"rabbitmq-server-1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " pod="openstack/rabbitmq-server-1" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.411317 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/37707c24-e133-484d-955f-57a20ec147b1-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " pod="openstack/rabbitmq-server-1" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.411356 4874 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ed7dc41e-9863-4c74-8675-56fca22db08a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.411383 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " pod="openstack/rabbitmq-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.411411 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/37707c24-e133-484d-955f-57a20ec147b1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " pod="openstack/rabbitmq-server-1" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.411446 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-83b0cf46-3df6-4f4c-aaa1-dad5e35fce43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-83b0cf46-3df6-4f4c-aaa1-dad5e35fce43\") pod \"rabbitmq-server-0\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " pod="openstack/rabbitmq-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.411480 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/37707c24-e133-484d-955f-57a20ec147b1-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " pod="openstack/rabbitmq-server-1" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.411511 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/37707c24-e133-484d-955f-57a20ec147b1-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " pod="openstack/rabbitmq-server-1" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.411538 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ed7dc41e-9863-4c74-8675-56fca22db08a-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.411586 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " pod="openstack/rabbitmq-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.411625 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ed7dc41e-9863-4c74-8675-56fca22db08a-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.411647 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " pod="openstack/rabbitmq-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.411675 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/37707c24-e133-484d-955f-57a20ec147b1-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " pod="openstack/rabbitmq-server-1" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.411700 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ed7dc41e-9863-4c74-8675-56fca22db08a-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.411722 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nvqt\" (UniqueName: \"kubernetes.io/projected/37707c24-e133-484d-955f-57a20ec147b1-kube-api-access-8nvqt\") pod \"rabbitmq-server-1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " pod="openstack/rabbitmq-server-1" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.411747 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " pod="openstack/rabbitmq-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.411768 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ed7dc41e-9863-4c74-8675-56fca22db08a-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.411782 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed7dc41e-9863-4c74-8675-56fca22db08a-config-data\") pod \"rabbitmq-server-2\" (UID: 
\"ed7dc41e-9863-4c74-8675-56fca22db08a\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.411789 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " pod="openstack/rabbitmq-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.411878 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/37707c24-e133-484d-955f-57a20ec147b1-server-conf\") pod \"rabbitmq-server-1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " pod="openstack/rabbitmq-server-1" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.411938 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/37707c24-e133-484d-955f-57a20ec147b1-pod-info\") pod \"rabbitmq-server-1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " pod="openstack/rabbitmq-server-1" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.412337 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ed7dc41e-9863-4c74-8675-56fca22db08a-server-conf\") pod \"rabbitmq-server-2\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.412358 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ed7dc41e-9863-4c74-8675-56fca22db08a-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.412677 4874 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ed7dc41e-9863-4c74-8675-56fca22db08a-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.412797 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " pod="openstack/rabbitmq-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.414408 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-config-data\") pod \"rabbitmq-server-0\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " pod="openstack/rabbitmq-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.414640 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " pod="openstack/rabbitmq-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.414739 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ed7dc41e-9863-4c74-8675-56fca22db08a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.420127 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ed7dc41e-9863-4c74-8675-56fca22db08a-rabbitmq-tls\") pod 
\"rabbitmq-server-2\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.415632 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " pod="openstack/rabbitmq-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.423692 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " pod="openstack/rabbitmq-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.424856 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ed7dc41e-9863-4c74-8675-56fca22db08a-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.424948 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " pod="openstack/rabbitmq-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.425379 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ed7dc41e-9863-4c74-8675-56fca22db08a-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.429901 4874 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " pod="openstack/rabbitmq-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.430278 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " pod="openstack/rabbitmq-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.433408 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " pod="openstack/rabbitmq-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.437121 4874 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.437154 4874 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2e4eece0-fa2e-4553-9b0d-b0622841f608\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e4eece0-fa2e-4553-9b0d-b0622841f608\") pod \"rabbitmq-server-2\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/cd2ed7939a07d83111643c672ec8331054a35fd031224fadb7579e462a845591/globalmount\"" pod="openstack/rabbitmq-server-2" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.437294 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvdzx\" (UniqueName: \"kubernetes.io/projected/ed7dc41e-9863-4c74-8675-56fca22db08a-kube-api-access-fvdzx\") pod \"rabbitmq-server-2\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.437622 4874 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.437624 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ed7dc41e-9863-4c74-8675-56fca22db08a-pod-info\") pod \"rabbitmq-server-2\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.437676 4874 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-83b0cf46-3df6-4f4c-aaa1-dad5e35fce43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-83b0cf46-3df6-4f4c-aaa1-dad5e35fce43\") pod \"rabbitmq-server-0\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e362c98f195cf3c54688be96913e676fce2e6ab946b229430e7647a6c41b42f7/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.444327 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wgvh\" (UniqueName: \"kubernetes.io/projected/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-kube-api-access-6wgvh\") pod \"rabbitmq-server-0\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " pod="openstack/rabbitmq-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.504586 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-83b0cf46-3df6-4f4c-aaa1-dad5e35fce43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-83b0cf46-3df6-4f4c-aaa1-dad5e35fce43\") pod \"rabbitmq-server-0\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " pod="openstack/rabbitmq-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.510526 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2e4eece0-fa2e-4553-9b0d-b0622841f608\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e4eece0-fa2e-4553-9b0d-b0622841f608\") pod 
\"rabbitmq-server-2\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.513406 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/37707c24-e133-484d-955f-57a20ec147b1-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " pod="openstack/rabbitmq-server-1" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.513493 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/37707c24-e133-484d-955f-57a20ec147b1-config-data\") pod \"rabbitmq-server-1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " pod="openstack/rabbitmq-server-1" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.513517 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/37707c24-e133-484d-955f-57a20ec147b1-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " pod="openstack/rabbitmq-server-1" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.513555 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-baddbb75-df00-4064-b46c-518cd100e31d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-baddbb75-df00-4064-b46c-518cd100e31d\") pod \"rabbitmq-server-1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " pod="openstack/rabbitmq-server-1" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.513589 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/37707c24-e133-484d-955f-57a20ec147b1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " pod="openstack/rabbitmq-server-1" Feb 17 
16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.513632 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/37707c24-e133-484d-955f-57a20ec147b1-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " pod="openstack/rabbitmq-server-1" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.513657 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/37707c24-e133-484d-955f-57a20ec147b1-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " pod="openstack/rabbitmq-server-1" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.513736 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/37707c24-e133-484d-955f-57a20ec147b1-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " pod="openstack/rabbitmq-server-1" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.513755 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nvqt\" (UniqueName: \"kubernetes.io/projected/37707c24-e133-484d-955f-57a20ec147b1-kube-api-access-8nvqt\") pod \"rabbitmq-server-1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " pod="openstack/rabbitmq-server-1" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.513797 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/37707c24-e133-484d-955f-57a20ec147b1-server-conf\") pod \"rabbitmq-server-1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " pod="openstack/rabbitmq-server-1" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.513822 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" 
(UniqueName: \"kubernetes.io/downward-api/37707c24-e133-484d-955f-57a20ec147b1-pod-info\") pod \"rabbitmq-server-1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " pod="openstack/rabbitmq-server-1" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.514676 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/37707c24-e133-484d-955f-57a20ec147b1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " pod="openstack/rabbitmq-server-1" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.517178 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/37707c24-e133-484d-955f-57a20ec147b1-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " pod="openstack/rabbitmq-server-1" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.517352 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/37707c24-e133-484d-955f-57a20ec147b1-pod-info\") pod \"rabbitmq-server-1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " pod="openstack/rabbitmq-server-1" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.517422 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/37707c24-e133-484d-955f-57a20ec147b1-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " pod="openstack/rabbitmq-server-1" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.518017 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/37707c24-e133-484d-955f-57a20ec147b1-config-data\") pod \"rabbitmq-server-1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " pod="openstack/rabbitmq-server-1" 
Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.518769 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/37707c24-e133-484d-955f-57a20ec147b1-server-conf\") pod \"rabbitmq-server-1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " pod="openstack/rabbitmq-server-1" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.522031 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/37707c24-e133-484d-955f-57a20ec147b1-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " pod="openstack/rabbitmq-server-1" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.527229 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/37707c24-e133-484d-955f-57a20ec147b1-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " pod="openstack/rabbitmq-server-1" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.530449 4874 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.530497 4874 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-baddbb75-df00-4064-b46c-518cd100e31d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-baddbb75-df00-4064-b46c-518cd100e31d\") pod \"rabbitmq-server-1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/987695d7a0e83bf6f0861a06e26b4ab95287a2edd1b9a9790bcdf5ca773dbb27/globalmount\"" pod="openstack/rabbitmq-server-1" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.536951 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nvqt\" (UniqueName: \"kubernetes.io/projected/37707c24-e133-484d-955f-57a20ec147b1-kube-api-access-8nvqt\") pod \"rabbitmq-server-1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " pod="openstack/rabbitmq-server-1" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.544122 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/37707c24-e133-484d-955f-57a20ec147b1-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " pod="openstack/rabbitmq-server-1" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.552720 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.584518 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.588162 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.594606 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.595327 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-vchwc" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.595435 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.595517 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.595703 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.595820 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.596029 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.603700 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.616107 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/476813ee-f26a-4068-a5e9-87b5a20fece5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.616212 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/476813ee-f26a-4068-a5e9-87b5a20fece5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.617259 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/476813ee-f26a-4068-a5e9-87b5a20fece5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.617340 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e9a126d5-9bee-465f-9618-3e77c8b2ecfe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e9a126d5-9bee-465f-9618-3e77c8b2ecfe\") pod \"rabbitmq-cell1-server-0\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.617809 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/476813ee-f26a-4068-a5e9-87b5a20fece5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.617878 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdr22\" (UniqueName: \"kubernetes.io/projected/476813ee-f26a-4068-a5e9-87b5a20fece5-kube-api-access-qdr22\") pod \"rabbitmq-cell1-server-0\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.617902 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/476813ee-f26a-4068-a5e9-87b5a20fece5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.617965 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/476813ee-f26a-4068-a5e9-87b5a20fece5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.617995 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/476813ee-f26a-4068-a5e9-87b5a20fece5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.618049 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/476813ee-f26a-4068-a5e9-87b5a20fece5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.618148 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/476813ee-f26a-4068-a5e9-87b5a20fece5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.647029 4874 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-baddbb75-df00-4064-b46c-518cd100e31d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-baddbb75-df00-4064-b46c-518cd100e31d\") pod \"rabbitmq-server-1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " pod="openstack/rabbitmq-server-1" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.719918 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/476813ee-f26a-4068-a5e9-87b5a20fece5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.719970 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/476813ee-f26a-4068-a5e9-87b5a20fece5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.720011 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/476813ee-f26a-4068-a5e9-87b5a20fece5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.720040 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e9a126d5-9bee-465f-9618-3e77c8b2ecfe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e9a126d5-9bee-465f-9618-3e77c8b2ecfe\") pod \"rabbitmq-cell1-server-0\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.720122 4874 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/476813ee-f26a-4068-a5e9-87b5a20fece5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.720148 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdr22\" (UniqueName: \"kubernetes.io/projected/476813ee-f26a-4068-a5e9-87b5a20fece5-kube-api-access-qdr22\") pod \"rabbitmq-cell1-server-0\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.720169 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/476813ee-f26a-4068-a5e9-87b5a20fece5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.720200 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/476813ee-f26a-4068-a5e9-87b5a20fece5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.720218 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/476813ee-f26a-4068-a5e9-87b5a20fece5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.720246 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/476813ee-f26a-4068-a5e9-87b5a20fece5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.720282 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/476813ee-f26a-4068-a5e9-87b5a20fece5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.721631 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/476813ee-f26a-4068-a5e9-87b5a20fece5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.724327 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/476813ee-f26a-4068-a5e9-87b5a20fece5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.724617 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/476813ee-f26a-4068-a5e9-87b5a20fece5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.725214 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/476813ee-f26a-4068-a5e9-87b5a20fece5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"476813ee-f26a-4068-a5e9-87b5a20fece5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.725225 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/476813ee-f26a-4068-a5e9-87b5a20fece5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.725339 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/476813ee-f26a-4068-a5e9-87b5a20fece5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.728090 4874 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.728121 4874 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e9a126d5-9bee-465f-9618-3e77c8b2ecfe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e9a126d5-9bee-465f-9618-3e77c8b2ecfe\") pod \"rabbitmq-cell1-server-0\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6398513e7d6802ecff0c7960070d40c948d940184ed62b9347789a83b447027a/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.728174 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/476813ee-f26a-4068-a5e9-87b5a20fece5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.729210 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/476813ee-f26a-4068-a5e9-87b5a20fece5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.729265 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/476813ee-f26a-4068-a5e9-87b5a20fece5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.737356 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdr22\" (UniqueName: \"kubernetes.io/projected/476813ee-f26a-4068-a5e9-87b5a20fece5-kube-api-access-qdr22\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.767891 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e9a126d5-9bee-465f-9618-3e77c8b2ecfe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e9a126d5-9bee-465f-9618-3e77c8b2ecfe\") pod \"rabbitmq-cell1-server-0\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.777590 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.888924 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 17 16:22:23 crc kubenswrapper[4874]: I0217 16:22:23.924999 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:24 crc kubenswrapper[4874]: I0217 16:22:24.111952 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-4rz47" event={"ID":"5c1c05ac-d530-4e31-8b72-64e164aecf85","Type":"ContainerStarted","Data":"cabe6884ea10bb7ab7fd7c195af7268350dec27bde0e0b7527fe457652d0a97d"} Feb 17 16:22:24 crc kubenswrapper[4874]: I0217 16:22:24.157280 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 17 16:22:24 crc kubenswrapper[4874]: W0217 16:22:24.232016 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded7dc41e_9863_4c74_8675_56fca22db08a.slice/crio-c5f6eb58ac15341f65c8eea915b672221075195deb7745785b2b1d1f2945447d WatchSource:0}: Error finding container c5f6eb58ac15341f65c8eea915b672221075195deb7745785b2b1d1f2945447d: Status 404 returned error can't find the container with id 
c5f6eb58ac15341f65c8eea915b672221075195deb7745785b2b1d1f2945447d Feb 17 16:22:24 crc kubenswrapper[4874]: I0217 16:22:24.369666 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:22:24 crc kubenswrapper[4874]: I0217 16:22:24.510849 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 17 16:22:24 crc kubenswrapper[4874]: I0217 16:22:24.513394 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 17 16:22:24 crc kubenswrapper[4874]: I0217 16:22:24.517236 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-gbwzp" Feb 17 16:22:24 crc kubenswrapper[4874]: I0217 16:22:24.517379 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 17 16:22:24 crc kubenswrapper[4874]: I0217 16:22:24.519524 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 17 16:22:24 crc kubenswrapper[4874]: I0217 16:22:24.520411 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 17 16:22:24 crc kubenswrapper[4874]: I0217 16:22:24.523153 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 17 16:22:24 crc kubenswrapper[4874]: I0217 16:22:24.529256 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 17 16:22:24 crc kubenswrapper[4874]: I0217 16:22:24.534136 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 17 16:22:24 crc kubenswrapper[4874]: I0217 16:22:24.642065 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:22:24 crc kubenswrapper[4874]: I0217 16:22:24.647342 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c99a20bb-50d6-4806-ac2a-2e2276d561ef-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c99a20bb-50d6-4806-ac2a-2e2276d561ef\") " pod="openstack/openstack-galera-0" Feb 17 16:22:24 crc kubenswrapper[4874]: I0217 16:22:24.647366 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99a20bb-50d6-4806-ac2a-2e2276d561ef-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c99a20bb-50d6-4806-ac2a-2e2276d561ef\") " pod="openstack/openstack-galera-0" Feb 17 16:22:24 crc kubenswrapper[4874]: I0217 16:22:24.647414 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c99a20bb-50d6-4806-ac2a-2e2276d561ef-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c99a20bb-50d6-4806-ac2a-2e2276d561ef\") " pod="openstack/openstack-galera-0" Feb 17 16:22:24 crc kubenswrapper[4874]: I0217 16:22:24.647448 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c99a20bb-50d6-4806-ac2a-2e2276d561ef-kolla-config\") pod \"openstack-galera-0\" (UID: \"c99a20bb-50d6-4806-ac2a-2e2276d561ef\") " pod="openstack/openstack-galera-0" Feb 17 16:22:24 crc kubenswrapper[4874]: I0217 16:22:24.647507 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6804041f-5f89-4fb5-9a32-fc88dca92ba5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6804041f-5f89-4fb5-9a32-fc88dca92ba5\") pod \"openstack-galera-0\" (UID: \"c99a20bb-50d6-4806-ac2a-2e2276d561ef\") " pod="openstack/openstack-galera-0" Feb 17 16:22:24 crc kubenswrapper[4874]: I0217 16:22:24.647531 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c99a20bb-50d6-4806-ac2a-2e2276d561ef-config-data-default\") pod \"openstack-galera-0\" (UID: \"c99a20bb-50d6-4806-ac2a-2e2276d561ef\") " pod="openstack/openstack-galera-0" Feb 17 16:22:24 crc kubenswrapper[4874]: I0217 16:22:24.648333 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c99a20bb-50d6-4806-ac2a-2e2276d561ef-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c99a20bb-50d6-4806-ac2a-2e2276d561ef\") " pod="openstack/openstack-galera-0" Feb 17 16:22:24 crc kubenswrapper[4874]: I0217 16:22:24.648550 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6xr2\" (UniqueName: \"kubernetes.io/projected/c99a20bb-50d6-4806-ac2a-2e2276d561ef-kube-api-access-r6xr2\") pod \"openstack-galera-0\" (UID: \"c99a20bb-50d6-4806-ac2a-2e2276d561ef\") " pod="openstack/openstack-galera-0" Feb 17 16:22:24 crc kubenswrapper[4874]: W0217 16:22:24.654295 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod476813ee_f26a_4068_a5e9_87b5a20fece5.slice/crio-0b5d0c2ebe5cd9bb260c9898ef7e9a25e1d7f87345021cc7434e62138aa39678 WatchSource:0}: Error finding container 0b5d0c2ebe5cd9bb260c9898ef7e9a25e1d7f87345021cc7434e62138aa39678: Status 404 returned error can't find the container with id 0b5d0c2ebe5cd9bb260c9898ef7e9a25e1d7f87345021cc7434e62138aa39678 Feb 17 16:22:24 crc kubenswrapper[4874]: I0217 16:22:24.751113 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c99a20bb-50d6-4806-ac2a-2e2276d561ef-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c99a20bb-50d6-4806-ac2a-2e2276d561ef\") " pod="openstack/openstack-galera-0" Feb 17 16:22:24 crc kubenswrapper[4874]: 
I0217 16:22:24.751165 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c99a20bb-50d6-4806-ac2a-2e2276d561ef-kolla-config\") pod \"openstack-galera-0\" (UID: \"c99a20bb-50d6-4806-ac2a-2e2276d561ef\") " pod="openstack/openstack-galera-0" Feb 17 16:22:24 crc kubenswrapper[4874]: I0217 16:22:24.751238 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6804041f-5f89-4fb5-9a32-fc88dca92ba5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6804041f-5f89-4fb5-9a32-fc88dca92ba5\") pod \"openstack-galera-0\" (UID: \"c99a20bb-50d6-4806-ac2a-2e2276d561ef\") " pod="openstack/openstack-galera-0" Feb 17 16:22:24 crc kubenswrapper[4874]: I0217 16:22:24.751263 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c99a20bb-50d6-4806-ac2a-2e2276d561ef-config-data-default\") pod \"openstack-galera-0\" (UID: \"c99a20bb-50d6-4806-ac2a-2e2276d561ef\") " pod="openstack/openstack-galera-0" Feb 17 16:22:24 crc kubenswrapper[4874]: I0217 16:22:24.751317 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c99a20bb-50d6-4806-ac2a-2e2276d561ef-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c99a20bb-50d6-4806-ac2a-2e2276d561ef\") " pod="openstack/openstack-galera-0" Feb 17 16:22:24 crc kubenswrapper[4874]: I0217 16:22:24.751341 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6xr2\" (UniqueName: \"kubernetes.io/projected/c99a20bb-50d6-4806-ac2a-2e2276d561ef-kube-api-access-r6xr2\") pod \"openstack-galera-0\" (UID: \"c99a20bb-50d6-4806-ac2a-2e2276d561ef\") " pod="openstack/openstack-galera-0" Feb 17 16:22:24 crc kubenswrapper[4874]: I0217 16:22:24.751371 4874 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c99a20bb-50d6-4806-ac2a-2e2276d561ef-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c99a20bb-50d6-4806-ac2a-2e2276d561ef\") " pod="openstack/openstack-galera-0" Feb 17 16:22:24 crc kubenswrapper[4874]: I0217 16:22:24.751384 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99a20bb-50d6-4806-ac2a-2e2276d561ef-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c99a20bb-50d6-4806-ac2a-2e2276d561ef\") " pod="openstack/openstack-galera-0" Feb 17 16:22:24 crc kubenswrapper[4874]: I0217 16:22:24.752304 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c99a20bb-50d6-4806-ac2a-2e2276d561ef-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c99a20bb-50d6-4806-ac2a-2e2276d561ef\") " pod="openstack/openstack-galera-0" Feb 17 16:22:24 crc kubenswrapper[4874]: I0217 16:22:24.752695 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c99a20bb-50d6-4806-ac2a-2e2276d561ef-config-data-default\") pod \"openstack-galera-0\" (UID: \"c99a20bb-50d6-4806-ac2a-2e2276d561ef\") " pod="openstack/openstack-galera-0" Feb 17 16:22:24 crc kubenswrapper[4874]: I0217 16:22:24.753638 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c99a20bb-50d6-4806-ac2a-2e2276d561ef-kolla-config\") pod \"openstack-galera-0\" (UID: \"c99a20bb-50d6-4806-ac2a-2e2276d561ef\") " pod="openstack/openstack-galera-0" Feb 17 16:22:24 crc kubenswrapper[4874]: I0217 16:22:24.759612 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c99a20bb-50d6-4806-ac2a-2e2276d561ef-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c99a20bb-50d6-4806-ac2a-2e2276d561ef\") " pod="openstack/openstack-galera-0" Feb 17 16:22:24 crc kubenswrapper[4874]: I0217 16:22:24.762297 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c99a20bb-50d6-4806-ac2a-2e2276d561ef-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c99a20bb-50d6-4806-ac2a-2e2276d561ef\") " pod="openstack/openstack-galera-0" Feb 17 16:22:24 crc kubenswrapper[4874]: I0217 16:22:24.762547 4874 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:22:24 crc kubenswrapper[4874]: I0217 16:22:24.762578 4874 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6804041f-5f89-4fb5-9a32-fc88dca92ba5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6804041f-5f89-4fb5-9a32-fc88dca92ba5\") pod \"openstack-galera-0\" (UID: \"c99a20bb-50d6-4806-ac2a-2e2276d561ef\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/babef86c4066cf2a4b2f96d2d259c4471fed6f8ae0d4d362c8708208c788131a/globalmount\"" pod="openstack/openstack-galera-0" Feb 17 16:22:24 crc kubenswrapper[4874]: I0217 16:22:24.767672 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99a20bb-50d6-4806-ac2a-2e2276d561ef-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c99a20bb-50d6-4806-ac2a-2e2276d561ef\") " pod="openstack/openstack-galera-0" Feb 17 16:22:24 crc kubenswrapper[4874]: I0217 16:22:24.796435 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6xr2\" (UniqueName: \"kubernetes.io/projected/c99a20bb-50d6-4806-ac2a-2e2276d561ef-kube-api-access-r6xr2\") pod \"openstack-galera-0\" (UID: 
\"c99a20bb-50d6-4806-ac2a-2e2276d561ef\") " pod="openstack/openstack-galera-0" Feb 17 16:22:24 crc kubenswrapper[4874]: I0217 16:22:24.830384 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6804041f-5f89-4fb5-9a32-fc88dca92ba5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6804041f-5f89-4fb5-9a32-fc88dca92ba5\") pod \"openstack-galera-0\" (UID: \"c99a20bb-50d6-4806-ac2a-2e2276d561ef\") " pod="openstack/openstack-galera-0" Feb 17 16:22:24 crc kubenswrapper[4874]: I0217 16:22:24.867962 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 17 16:22:25 crc kubenswrapper[4874]: I0217 16:22:25.207617 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"37707c24-e133-484d-955f-57a20ec147b1","Type":"ContainerStarted","Data":"8b6eac39626336380a637ffb52c23f53a17735099cbb674fe479447d2f1c66c0"} Feb 17 16:22:25 crc kubenswrapper[4874]: I0217 16:22:25.208935 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"476813ee-f26a-4068-a5e9-87b5a20fece5","Type":"ContainerStarted","Data":"0b5d0c2ebe5cd9bb260c9898ef7e9a25e1d7f87345021cc7434e62138aa39678"} Feb 17 16:22:25 crc kubenswrapper[4874]: I0217 16:22:25.209727 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"ed7dc41e-9863-4c74-8675-56fca22db08a","Type":"ContainerStarted","Data":"c5f6eb58ac15341f65c8eea915b672221075195deb7745785b2b1d1f2945447d"} Feb 17 16:22:25 crc kubenswrapper[4874]: I0217 16:22:25.210858 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7eb994d5-6ecb-4a2d-bafc-86c9f107802c","Type":"ContainerStarted","Data":"97f0e93e129651a61d084af71536450c5ecce88efe1a23e5f011bf9f6280dbc1"} Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:25.600889 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/openstack-galera-0"] Feb 17 16:22:28 crc kubenswrapper[4874]: W0217 16:22:25.658158 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc99a20bb_50d6_4806_ac2a_2e2276d561ef.slice/crio-a78124ec40e831ff3433aa6f26b5bc845c4e27f143ad668978f5a68dbce0adde WatchSource:0}: Error finding container a78124ec40e831ff3433aa6f26b5bc845c4e27f143ad668978f5a68dbce0adde: Status 404 returned error can't find the container with id a78124ec40e831ff3433aa6f26b5bc845c4e27f143ad668978f5a68dbce0adde Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:25.795725 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:25.797416 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:25.837561 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:25.837762 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-kllp6" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:25.837874 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:25.838419 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:25.870905 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:25.873279 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbmjl\" (UniqueName: 
\"kubernetes.io/projected/9535b3e4-e580-4939-9f0f-f57e7b3946c6-kube-api-access-bbmjl\") pod \"openstack-cell1-galera-0\" (UID: \"9535b3e4-e580-4939-9f0f-f57e7b3946c6\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:25.873328 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9535b3e4-e580-4939-9f0f-f57e7b3946c6-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9535b3e4-e580-4939-9f0f-f57e7b3946c6\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:25.873367 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9535b3e4-e580-4939-9f0f-f57e7b3946c6-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9535b3e4-e580-4939-9f0f-f57e7b3946c6\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:25.873405 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9535b3e4-e580-4939-9f0f-f57e7b3946c6-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9535b3e4-e580-4939-9f0f-f57e7b3946c6\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:25.873427 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9535b3e4-e580-4939-9f0f-f57e7b3946c6-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9535b3e4-e580-4939-9f0f-f57e7b3946c6\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:25.873455 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-cc217afe-e88e-46bc-979c-6b88dac1a9da\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cc217afe-e88e-46bc-979c-6b88dac1a9da\") pod \"openstack-cell1-galera-0\" (UID: \"9535b3e4-e580-4939-9f0f-f57e7b3946c6\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:25.873474 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9535b3e4-e580-4939-9f0f-f57e7b3946c6-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9535b3e4-e580-4939-9f0f-f57e7b3946c6\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:25.873523 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9535b3e4-e580-4939-9f0f-f57e7b3946c6-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9535b3e4-e580-4939-9f0f-f57e7b3946c6\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:25.979403 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbmjl\" (UniqueName: \"kubernetes.io/projected/9535b3e4-e580-4939-9f0f-f57e7b3946c6-kube-api-access-bbmjl\") pod \"openstack-cell1-galera-0\" (UID: \"9535b3e4-e580-4939-9f0f-f57e7b3946c6\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:25.979458 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9535b3e4-e580-4939-9f0f-f57e7b3946c6-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9535b3e4-e580-4939-9f0f-f57e7b3946c6\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:25.979502 4874 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9535b3e4-e580-4939-9f0f-f57e7b3946c6-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9535b3e4-e580-4939-9f0f-f57e7b3946c6\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:25.979547 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9535b3e4-e580-4939-9f0f-f57e7b3946c6-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9535b3e4-e580-4939-9f0f-f57e7b3946c6\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:25.979567 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9535b3e4-e580-4939-9f0f-f57e7b3946c6-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9535b3e4-e580-4939-9f0f-f57e7b3946c6\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:25.979600 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cc217afe-e88e-46bc-979c-6b88dac1a9da\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cc217afe-e88e-46bc-979c-6b88dac1a9da\") pod \"openstack-cell1-galera-0\" (UID: \"9535b3e4-e580-4939-9f0f-f57e7b3946c6\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:25.979623 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9535b3e4-e580-4939-9f0f-f57e7b3946c6-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9535b3e4-e580-4939-9f0f-f57e7b3946c6\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:25.979691 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9535b3e4-e580-4939-9f0f-f57e7b3946c6-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9535b3e4-e580-4939-9f0f-f57e7b3946c6\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:25.980547 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9535b3e4-e580-4939-9f0f-f57e7b3946c6-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9535b3e4-e580-4939-9f0f-f57e7b3946c6\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:25.980858 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9535b3e4-e580-4939-9f0f-f57e7b3946c6-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9535b3e4-e580-4939-9f0f-f57e7b3946c6\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:25.981466 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9535b3e4-e580-4939-9f0f-f57e7b3946c6-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9535b3e4-e580-4939-9f0f-f57e7b3946c6\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:25.981934 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9535b3e4-e580-4939-9f0f-f57e7b3946c6-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9535b3e4-e580-4939-9f0f-f57e7b3946c6\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:25.990250 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9535b3e4-e580-4939-9f0f-f57e7b3946c6-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9535b3e4-e580-4939-9f0f-f57e7b3946c6\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:25.991126 4874 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:25.991145 4874 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cc217afe-e88e-46bc-979c-6b88dac1a9da\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cc217afe-e88e-46bc-979c-6b88dac1a9da\") pod \"openstack-cell1-galera-0\" (UID: \"9535b3e4-e580-4939-9f0f-f57e7b3946c6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f444d82532f10194b58814a3d632f6b0b3fd3fefb33d45e2c0e5cf6f694a4f8c/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:26.001909 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9535b3e4-e580-4939-9f0f-f57e7b3946c6-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9535b3e4-e580-4939-9f0f-f57e7b3946c6\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:26.037791 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbmjl\" (UniqueName: \"kubernetes.io/projected/9535b3e4-e580-4939-9f0f-f57e7b3946c6-kube-api-access-bbmjl\") pod \"openstack-cell1-galera-0\" (UID: \"9535b3e4-e580-4939-9f0f-f57e7b3946c6\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:26.209009 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cc217afe-e88e-46bc-979c-6b88dac1a9da\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cc217afe-e88e-46bc-979c-6b88dac1a9da\") pod \"openstack-cell1-galera-0\" (UID: \"9535b3e4-e580-4939-9f0f-f57e7b3946c6\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:26.240560 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c99a20bb-50d6-4806-ac2a-2e2276d561ef","Type":"ContainerStarted","Data":"a78124ec40e831ff3433aa6f26b5bc845c4e27f143ad668978f5a68dbce0adde"} Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:26.301200 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:26.303389 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:26.305478 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-456pm" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:26.306802 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:26.306948 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:26.324661 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:26.389234 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9093ae6e-39ee-47ca-b0d2-944be9ce4971-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9093ae6e-39ee-47ca-b0d2-944be9ce4971\") " pod="openstack/memcached-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:26.389314 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwvbt\" (UniqueName: \"kubernetes.io/projected/9093ae6e-39ee-47ca-b0d2-944be9ce4971-kube-api-access-vwvbt\") pod \"memcached-0\" (UID: \"9093ae6e-39ee-47ca-b0d2-944be9ce4971\") " pod="openstack/memcached-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:26.389352 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9093ae6e-39ee-47ca-b0d2-944be9ce4971-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9093ae6e-39ee-47ca-b0d2-944be9ce4971\") " pod="openstack/memcached-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:26.389398 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9093ae6e-39ee-47ca-b0d2-944be9ce4971-kolla-config\") pod \"memcached-0\" (UID: \"9093ae6e-39ee-47ca-b0d2-944be9ce4971\") " pod="openstack/memcached-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:26.389559 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9093ae6e-39ee-47ca-b0d2-944be9ce4971-config-data\") pod \"memcached-0\" (UID: \"9093ae6e-39ee-47ca-b0d2-944be9ce4971\") " pod="openstack/memcached-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:26.455578 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:26.491754 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9093ae6e-39ee-47ca-b0d2-944be9ce4971-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9093ae6e-39ee-47ca-b0d2-944be9ce4971\") " pod="openstack/memcached-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:26.492085 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwvbt\" (UniqueName: \"kubernetes.io/projected/9093ae6e-39ee-47ca-b0d2-944be9ce4971-kube-api-access-vwvbt\") pod \"memcached-0\" (UID: \"9093ae6e-39ee-47ca-b0d2-944be9ce4971\") " pod="openstack/memcached-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:26.492124 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9093ae6e-39ee-47ca-b0d2-944be9ce4971-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9093ae6e-39ee-47ca-b0d2-944be9ce4971\") " pod="openstack/memcached-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:26.492168 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9093ae6e-39ee-47ca-b0d2-944be9ce4971-kolla-config\") pod \"memcached-0\" (UID: \"9093ae6e-39ee-47ca-b0d2-944be9ce4971\") " pod="openstack/memcached-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:26.492199 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9093ae6e-39ee-47ca-b0d2-944be9ce4971-config-data\") pod \"memcached-0\" (UID: \"9093ae6e-39ee-47ca-b0d2-944be9ce4971\") " pod="openstack/memcached-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:26.494861 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9093ae6e-39ee-47ca-b0d2-944be9ce4971-kolla-config\") pod \"memcached-0\" (UID: \"9093ae6e-39ee-47ca-b0d2-944be9ce4971\") " pod="openstack/memcached-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:26.496734 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9093ae6e-39ee-47ca-b0d2-944be9ce4971-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9093ae6e-39ee-47ca-b0d2-944be9ce4971\") " pod="openstack/memcached-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:26.497413 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9093ae6e-39ee-47ca-b0d2-944be9ce4971-config-data\") pod \"memcached-0\" (UID: \"9093ae6e-39ee-47ca-b0d2-944be9ce4971\") " pod="openstack/memcached-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:26.499737 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9093ae6e-39ee-47ca-b0d2-944be9ce4971-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9093ae6e-39ee-47ca-b0d2-944be9ce4971\") " pod="openstack/memcached-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:26.512415 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwvbt\" (UniqueName: \"kubernetes.io/projected/9093ae6e-39ee-47ca-b0d2-944be9ce4971-kube-api-access-vwvbt\") pod \"memcached-0\" (UID: \"9093ae6e-39ee-47ca-b0d2-944be9ce4971\") " pod="openstack/memcached-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:26.643548 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:28.757402 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:28.769728 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:28.887829 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:28.888926 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:28.893449 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-llq77" Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:28.917752 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 16:22:28 crc kubenswrapper[4874]: I0217 16:22:28.987587 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skpcr\" (UniqueName: \"kubernetes.io/projected/e1154a55-d86f-4c56-82d4-4d63c35feceb-kube-api-access-skpcr\") pod \"kube-state-metrics-0\" (UID: \"e1154a55-d86f-4c56-82d4-4d63c35feceb\") " pod="openstack/kube-state-metrics-0" Feb 17 16:22:29 crc kubenswrapper[4874]: I0217 16:22:29.090963 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skpcr\" (UniqueName: \"kubernetes.io/projected/e1154a55-d86f-4c56-82d4-4d63c35feceb-kube-api-access-skpcr\") pod \"kube-state-metrics-0\" (UID: \"e1154a55-d86f-4c56-82d4-4d63c35feceb\") " pod="openstack/kube-state-metrics-0" Feb 17 16:22:29 crc kubenswrapper[4874]: I0217 16:22:29.156586 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skpcr\" (UniqueName: 
\"kubernetes.io/projected/e1154a55-d86f-4c56-82d4-4d63c35feceb-kube-api-access-skpcr\") pod \"kube-state-metrics-0\" (UID: \"e1154a55-d86f-4c56-82d4-4d63c35feceb\") " pod="openstack/kube-state-metrics-0" Feb 17 16:22:29 crc kubenswrapper[4874]: I0217 16:22:29.243488 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 16:22:29 crc kubenswrapper[4874]: I0217 16:22:29.649395 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-mpn47"] Feb 17 16:22:29 crc kubenswrapper[4874]: I0217 16:22:29.677085 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-mpn47" Feb 17 16:22:29 crc kubenswrapper[4874]: I0217 16:22:29.678175 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-mpn47"] Feb 17 16:22:29 crc kubenswrapper[4874]: I0217 16:22:29.681964 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-zg449" Feb 17 16:22:29 crc kubenswrapper[4874]: I0217 16:22:29.682527 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Feb 17 16:22:29 crc kubenswrapper[4874]: I0217 16:22:29.818221 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n4bp\" (UniqueName: \"kubernetes.io/projected/4771c857-23aa-4647-a63d-d7a1977ffaa4-kube-api-access-4n4bp\") pod \"observability-ui-dashboards-66cbf594b5-mpn47\" (UID: \"4771c857-23aa-4647-a63d-d7a1977ffaa4\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-mpn47" Feb 17 16:22:29 crc kubenswrapper[4874]: I0217 16:22:29.818441 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/4771c857-23aa-4647-a63d-d7a1977ffaa4-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-mpn47\" (UID: \"4771c857-23aa-4647-a63d-d7a1977ffaa4\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-mpn47" Feb 17 16:22:29 crc kubenswrapper[4874]: I0217 16:22:29.920750 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4771c857-23aa-4647-a63d-d7a1977ffaa4-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-mpn47\" (UID: \"4771c857-23aa-4647-a63d-d7a1977ffaa4\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-mpn47" Feb 17 16:22:29 crc kubenswrapper[4874]: E0217 16:22:29.920907 4874 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Feb 17 16:22:29 crc kubenswrapper[4874]: E0217 16:22:29.923202 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4771c857-23aa-4647-a63d-d7a1977ffaa4-serving-cert podName:4771c857-23aa-4647-a63d-d7a1977ffaa4 nodeName:}" failed. No retries permitted until 2026-02-17 16:22:30.423172933 +0000 UTC m=+1160.717561494 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4771c857-23aa-4647-a63d-d7a1977ffaa4-serving-cert") pod "observability-ui-dashboards-66cbf594b5-mpn47" (UID: "4771c857-23aa-4647-a63d-d7a1977ffaa4") : secret "observability-ui-dashboards" not found Feb 17 16:22:29 crc kubenswrapper[4874]: I0217 16:22:29.923295 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n4bp\" (UniqueName: \"kubernetes.io/projected/4771c857-23aa-4647-a63d-d7a1977ffaa4-kube-api-access-4n4bp\") pod \"observability-ui-dashboards-66cbf594b5-mpn47\" (UID: \"4771c857-23aa-4647-a63d-d7a1977ffaa4\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-mpn47" Feb 17 16:22:29 crc kubenswrapper[4874]: I0217 16:22:29.954705 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f45468f6f-22lbn"] Feb 17 16:22:29 crc kubenswrapper[4874]: I0217 16:22:29.956006 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f45468f6f-22lbn" Feb 17 16:22:29 crc kubenswrapper[4874]: I0217 16:22:29.975415 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n4bp\" (UniqueName: \"kubernetes.io/projected/4771c857-23aa-4647-a63d-d7a1977ffaa4-kube-api-access-4n4bp\") pod \"observability-ui-dashboards-66cbf594b5-mpn47\" (UID: \"4771c857-23aa-4647-a63d-d7a1977ffaa4\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-mpn47" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.017649 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f45468f6f-22lbn"] Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.090368 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.092401 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.098504 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.098873 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-98kh9" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.099063 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.099106 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.099202 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.099531 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.110488 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.111159 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.121317 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.126498 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/962f211e-4cdc-4a53-bc16-3a84d18e69b3-trusted-ca-bundle\") pod 
\"console-f45468f6f-22lbn\" (UID: \"962f211e-4cdc-4a53-bc16-3a84d18e69b3\") " pod="openshift-console/console-f45468f6f-22lbn" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.126581 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/962f211e-4cdc-4a53-bc16-3a84d18e69b3-service-ca\") pod \"console-f45468f6f-22lbn\" (UID: \"962f211e-4cdc-4a53-bc16-3a84d18e69b3\") " pod="openshift-console/console-f45468f6f-22lbn" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.126609 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/962f211e-4cdc-4a53-bc16-3a84d18e69b3-console-serving-cert\") pod \"console-f45468f6f-22lbn\" (UID: \"962f211e-4cdc-4a53-bc16-3a84d18e69b3\") " pod="openshift-console/console-f45468f6f-22lbn" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.127695 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg9sf\" (UniqueName: \"kubernetes.io/projected/962f211e-4cdc-4a53-bc16-3a84d18e69b3-kube-api-access-lg9sf\") pod \"console-f45468f6f-22lbn\" (UID: \"962f211e-4cdc-4a53-bc16-3a84d18e69b3\") " pod="openshift-console/console-f45468f6f-22lbn" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.127754 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/962f211e-4cdc-4a53-bc16-3a84d18e69b3-oauth-serving-cert\") pod \"console-f45468f6f-22lbn\" (UID: \"962f211e-4cdc-4a53-bc16-3a84d18e69b3\") " pod="openshift-console/console-f45468f6f-22lbn" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.127788 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/962f211e-4cdc-4a53-bc16-3a84d18e69b3-console-config\") pod \"console-f45468f6f-22lbn\" (UID: \"962f211e-4cdc-4a53-bc16-3a84d18e69b3\") " pod="openshift-console/console-f45468f6f-22lbn" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.127825 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/962f211e-4cdc-4a53-bc16-3a84d18e69b3-console-oauth-config\") pod \"console-f45468f6f-22lbn\" (UID: \"962f211e-4cdc-4a53-bc16-3a84d18e69b3\") " pod="openshift-console/console-f45468f6f-22lbn" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.229577 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/962f211e-4cdc-4a53-bc16-3a84d18e69b3-console-config\") pod \"console-f45468f6f-22lbn\" (UID: \"962f211e-4cdc-4a53-bc16-3a84d18e69b3\") " pod="openshift-console/console-f45468f6f-22lbn" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.229648 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.229688 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/962f211e-4cdc-4a53-bc16-3a84d18e69b3-console-oauth-config\") pod \"console-f45468f6f-22lbn\" (UID: \"962f211e-4cdc-4a53-bc16-3a84d18e69b3\") " pod="openshift-console/console-f45468f6f-22lbn" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.229721 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-config\") pod \"prometheus-metric-storage-0\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.229758 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.229807 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/962f211e-4cdc-4a53-bc16-3a84d18e69b3-trusted-ca-bundle\") pod \"console-f45468f6f-22lbn\" (UID: \"962f211e-4cdc-4a53-bc16-3a84d18e69b3\") " pod="openshift-console/console-f45468f6f-22lbn" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.229846 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.229910 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7d513998-97d5-40a9-af0e-749b510e28ad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7d513998-97d5-40a9-af0e-749b510e28ad\") pod \"prometheus-metric-storage-0\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.229939 4874 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.229958 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/962f211e-4cdc-4a53-bc16-3a84d18e69b3-service-ca\") pod \"console-f45468f6f-22lbn\" (UID: \"962f211e-4cdc-4a53-bc16-3a84d18e69b3\") " pod="openshift-console/console-f45468f6f-22lbn" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.229982 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/962f211e-4cdc-4a53-bc16-3a84d18e69b3-console-serving-cert\") pod \"console-f45468f6f-22lbn\" (UID: \"962f211e-4cdc-4a53-bc16-3a84d18e69b3\") " pod="openshift-console/console-f45468f6f-22lbn" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.230022 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn8rm\" (UniqueName: \"kubernetes.io/projected/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-kube-api-access-gn8rm\") pod \"prometheus-metric-storage-0\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.230047 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") " 
pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.230093 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lg9sf\" (UniqueName: \"kubernetes.io/projected/962f211e-4cdc-4a53-bc16-3a84d18e69b3-kube-api-access-lg9sf\") pod \"console-f45468f6f-22lbn\" (UID: \"962f211e-4cdc-4a53-bc16-3a84d18e69b3\") " pod="openshift-console/console-f45468f6f-22lbn" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.230130 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.230146 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.230169 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/962f211e-4cdc-4a53-bc16-3a84d18e69b3-oauth-serving-cert\") pod \"console-f45468f6f-22lbn\" (UID: \"962f211e-4cdc-4a53-bc16-3a84d18e69b3\") " pod="openshift-console/console-f45468f6f-22lbn" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.231295 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/962f211e-4cdc-4a53-bc16-3a84d18e69b3-trusted-ca-bundle\") pod \"console-f45468f6f-22lbn\" (UID: \"962f211e-4cdc-4a53-bc16-3a84d18e69b3\") " 
pod="openshift-console/console-f45468f6f-22lbn" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.231449 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/962f211e-4cdc-4a53-bc16-3a84d18e69b3-console-config\") pod \"console-f45468f6f-22lbn\" (UID: \"962f211e-4cdc-4a53-bc16-3a84d18e69b3\") " pod="openshift-console/console-f45468f6f-22lbn" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.231867 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/962f211e-4cdc-4a53-bc16-3a84d18e69b3-oauth-serving-cert\") pod \"console-f45468f6f-22lbn\" (UID: \"962f211e-4cdc-4a53-bc16-3a84d18e69b3\") " pod="openshift-console/console-f45468f6f-22lbn" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.237681 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/962f211e-4cdc-4a53-bc16-3a84d18e69b3-service-ca\") pod \"console-f45468f6f-22lbn\" (UID: \"962f211e-4cdc-4a53-bc16-3a84d18e69b3\") " pod="openshift-console/console-f45468f6f-22lbn" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.239475 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/962f211e-4cdc-4a53-bc16-3a84d18e69b3-console-serving-cert\") pod \"console-f45468f6f-22lbn\" (UID: \"962f211e-4cdc-4a53-bc16-3a84d18e69b3\") " pod="openshift-console/console-f45468f6f-22lbn" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.255238 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/962f211e-4cdc-4a53-bc16-3a84d18e69b3-console-oauth-config\") pod \"console-f45468f6f-22lbn\" (UID: \"962f211e-4cdc-4a53-bc16-3a84d18e69b3\") " pod="openshift-console/console-f45468f6f-22lbn" Feb 17 16:22:30 crc 
kubenswrapper[4874]: I0217 16:22:30.274593 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg9sf\" (UniqueName: \"kubernetes.io/projected/962f211e-4cdc-4a53-bc16-3a84d18e69b3-kube-api-access-lg9sf\") pod \"console-f45468f6f-22lbn\" (UID: \"962f211e-4cdc-4a53-bc16-3a84d18e69b3\") " pod="openshift-console/console-f45468f6f-22lbn" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.326761 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f45468f6f-22lbn" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.336525 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn8rm\" (UniqueName: \"kubernetes.io/projected/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-kube-api-access-gn8rm\") pod \"prometheus-metric-storage-0\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.336725 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.336810 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.336839 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.336902 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.336960 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-config\") pod \"prometheus-metric-storage-0\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.337007 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.337100 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.337160 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7d513998-97d5-40a9-af0e-749b510e28ad\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7d513998-97d5-40a9-af0e-749b510e28ad\") pod \"prometheus-metric-storage-0\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.337206 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.338172 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.339014 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.347067 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-config\") pod \"prometheus-metric-storage-0\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.348388 4874 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.350954 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.355983 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.366794 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.377814 4874 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.377865 4874 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7d513998-97d5-40a9-af0e-749b510e28ad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7d513998-97d5-40a9-af0e-749b510e28ad\") pod \"prometheus-metric-storage-0\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bac90798cae0603f2cddffed7e2fcd4826a4a45d6415d5b4e65c98946b029a54/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.379002 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn8rm\" (UniqueName: \"kubernetes.io/projected/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-kube-api-access-gn8rm\") pod \"prometheus-metric-storage-0\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.384964 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.441398 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7d513998-97d5-40a9-af0e-749b510e28ad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7d513998-97d5-40a9-af0e-749b510e28ad\") pod \"prometheus-metric-storage-0\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.441669 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4771c857-23aa-4647-a63d-d7a1977ffaa4-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-mpn47\" (UID: \"4771c857-23aa-4647-a63d-d7a1977ffaa4\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-mpn47" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.446590 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4771c857-23aa-4647-a63d-d7a1977ffaa4-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-mpn47\" (UID: \"4771c857-23aa-4647-a63d-d7a1977ffaa4\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-mpn47" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.642616 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-mpn47" Feb 17 16:22:30 crc kubenswrapper[4874]: I0217 16:22:30.720862 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.382286 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-tpgc2"] Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.384122 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-tpgc2" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.388564 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.388776 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.388892 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-mgxmf" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.404319 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-tpgc2"] Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.450749 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-pzc25"] Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.453411 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-pzc25" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.459439 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-pzc25"] Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.492858 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4132e8e3-7498-4df0-9d6d-2dd7c096218a-combined-ca-bundle\") pod \"ovn-controller-tpgc2\" (UID: \"4132e8e3-7498-4df0-9d6d-2dd7c096218a\") " pod="openstack/ovn-controller-tpgc2" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.493360 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrstx\" (UniqueName: \"kubernetes.io/projected/4132e8e3-7498-4df0-9d6d-2dd7c096218a-kube-api-access-mrstx\") pod \"ovn-controller-tpgc2\" (UID: \"4132e8e3-7498-4df0-9d6d-2dd7c096218a\") " pod="openstack/ovn-controller-tpgc2" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.493999 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4132e8e3-7498-4df0-9d6d-2dd7c096218a-var-run-ovn\") pod \"ovn-controller-tpgc2\" (UID: \"4132e8e3-7498-4df0-9d6d-2dd7c096218a\") " pod="openstack/ovn-controller-tpgc2" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.494052 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4132e8e3-7498-4df0-9d6d-2dd7c096218a-ovn-controller-tls-certs\") pod \"ovn-controller-tpgc2\" (UID: \"4132e8e3-7498-4df0-9d6d-2dd7c096218a\") " pod="openstack/ovn-controller-tpgc2" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.503920 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/4132e8e3-7498-4df0-9d6d-2dd7c096218a-scripts\") pod \"ovn-controller-tpgc2\" (UID: \"4132e8e3-7498-4df0-9d6d-2dd7c096218a\") " pod="openstack/ovn-controller-tpgc2" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.503982 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4132e8e3-7498-4df0-9d6d-2dd7c096218a-var-log-ovn\") pod \"ovn-controller-tpgc2\" (UID: \"4132e8e3-7498-4df0-9d6d-2dd7c096218a\") " pod="openstack/ovn-controller-tpgc2" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.504046 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4132e8e3-7498-4df0-9d6d-2dd7c096218a-var-run\") pod \"ovn-controller-tpgc2\" (UID: \"4132e8e3-7498-4df0-9d6d-2dd7c096218a\") " pod="openstack/ovn-controller-tpgc2" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.605523 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4132e8e3-7498-4df0-9d6d-2dd7c096218a-scripts\") pod \"ovn-controller-tpgc2\" (UID: \"4132e8e3-7498-4df0-9d6d-2dd7c096218a\") " pod="openstack/ovn-controller-tpgc2" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.605572 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/80cf5dc3-e4d1-4d7c-b598-36a083080a66-etc-ovs\") pod \"ovn-controller-ovs-pzc25\" (UID: \"80cf5dc3-e4d1-4d7c-b598-36a083080a66\") " pod="openstack/ovn-controller-ovs-pzc25" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.605594 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4132e8e3-7498-4df0-9d6d-2dd7c096218a-var-log-ovn\") pod \"ovn-controller-tpgc2\" (UID: 
\"4132e8e3-7498-4df0-9d6d-2dd7c096218a\") " pod="openstack/ovn-controller-tpgc2" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.605625 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4132e8e3-7498-4df0-9d6d-2dd7c096218a-var-run\") pod \"ovn-controller-tpgc2\" (UID: \"4132e8e3-7498-4df0-9d6d-2dd7c096218a\") " pod="openstack/ovn-controller-tpgc2" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.605671 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4132e8e3-7498-4df0-9d6d-2dd7c096218a-combined-ca-bundle\") pod \"ovn-controller-tpgc2\" (UID: \"4132e8e3-7498-4df0-9d6d-2dd7c096218a\") " pod="openstack/ovn-controller-tpgc2" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.605766 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/80cf5dc3-e4d1-4d7c-b598-36a083080a66-scripts\") pod \"ovn-controller-ovs-pzc25\" (UID: \"80cf5dc3-e4d1-4d7c-b598-36a083080a66\") " pod="openstack/ovn-controller-ovs-pzc25" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.605788 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrstx\" (UniqueName: \"kubernetes.io/projected/4132e8e3-7498-4df0-9d6d-2dd7c096218a-kube-api-access-mrstx\") pod \"ovn-controller-tpgc2\" (UID: \"4132e8e3-7498-4df0-9d6d-2dd7c096218a\") " pod="openstack/ovn-controller-tpgc2" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.605813 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/80cf5dc3-e4d1-4d7c-b598-36a083080a66-var-log\") pod \"ovn-controller-ovs-pzc25\" (UID: \"80cf5dc3-e4d1-4d7c-b598-36a083080a66\") " pod="openstack/ovn-controller-ovs-pzc25" Feb 17 16:22:31 crc 
kubenswrapper[4874]: I0217 16:22:31.605844 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd5j2\" (UniqueName: \"kubernetes.io/projected/80cf5dc3-e4d1-4d7c-b598-36a083080a66-kube-api-access-jd5j2\") pod \"ovn-controller-ovs-pzc25\" (UID: \"80cf5dc3-e4d1-4d7c-b598-36a083080a66\") " pod="openstack/ovn-controller-ovs-pzc25" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.605860 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/80cf5dc3-e4d1-4d7c-b598-36a083080a66-var-run\") pod \"ovn-controller-ovs-pzc25\" (UID: \"80cf5dc3-e4d1-4d7c-b598-36a083080a66\") " pod="openstack/ovn-controller-ovs-pzc25" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.605895 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4132e8e3-7498-4df0-9d6d-2dd7c096218a-var-run-ovn\") pod \"ovn-controller-tpgc2\" (UID: \"4132e8e3-7498-4df0-9d6d-2dd7c096218a\") " pod="openstack/ovn-controller-tpgc2" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.605920 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4132e8e3-7498-4df0-9d6d-2dd7c096218a-ovn-controller-tls-certs\") pod \"ovn-controller-tpgc2\" (UID: \"4132e8e3-7498-4df0-9d6d-2dd7c096218a\") " pod="openstack/ovn-controller-tpgc2" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.605951 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/80cf5dc3-e4d1-4d7c-b598-36a083080a66-var-lib\") pod \"ovn-controller-ovs-pzc25\" (UID: \"80cf5dc3-e4d1-4d7c-b598-36a083080a66\") " pod="openstack/ovn-controller-ovs-pzc25" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.606708 4874 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4132e8e3-7498-4df0-9d6d-2dd7c096218a-var-log-ovn\") pod \"ovn-controller-tpgc2\" (UID: \"4132e8e3-7498-4df0-9d6d-2dd7c096218a\") " pod="openstack/ovn-controller-tpgc2" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.606799 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4132e8e3-7498-4df0-9d6d-2dd7c096218a-var-run\") pod \"ovn-controller-tpgc2\" (UID: \"4132e8e3-7498-4df0-9d6d-2dd7c096218a\") " pod="openstack/ovn-controller-tpgc2" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.606884 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4132e8e3-7498-4df0-9d6d-2dd7c096218a-var-run-ovn\") pod \"ovn-controller-tpgc2\" (UID: \"4132e8e3-7498-4df0-9d6d-2dd7c096218a\") " pod="openstack/ovn-controller-tpgc2" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.608128 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4132e8e3-7498-4df0-9d6d-2dd7c096218a-scripts\") pod \"ovn-controller-tpgc2\" (UID: \"4132e8e3-7498-4df0-9d6d-2dd7c096218a\") " pod="openstack/ovn-controller-tpgc2" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.613144 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4132e8e3-7498-4df0-9d6d-2dd7c096218a-ovn-controller-tls-certs\") pod \"ovn-controller-tpgc2\" (UID: \"4132e8e3-7498-4df0-9d6d-2dd7c096218a\") " pod="openstack/ovn-controller-tpgc2" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.623706 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4132e8e3-7498-4df0-9d6d-2dd7c096218a-combined-ca-bundle\") pod 
\"ovn-controller-tpgc2\" (UID: \"4132e8e3-7498-4df0-9d6d-2dd7c096218a\") " pod="openstack/ovn-controller-tpgc2" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.637616 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrstx\" (UniqueName: \"kubernetes.io/projected/4132e8e3-7498-4df0-9d6d-2dd7c096218a-kube-api-access-mrstx\") pod \"ovn-controller-tpgc2\" (UID: \"4132e8e3-7498-4df0-9d6d-2dd7c096218a\") " pod="openstack/ovn-controller-tpgc2" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.710808 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/80cf5dc3-e4d1-4d7c-b598-36a083080a66-etc-ovs\") pod \"ovn-controller-ovs-pzc25\" (UID: \"80cf5dc3-e4d1-4d7c-b598-36a083080a66\") " pod="openstack/ovn-controller-ovs-pzc25" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.710913 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/80cf5dc3-e4d1-4d7c-b598-36a083080a66-scripts\") pod \"ovn-controller-ovs-pzc25\" (UID: \"80cf5dc3-e4d1-4d7c-b598-36a083080a66\") " pod="openstack/ovn-controller-ovs-pzc25" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.710932 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/80cf5dc3-e4d1-4d7c-b598-36a083080a66-var-log\") pod \"ovn-controller-ovs-pzc25\" (UID: \"80cf5dc3-e4d1-4d7c-b598-36a083080a66\") " pod="openstack/ovn-controller-ovs-pzc25" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.710950 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jd5j2\" (UniqueName: \"kubernetes.io/projected/80cf5dc3-e4d1-4d7c-b598-36a083080a66-kube-api-access-jd5j2\") pod \"ovn-controller-ovs-pzc25\" (UID: \"80cf5dc3-e4d1-4d7c-b598-36a083080a66\") " pod="openstack/ovn-controller-ovs-pzc25" Feb 17 
16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.710967 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/80cf5dc3-e4d1-4d7c-b598-36a083080a66-var-run\") pod \"ovn-controller-ovs-pzc25\" (UID: \"80cf5dc3-e4d1-4d7c-b598-36a083080a66\") " pod="openstack/ovn-controller-ovs-pzc25" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.711028 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/80cf5dc3-e4d1-4d7c-b598-36a083080a66-var-lib\") pod \"ovn-controller-ovs-pzc25\" (UID: \"80cf5dc3-e4d1-4d7c-b598-36a083080a66\") " pod="openstack/ovn-controller-ovs-pzc25" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.711029 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/80cf5dc3-e4d1-4d7c-b598-36a083080a66-etc-ovs\") pod \"ovn-controller-ovs-pzc25\" (UID: \"80cf5dc3-e4d1-4d7c-b598-36a083080a66\") " pod="openstack/ovn-controller-ovs-pzc25" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.711197 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/80cf5dc3-e4d1-4d7c-b598-36a083080a66-var-lib\") pod \"ovn-controller-ovs-pzc25\" (UID: \"80cf5dc3-e4d1-4d7c-b598-36a083080a66\") " pod="openstack/ovn-controller-ovs-pzc25" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.711205 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/80cf5dc3-e4d1-4d7c-b598-36a083080a66-var-log\") pod \"ovn-controller-ovs-pzc25\" (UID: \"80cf5dc3-e4d1-4d7c-b598-36a083080a66\") " pod="openstack/ovn-controller-ovs-pzc25" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.711239 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/80cf5dc3-e4d1-4d7c-b598-36a083080a66-var-run\") pod \"ovn-controller-ovs-pzc25\" (UID: \"80cf5dc3-e4d1-4d7c-b598-36a083080a66\") " pod="openstack/ovn-controller-ovs-pzc25" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.712891 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/80cf5dc3-e4d1-4d7c-b598-36a083080a66-scripts\") pod \"ovn-controller-ovs-pzc25\" (UID: \"80cf5dc3-e4d1-4d7c-b598-36a083080a66\") " pod="openstack/ovn-controller-ovs-pzc25" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.745553 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-tpgc2" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.747594 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jd5j2\" (UniqueName: \"kubernetes.io/projected/80cf5dc3-e4d1-4d7c-b598-36a083080a66-kube-api-access-jd5j2\") pod \"ovn-controller-ovs-pzc25\" (UID: \"80cf5dc3-e4d1-4d7c-b598-36a083080a66\") " pod="openstack/ovn-controller-ovs-pzc25" Feb 17 16:22:31 crc kubenswrapper[4874]: I0217 16:22:31.823556 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-pzc25" Feb 17 16:22:32 crc kubenswrapper[4874]: I0217 16:22:32.259012 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 17 16:22:32 crc kubenswrapper[4874]: I0217 16:22:32.263088 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 17 16:22:32 crc kubenswrapper[4874]: I0217 16:22:32.267340 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-p2lbk" Feb 17 16:22:32 crc kubenswrapper[4874]: I0217 16:22:32.267345 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 17 16:22:32 crc kubenswrapper[4874]: I0217 16:22:32.267880 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 17 16:22:32 crc kubenswrapper[4874]: I0217 16:22:32.268098 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 17 16:22:32 crc kubenswrapper[4874]: I0217 16:22:32.268310 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 17 16:22:32 crc kubenswrapper[4874]: I0217 16:22:32.277334 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 17 16:22:32 crc kubenswrapper[4874]: I0217 16:22:32.323044 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5mmj\" (UniqueName: \"kubernetes.io/projected/f95c3b85-c546-47d7-9b75-7577455ab464-kube-api-access-t5mmj\") pod \"ovsdbserver-nb-0\" (UID: \"f95c3b85-c546-47d7-9b75-7577455ab464\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:22:32 crc kubenswrapper[4874]: I0217 16:22:32.323229 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f95c3b85-c546-47d7-9b75-7577455ab464-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"f95c3b85-c546-47d7-9b75-7577455ab464\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:22:32 crc kubenswrapper[4874]: I0217 16:22:32.323291 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f95c3b85-c546-47d7-9b75-7577455ab464-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"f95c3b85-c546-47d7-9b75-7577455ab464\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:22:32 crc kubenswrapper[4874]: I0217 16:22:32.323322 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f95c3b85-c546-47d7-9b75-7577455ab464-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"f95c3b85-c546-47d7-9b75-7577455ab464\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:22:32 crc kubenswrapper[4874]: I0217 16:22:32.323339 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f95c3b85-c546-47d7-9b75-7577455ab464-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f95c3b85-c546-47d7-9b75-7577455ab464\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:22:32 crc kubenswrapper[4874]: I0217 16:22:32.323385 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9ca67bee-3502-4ada-84fa-3a01a94804f8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9ca67bee-3502-4ada-84fa-3a01a94804f8\") pod \"ovsdbserver-nb-0\" (UID: \"f95c3b85-c546-47d7-9b75-7577455ab464\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:22:32 crc kubenswrapper[4874]: I0217 16:22:32.323418 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f95c3b85-c546-47d7-9b75-7577455ab464-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f95c3b85-c546-47d7-9b75-7577455ab464\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:22:32 crc kubenswrapper[4874]: I0217 16:22:32.323488 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/f95c3b85-c546-47d7-9b75-7577455ab464-config\") pod \"ovsdbserver-nb-0\" (UID: \"f95c3b85-c546-47d7-9b75-7577455ab464\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:22:32 crc kubenswrapper[4874]: I0217 16:22:32.426742 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5mmj\" (UniqueName: \"kubernetes.io/projected/f95c3b85-c546-47d7-9b75-7577455ab464-kube-api-access-t5mmj\") pod \"ovsdbserver-nb-0\" (UID: \"f95c3b85-c546-47d7-9b75-7577455ab464\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:22:32 crc kubenswrapper[4874]: I0217 16:22:32.426791 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f95c3b85-c546-47d7-9b75-7577455ab464-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"f95c3b85-c546-47d7-9b75-7577455ab464\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:22:32 crc kubenswrapper[4874]: I0217 16:22:32.426826 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f95c3b85-c546-47d7-9b75-7577455ab464-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"f95c3b85-c546-47d7-9b75-7577455ab464\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:22:32 crc kubenswrapper[4874]: I0217 16:22:32.426849 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f95c3b85-c546-47d7-9b75-7577455ab464-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"f95c3b85-c546-47d7-9b75-7577455ab464\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:22:32 crc kubenswrapper[4874]: I0217 16:22:32.426864 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f95c3b85-c546-47d7-9b75-7577455ab464-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: 
\"f95c3b85-c546-47d7-9b75-7577455ab464\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:22:32 crc kubenswrapper[4874]: I0217 16:22:32.426889 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9ca67bee-3502-4ada-84fa-3a01a94804f8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9ca67bee-3502-4ada-84fa-3a01a94804f8\") pod \"ovsdbserver-nb-0\" (UID: \"f95c3b85-c546-47d7-9b75-7577455ab464\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:22:32 crc kubenswrapper[4874]: I0217 16:22:32.426918 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f95c3b85-c546-47d7-9b75-7577455ab464-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f95c3b85-c546-47d7-9b75-7577455ab464\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:22:32 crc kubenswrapper[4874]: I0217 16:22:32.426960 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f95c3b85-c546-47d7-9b75-7577455ab464-config\") pod \"ovsdbserver-nb-0\" (UID: \"f95c3b85-c546-47d7-9b75-7577455ab464\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:22:32 crc kubenswrapper[4874]: I0217 16:22:32.427553 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f95c3b85-c546-47d7-9b75-7577455ab464-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"f95c3b85-c546-47d7-9b75-7577455ab464\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:22:32 crc kubenswrapper[4874]: I0217 16:22:32.427961 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f95c3b85-c546-47d7-9b75-7577455ab464-config\") pod \"ovsdbserver-nb-0\" (UID: \"f95c3b85-c546-47d7-9b75-7577455ab464\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:22:32 crc kubenswrapper[4874]: I0217 16:22:32.428384 4874 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f95c3b85-c546-47d7-9b75-7577455ab464-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"f95c3b85-c546-47d7-9b75-7577455ab464\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:22:32 crc kubenswrapper[4874]: I0217 16:22:32.431037 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f95c3b85-c546-47d7-9b75-7577455ab464-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f95c3b85-c546-47d7-9b75-7577455ab464\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:22:32 crc kubenswrapper[4874]: I0217 16:22:32.435738 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f95c3b85-c546-47d7-9b75-7577455ab464-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"f95c3b85-c546-47d7-9b75-7577455ab464\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:22:32 crc kubenswrapper[4874]: I0217 16:22:32.437245 4874 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:22:32 crc kubenswrapper[4874]: I0217 16:22:32.437297 4874 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9ca67bee-3502-4ada-84fa-3a01a94804f8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9ca67bee-3502-4ada-84fa-3a01a94804f8\") pod \"ovsdbserver-nb-0\" (UID: \"f95c3b85-c546-47d7-9b75-7577455ab464\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bc4b0fe86232b8a24c4fd7049826a68a1b0b45ffc6e3c6bf6272ee23853ad360/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 17 16:22:32 crc kubenswrapper[4874]: I0217 16:22:32.444224 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5mmj\" (UniqueName: \"kubernetes.io/projected/f95c3b85-c546-47d7-9b75-7577455ab464-kube-api-access-t5mmj\") pod \"ovsdbserver-nb-0\" (UID: \"f95c3b85-c546-47d7-9b75-7577455ab464\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:22:32 crc kubenswrapper[4874]: I0217 16:22:32.447686 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f95c3b85-c546-47d7-9b75-7577455ab464-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f95c3b85-c546-47d7-9b75-7577455ab464\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:22:32 crc kubenswrapper[4874]: I0217 16:22:32.491131 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9ca67bee-3502-4ada-84fa-3a01a94804f8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9ca67bee-3502-4ada-84fa-3a01a94804f8\") pod \"ovsdbserver-nb-0\" (UID: \"f95c3b85-c546-47d7-9b75-7577455ab464\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:22:32 crc kubenswrapper[4874]: I0217 16:22:32.600609 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 17 16:22:33 crc kubenswrapper[4874]: W0217 16:22:33.840731 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9535b3e4_e580_4939_9f0f_f57e7b3946c6.slice/crio-f2c89de4a3a18df67661d171a58a7d565165f5d850c6882f4d16fc296c7fd505 WatchSource:0}: Error finding container f2c89de4a3a18df67661d171a58a7d565165f5d850c6882f4d16fc296c7fd505: Status 404 returned error can't find the container with id f2c89de4a3a18df67661d171a58a7d565165f5d850c6882f4d16fc296c7fd505 Feb 17 16:22:33 crc kubenswrapper[4874]: W0217 16:22:33.847731 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9093ae6e_39ee_47ca_b0d2_944be9ce4971.slice/crio-65880437cf3380efb989fbc5beb6274d2c7a637d90e78d140e39104e3ba76ec2 WatchSource:0}: Error finding container 65880437cf3380efb989fbc5beb6274d2c7a637d90e78d140e39104e3ba76ec2: Status 404 returned error can't find the container with id 65880437cf3380efb989fbc5beb6274d2c7a637d90e78d140e39104e3ba76ec2 Feb 17 16:22:34 crc kubenswrapper[4874]: I0217 16:22:34.362681 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9535b3e4-e580-4939-9f0f-f57e7b3946c6","Type":"ContainerStarted","Data":"f2c89de4a3a18df67661d171a58a7d565165f5d850c6882f4d16fc296c7fd505"} Feb 17 16:22:34 crc kubenswrapper[4874]: I0217 16:22:34.364257 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"9093ae6e-39ee-47ca-b0d2-944be9ce4971","Type":"ContainerStarted","Data":"65880437cf3380efb989fbc5beb6274d2c7a637d90e78d140e39104e3ba76ec2"} Feb 17 16:22:35 crc kubenswrapper[4874]: I0217 16:22:35.561644 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 17 16:22:35 crc kubenswrapper[4874]: I0217 16:22:35.564232 4874 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 17 16:22:35 crc kubenswrapper[4874]: I0217 16:22:35.568195 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 17 16:22:35 crc kubenswrapper[4874]: I0217 16:22:35.569782 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 17 16:22:35 crc kubenswrapper[4874]: I0217 16:22:35.569872 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-pprnl" Feb 17 16:22:35 crc kubenswrapper[4874]: I0217 16:22:35.570306 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 17 16:22:35 crc kubenswrapper[4874]: I0217 16:22:35.570376 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 17 16:22:35 crc kubenswrapper[4874]: I0217 16:22:35.688661 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2b366ca-9778-45e9-8d34-5708857a85cc-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c2b366ca-9778-45e9-8d34-5708857a85cc\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:22:35 crc kubenswrapper[4874]: I0217 16:22:35.688852 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2b366ca-9778-45e9-8d34-5708857a85cc-config\") pod \"ovsdbserver-sb-0\" (UID: \"c2b366ca-9778-45e9-8d34-5708857a85cc\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:22:35 crc kubenswrapper[4874]: I0217 16:22:35.688880 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2b366ca-9778-45e9-8d34-5708857a85cc-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" 
(UID: \"c2b366ca-9778-45e9-8d34-5708857a85cc\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:22:35 crc kubenswrapper[4874]: I0217 16:22:35.688896 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2b366ca-9778-45e9-8d34-5708857a85cc-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c2b366ca-9778-45e9-8d34-5708857a85cc\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:22:35 crc kubenswrapper[4874]: I0217 16:22:35.688991 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c2b366ca-9778-45e9-8d34-5708857a85cc-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c2b366ca-9778-45e9-8d34-5708857a85cc\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:22:35 crc kubenswrapper[4874]: I0217 16:22:35.689104 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2b366ca-9778-45e9-8d34-5708857a85cc-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c2b366ca-9778-45e9-8d34-5708857a85cc\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:22:35 crc kubenswrapper[4874]: I0217 16:22:35.689207 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8d57c6f7-22a0-4acc-a028-80e66daae3fb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d57c6f7-22a0-4acc-a028-80e66daae3fb\") pod \"ovsdbserver-sb-0\" (UID: \"c2b366ca-9778-45e9-8d34-5708857a85cc\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:22:35 crc kubenswrapper[4874]: I0217 16:22:35.689249 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7w5j\" (UniqueName: \"kubernetes.io/projected/c2b366ca-9778-45e9-8d34-5708857a85cc-kube-api-access-m7w5j\") pod \"ovsdbserver-sb-0\" (UID: 
\"c2b366ca-9778-45e9-8d34-5708857a85cc\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:22:35 crc kubenswrapper[4874]: I0217 16:22:35.791112 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2b366ca-9778-45e9-8d34-5708857a85cc-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c2b366ca-9778-45e9-8d34-5708857a85cc\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:22:35 crc kubenswrapper[4874]: I0217 16:22:35.791201 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2b366ca-9778-45e9-8d34-5708857a85cc-config\") pod \"ovsdbserver-sb-0\" (UID: \"c2b366ca-9778-45e9-8d34-5708857a85cc\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:22:35 crc kubenswrapper[4874]: I0217 16:22:35.791231 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2b366ca-9778-45e9-8d34-5708857a85cc-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c2b366ca-9778-45e9-8d34-5708857a85cc\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:22:35 crc kubenswrapper[4874]: I0217 16:22:35.791250 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2b366ca-9778-45e9-8d34-5708857a85cc-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c2b366ca-9778-45e9-8d34-5708857a85cc\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:22:35 crc kubenswrapper[4874]: I0217 16:22:35.791292 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c2b366ca-9778-45e9-8d34-5708857a85cc-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c2b366ca-9778-45e9-8d34-5708857a85cc\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:22:35 crc kubenswrapper[4874]: I0217 16:22:35.791892 4874 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c2b366ca-9778-45e9-8d34-5708857a85cc-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c2b366ca-9778-45e9-8d34-5708857a85cc\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:22:35 crc kubenswrapper[4874]: I0217 16:22:35.792389 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2b366ca-9778-45e9-8d34-5708857a85cc-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c2b366ca-9778-45e9-8d34-5708857a85cc\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:22:35 crc kubenswrapper[4874]: I0217 16:22:35.792653 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2b366ca-9778-45e9-8d34-5708857a85cc-config\") pod \"ovsdbserver-sb-0\" (UID: \"c2b366ca-9778-45e9-8d34-5708857a85cc\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:22:35 crc kubenswrapper[4874]: I0217 16:22:35.793250 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2b366ca-9778-45e9-8d34-5708857a85cc-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c2b366ca-9778-45e9-8d34-5708857a85cc\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:22:35 crc kubenswrapper[4874]: I0217 16:22:35.793341 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8d57c6f7-22a0-4acc-a028-80e66daae3fb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d57c6f7-22a0-4acc-a028-80e66daae3fb\") pod \"ovsdbserver-sb-0\" (UID: \"c2b366ca-9778-45e9-8d34-5708857a85cc\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:22:35 crc kubenswrapper[4874]: I0217 16:22:35.793383 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7w5j\" (UniqueName: \"kubernetes.io/projected/c2b366ca-9778-45e9-8d34-5708857a85cc-kube-api-access-m7w5j\") 
pod \"ovsdbserver-sb-0\" (UID: \"c2b366ca-9778-45e9-8d34-5708857a85cc\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:22:35 crc kubenswrapper[4874]: I0217 16:22:35.797341 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2b366ca-9778-45e9-8d34-5708857a85cc-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c2b366ca-9778-45e9-8d34-5708857a85cc\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:22:35 crc kubenswrapper[4874]: I0217 16:22:35.798194 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2b366ca-9778-45e9-8d34-5708857a85cc-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c2b366ca-9778-45e9-8d34-5708857a85cc\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:22:35 crc kubenswrapper[4874]: I0217 16:22:35.798432 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2b366ca-9778-45e9-8d34-5708857a85cc-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c2b366ca-9778-45e9-8d34-5708857a85cc\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:22:35 crc kubenswrapper[4874]: I0217 16:22:35.800045 4874 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:22:35 crc kubenswrapper[4874]: I0217 16:22:35.800111 4874 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8d57c6f7-22a0-4acc-a028-80e66daae3fb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d57c6f7-22a0-4acc-a028-80e66daae3fb\") pod \"ovsdbserver-sb-0\" (UID: \"c2b366ca-9778-45e9-8d34-5708857a85cc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/034e2d59ff08f4438719be7c8cb4ae2bf8aee9d26a5407f22f5b53d6a40848f6/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 17 16:22:35 crc kubenswrapper[4874]: I0217 16:22:35.808935 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7w5j\" (UniqueName: \"kubernetes.io/projected/c2b366ca-9778-45e9-8d34-5708857a85cc-kube-api-access-m7w5j\") pod \"ovsdbserver-sb-0\" (UID: \"c2b366ca-9778-45e9-8d34-5708857a85cc\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:22:35 crc kubenswrapper[4874]: I0217 16:22:35.840592 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8d57c6f7-22a0-4acc-a028-80e66daae3fb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d57c6f7-22a0-4acc-a028-80e66daae3fb\") pod \"ovsdbserver-sb-0\" (UID: \"c2b366ca-9778-45e9-8d34-5708857a85cc\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:22:35 crc kubenswrapper[4874]: I0217 16:22:35.884355 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 17 16:22:46 crc kubenswrapper[4874]: E0217 16:22:46.628436 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 17 16:22:46 crc kubenswrapper[4874]: E0217 16:22:46.629304 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qdr22,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(476813ee-f26a-4068-a5e9-87b5a20fece5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:22:46 crc 
kubenswrapper[4874]: E0217 16:22:46.630958 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="476813ee-f26a-4068-a5e9-87b5a20fece5" Feb 17 16:22:46 crc kubenswrapper[4874]: E0217 16:22:46.640517 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 17 16:22:46 crc kubenswrapper[4874]: E0217 16:22:46.641414 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fvdzx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-2_openstack(ed7dc41e-9863-4c74-8675-56fca22db08a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:22:46 crc 
kubenswrapper[4874]: E0217 16:22:46.643166 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-2" podUID="ed7dc41e-9863-4c74-8675-56fca22db08a" Feb 17 16:22:46 crc kubenswrapper[4874]: E0217 16:22:46.682879 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 17 16:22:46 crc kubenswrapper[4874]: E0217 16:22:46.683273 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6wgvh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(7eb994d5-6ecb-4a2d-bafc-86c9f107802c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:22:46 crc 
kubenswrapper[4874]: E0217 16:22:46.684553 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="7eb994d5-6ecb-4a2d-bafc-86c9f107802c" Feb 17 16:22:47 crc kubenswrapper[4874]: E0217 16:22:47.505656 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-2" podUID="ed7dc41e-9863-4c74-8675-56fca22db08a" Feb 17 16:22:47 crc kubenswrapper[4874]: E0217 16:22:47.505738 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="476813ee-f26a-4068-a5e9-87b5a20fece5" Feb 17 16:22:47 crc kubenswrapper[4874]: E0217 16:22:47.506875 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="7eb994d5-6ecb-4a2d-bafc-86c9f107802c" Feb 17 16:22:47 crc kubenswrapper[4874]: E0217 16:22:47.684522 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 17 16:22:47 crc kubenswrapper[4874]: E0217 16:22:47.684697 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7c4lk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},S
tartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-4rz47_openstack(5c1c05ac-d530-4e31-8b72-64e164aecf85): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:22:47 crc kubenswrapper[4874]: E0217 16:22:47.685894 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-4rz47" podUID="5c1c05ac-d530-4e31-8b72-64e164aecf85" Feb 17 16:22:48 crc kubenswrapper[4874]: E0217 16:22:48.415228 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Feb 17 16:22:48 crc kubenswrapper[4874]: E0217 16:22:48.415732 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n78hfch5f7hb4h5b8h557h54hfdh689h568hbch59bh57bh6dh645h5f8h566h67bhf7h7ch57dh5c5h585h87h669h58bh8ch64dh94h68fhbfh55cq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vwvbt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(9093ae6e-39ee-47ca-b0d2-944be9ce4971): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:22:48 crc kubenswrapper[4874]: E0217 16:22:48.417480 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="9093ae6e-39ee-47ca-b0d2-944be9ce4971" Feb 17 16:22:48 crc kubenswrapper[4874]: E0217 16:22:48.475953 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 17 16:22:48 crc kubenswrapper[4874]: E0217 16:22:48.476108 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9q8w9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevic
e{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-ttkd7_openstack(e9b427a5-a55c-4cf4-a887-57afebb7b570): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:22:48 crc kubenswrapper[4874]: E0217 16:22:48.479293 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-ttkd7" podUID="e9b427a5-a55c-4cf4-a887-57afebb7b570" Feb 17 16:22:48 crc kubenswrapper[4874]: E0217 16:22:48.521436 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="9093ae6e-39ee-47ca-b0d2-944be9ce4971" Feb 17 16:22:48 crc kubenswrapper[4874]: E0217 16:22:48.521519 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-4rz47" podUID="5c1c05ac-d530-4e31-8b72-64e164aecf85" Feb 17 16:22:50 crc kubenswrapper[4874]: E0217 16:22:50.540484 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 17 16:22:50 crc kubenswrapper[4874]: E0217 16:22:50.541321 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r6xr2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(c99a20bb-50d6-4806-ac2a-2e2276d561ef): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:22:50 crc kubenswrapper[4874]: E0217 16:22:50.542564 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="c99a20bb-50d6-4806-ac2a-2e2276d561ef" Feb 17 16:22:50 crc kubenswrapper[4874]: E0217 16:22:50.594240 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 17 16:22:50 crc kubenswrapper[4874]: E0217 16:22:50.594478 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8b8kg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-56kpq_openstack(f91e0920-f0d9-4c7d-8d9e-3af9bf0f2096): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:22:50 crc kubenswrapper[4874]: E0217 16:22:50.595896 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-56kpq" podUID="f91e0920-f0d9-4c7d-8d9e-3af9bf0f2096" Feb 17 16:22:50 crc kubenswrapper[4874]: E0217 16:22:50.626812 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 17 16:22:50 crc kubenswrapper[4874]: E0217 16:22:50.627468 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j7587,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullP
olicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-ktlkv_openstack(9d38985d-0256-4859-9067-9ab1a3af1055): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:22:50 crc kubenswrapper[4874]: E0217 16:22:50.629691 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-ktlkv" podUID="9d38985d-0256-4859-9067-9ab1a3af1055" Feb 17 16:22:50 crc kubenswrapper[4874]: I0217 16:22:50.938611 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ttkd7" Feb 17 16:22:51 crc kubenswrapper[4874]: I0217 16:22:51.048533 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9q8w9\" (UniqueName: \"kubernetes.io/projected/e9b427a5-a55c-4cf4-a887-57afebb7b570-kube-api-access-9q8w9\") pod \"e9b427a5-a55c-4cf4-a887-57afebb7b570\" (UID: \"e9b427a5-a55c-4cf4-a887-57afebb7b570\") " Feb 17 16:22:51 crc kubenswrapper[4874]: I0217 16:22:51.048728 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9b427a5-a55c-4cf4-a887-57afebb7b570-config\") pod \"e9b427a5-a55c-4cf4-a887-57afebb7b570\" (UID: \"e9b427a5-a55c-4cf4-a887-57afebb7b570\") " Feb 17 16:22:51 crc kubenswrapper[4874]: I0217 16:22:51.048767 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9b427a5-a55c-4cf4-a887-57afebb7b570-dns-svc\") pod \"e9b427a5-a55c-4cf4-a887-57afebb7b570\" (UID: \"e9b427a5-a55c-4cf4-a887-57afebb7b570\") " Feb 17 16:22:51 crc kubenswrapper[4874]: I0217 16:22:51.049805 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9b427a5-a55c-4cf4-a887-57afebb7b570-config" (OuterVolumeSpecName: "config") pod "e9b427a5-a55c-4cf4-a887-57afebb7b570" (UID: "e9b427a5-a55c-4cf4-a887-57afebb7b570"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:51 crc kubenswrapper[4874]: I0217 16:22:51.050385 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9b427a5-a55c-4cf4-a887-57afebb7b570-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:51 crc kubenswrapper[4874]: I0217 16:22:51.050515 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9b427a5-a55c-4cf4-a887-57afebb7b570-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e9b427a5-a55c-4cf4-a887-57afebb7b570" (UID: "e9b427a5-a55c-4cf4-a887-57afebb7b570"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:51 crc kubenswrapper[4874]: I0217 16:22:51.055617 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9b427a5-a55c-4cf4-a887-57afebb7b570-kube-api-access-9q8w9" (OuterVolumeSpecName: "kube-api-access-9q8w9") pod "e9b427a5-a55c-4cf4-a887-57afebb7b570" (UID: "e9b427a5-a55c-4cf4-a887-57afebb7b570"). InnerVolumeSpecName "kube-api-access-9q8w9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:22:51 crc kubenswrapper[4874]: I0217 16:22:51.151968 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9q8w9\" (UniqueName: \"kubernetes.io/projected/e9b427a5-a55c-4cf4-a887-57afebb7b570-kube-api-access-9q8w9\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:51 crc kubenswrapper[4874]: I0217 16:22:51.152000 4874 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9b427a5-a55c-4cf4-a887-57afebb7b570-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:51 crc kubenswrapper[4874]: I0217 16:22:51.165025 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f45468f6f-22lbn"] Feb 17 16:22:51 crc kubenswrapper[4874]: W0217 16:22:51.168296 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod962f211e_4cdc_4a53_bc16_3a84d18e69b3.slice/crio-7d61f09920a7fee1cd33d2439d2ac04144366f1151460b25ee654599ec023eb2 WatchSource:0}: Error finding container 7d61f09920a7fee1cd33d2439d2ac04144366f1151460b25ee654599ec023eb2: Status 404 returned error can't find the container with id 7d61f09920a7fee1cd33d2439d2ac04144366f1151460b25ee654599ec023eb2 Feb 17 16:22:51 crc kubenswrapper[4874]: I0217 16:22:51.333644 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:22:51 crc kubenswrapper[4874]: I0217 16:22:51.520455 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-mpn47"] Feb 17 16:22:51 crc kubenswrapper[4874]: W0217 16:22:51.527090 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4771c857_23aa_4647_a63d_d7a1977ffaa4.slice/crio-ccea2f3f2914968bfd822fae00c2bb0fcd07cf18265a6da694cfd93f2c5d1bc2 WatchSource:0}: Error finding 
container ccea2f3f2914968bfd822fae00c2bb0fcd07cf18265a6da694cfd93f2c5d1bc2: Status 404 returned error can't find the container with id ccea2f3f2914968bfd822fae00c2bb0fcd07cf18265a6da694cfd93f2c5d1bc2 Feb 17 16:22:51 crc kubenswrapper[4874]: I0217 16:22:51.545249 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f45468f6f-22lbn" event={"ID":"962f211e-4cdc-4a53-bc16-3a84d18e69b3","Type":"ContainerStarted","Data":"da87b40efcf1bc2e240b778c38c34780bd76d06a20a35d7494f780d524332787"} Feb 17 16:22:51 crc kubenswrapper[4874]: I0217 16:22:51.545287 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f45468f6f-22lbn" event={"ID":"962f211e-4cdc-4a53-bc16-3a84d18e69b3","Type":"ContainerStarted","Data":"7d61f09920a7fee1cd33d2439d2ac04144366f1151460b25ee654599ec023eb2"} Feb 17 16:22:51 crc kubenswrapper[4874]: I0217 16:22:51.547042 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-ttkd7" event={"ID":"e9b427a5-a55c-4cf4-a887-57afebb7b570","Type":"ContainerDied","Data":"b3c8ec63a6f3cfdfd6ae54de8a8aa89a889ddf7d162a4ba88e1a0bb74e01f930"} Feb 17 16:22:51 crc kubenswrapper[4874]: I0217 16:22:51.547154 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ttkd7" Feb 17 16:22:51 crc kubenswrapper[4874]: I0217 16:22:51.552298 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 16:22:51 crc kubenswrapper[4874]: I0217 16:22:51.554810 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115","Type":"ContainerStarted","Data":"39c6ca5de943e2cccd7136cfbea872d2d6363551e0ff88c1bea174cfeb8db85c"} Feb 17 16:22:51 crc kubenswrapper[4874]: I0217 16:22:51.556559 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-mpn47" event={"ID":"4771c857-23aa-4647-a63d-d7a1977ffaa4","Type":"ContainerStarted","Data":"ccea2f3f2914968bfd822fae00c2bb0fcd07cf18265a6da694cfd93f2c5d1bc2"} Feb 17 16:22:51 crc kubenswrapper[4874]: I0217 16:22:51.559541 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9535b3e4-e580-4939-9f0f-f57e7b3946c6","Type":"ContainerStarted","Data":"fbfd2617c560ae9bd62290ac1e0d3028eb60c7a5ec227c02d02db04a3761f615"} Feb 17 16:22:51 crc kubenswrapper[4874]: E0217 16:22:51.564647 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-ktlkv" podUID="9d38985d-0256-4859-9067-9ab1a3af1055" Feb 17 16:22:51 crc kubenswrapper[4874]: I0217 16:22:51.572880 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f45468f6f-22lbn" podStartSLOduration=22.572859883 podStartE2EDuration="22.572859883s" podCreationTimestamp="2026-02-17 16:22:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-17 16:22:51.567148152 +0000 UTC m=+1181.861536723" watchObservedRunningTime="2026-02-17 16:22:51.572859883 +0000 UTC m=+1181.867248444" Feb 17 16:22:51 crc kubenswrapper[4874]: I0217 16:22:51.748236 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ttkd7"] Feb 17 16:22:51 crc kubenswrapper[4874]: I0217 16:22:51.762492 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ttkd7"] Feb 17 16:22:51 crc kubenswrapper[4874]: W0217 16:22:51.767297 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2b366ca_9778_45e9_8d34_5708857a85cc.slice/crio-8869ede9fe79a9c59d22633c65e51933dab2aaf7b887a003ed8bd9d728acb0c0 WatchSource:0}: Error finding container 8869ede9fe79a9c59d22633c65e51933dab2aaf7b887a003ed8bd9d728acb0c0: Status 404 returned error can't find the container with id 8869ede9fe79a9c59d22633c65e51933dab2aaf7b887a003ed8bd9d728acb0c0 Feb 17 16:22:51 crc kubenswrapper[4874]: I0217 16:22:51.770242 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-tpgc2"] Feb 17 16:22:51 crc kubenswrapper[4874]: I0217 16:22:51.784376 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 17 16:22:51 crc kubenswrapper[4874]: I0217 16:22:51.928649 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-56kpq" Feb 17 16:22:51 crc kubenswrapper[4874]: I0217 16:22:51.988436 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f91e0920-f0d9-4c7d-8d9e-3af9bf0f2096-config\") pod \"f91e0920-f0d9-4c7d-8d9e-3af9bf0f2096\" (UID: \"f91e0920-f0d9-4c7d-8d9e-3af9bf0f2096\") " Feb 17 16:22:51 crc kubenswrapper[4874]: I0217 16:22:51.988603 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8b8kg\" (UniqueName: \"kubernetes.io/projected/f91e0920-f0d9-4c7d-8d9e-3af9bf0f2096-kube-api-access-8b8kg\") pod \"f91e0920-f0d9-4c7d-8d9e-3af9bf0f2096\" (UID: \"f91e0920-f0d9-4c7d-8d9e-3af9bf0f2096\") " Feb 17 16:22:51 crc kubenswrapper[4874]: I0217 16:22:51.988949 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f91e0920-f0d9-4c7d-8d9e-3af9bf0f2096-config" (OuterVolumeSpecName: "config") pod "f91e0920-f0d9-4c7d-8d9e-3af9bf0f2096" (UID: "f91e0920-f0d9-4c7d-8d9e-3af9bf0f2096"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:51 crc kubenswrapper[4874]: I0217 16:22:51.989604 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f91e0920-f0d9-4c7d-8d9e-3af9bf0f2096-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:52 crc kubenswrapper[4874]: I0217 16:22:52.039178 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f91e0920-f0d9-4c7d-8d9e-3af9bf0f2096-kube-api-access-8b8kg" (OuterVolumeSpecName: "kube-api-access-8b8kg") pod "f91e0920-f0d9-4c7d-8d9e-3af9bf0f2096" (UID: "f91e0920-f0d9-4c7d-8d9e-3af9bf0f2096"). InnerVolumeSpecName "kube-api-access-8b8kg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:22:52 crc kubenswrapper[4874]: I0217 16:22:52.091223 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8b8kg\" (UniqueName: \"kubernetes.io/projected/f91e0920-f0d9-4c7d-8d9e-3af9bf0f2096-kube-api-access-8b8kg\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:52 crc kubenswrapper[4874]: I0217 16:22:52.338461 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 17 16:22:52 crc kubenswrapper[4874]: W0217 16:22:52.339318 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf95c3b85_c546_47d7_9b75_7577455ab464.slice/crio-9ff79fcfd721e0864b3f05b2b4d02a7dc1a69f82db0915535ff90ebe04343ea3 WatchSource:0}: Error finding container 9ff79fcfd721e0864b3f05b2b4d02a7dc1a69f82db0915535ff90ebe04343ea3: Status 404 returned error can't find the container with id 9ff79fcfd721e0864b3f05b2b4d02a7dc1a69f82db0915535ff90ebe04343ea3 Feb 17 16:22:52 crc kubenswrapper[4874]: I0217 16:22:52.433184 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-pzc25"] Feb 17 16:22:52 crc kubenswrapper[4874]: W0217 16:22:52.451595 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80cf5dc3_e4d1_4d7c_b598_36a083080a66.slice/crio-22a2b0f43bd4aaeed746b2cf863fa259d2f8f2c207cefd515b438fe4e489c291 WatchSource:0}: Error finding container 22a2b0f43bd4aaeed746b2cf863fa259d2f8f2c207cefd515b438fe4e489c291: Status 404 returned error can't find the container with id 22a2b0f43bd4aaeed746b2cf863fa259d2f8f2c207cefd515b438fe4e489c291 Feb 17 16:22:52 crc kubenswrapper[4874]: I0217 16:22:52.473151 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9b427a5-a55c-4cf4-a887-57afebb7b570" path="/var/lib/kubelet/pods/e9b427a5-a55c-4cf4-a887-57afebb7b570/volumes" Feb 17 16:22:52 
crc kubenswrapper[4874]: I0217 16:22:52.568767 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"37707c24-e133-484d-955f-57a20ec147b1","Type":"ContainerStarted","Data":"1eb6dabf17b342d2327164ae121cc80c313bb12e86bc551602ad09c3ceea3b65"} Feb 17 16:22:52 crc kubenswrapper[4874]: I0217 16:22:52.570549 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c2b366ca-9778-45e9-8d34-5708857a85cc","Type":"ContainerStarted","Data":"8869ede9fe79a9c59d22633c65e51933dab2aaf7b887a003ed8bd9d728acb0c0"} Feb 17 16:22:52 crc kubenswrapper[4874]: I0217 16:22:52.572834 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e1154a55-d86f-4c56-82d4-4d63c35feceb","Type":"ContainerStarted","Data":"58cf84eae3160566daf9887139d16777e230e2d4df3e92982647837fc586762a"} Feb 17 16:22:52 crc kubenswrapper[4874]: I0217 16:22:52.578432 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-56kpq" event={"ID":"f91e0920-f0d9-4c7d-8d9e-3af9bf0f2096","Type":"ContainerDied","Data":"69b59afd156c7ceb1e929560d5004fca6b52b8835678cb85fde6193e0c910ade"} Feb 17 16:22:52 crc kubenswrapper[4874]: I0217 16:22:52.578512 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-56kpq" Feb 17 16:22:52 crc kubenswrapper[4874]: I0217 16:22:52.580989 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c99a20bb-50d6-4806-ac2a-2e2276d561ef","Type":"ContainerStarted","Data":"a2089d7a993cb279d8f5af5524dc878f22bdb6d2b4c39f6bf909a6cde06144c9"} Feb 17 16:22:52 crc kubenswrapper[4874]: I0217 16:22:52.583529 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f95c3b85-c546-47d7-9b75-7577455ab464","Type":"ContainerStarted","Data":"9ff79fcfd721e0864b3f05b2b4d02a7dc1a69f82db0915535ff90ebe04343ea3"} Feb 17 16:22:52 crc kubenswrapper[4874]: I0217 16:22:52.586391 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-tpgc2" event={"ID":"4132e8e3-7498-4df0-9d6d-2dd7c096218a","Type":"ContainerStarted","Data":"38e7f05fe3f625d50d182e86c873e3f248edde7755825dcb8fbf0e12380a3199"} Feb 17 16:22:52 crc kubenswrapper[4874]: I0217 16:22:52.594942 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pzc25" event={"ID":"80cf5dc3-e4d1-4d7c-b598-36a083080a66","Type":"ContainerStarted","Data":"22a2b0f43bd4aaeed746b2cf863fa259d2f8f2c207cefd515b438fe4e489c291"} Feb 17 16:22:52 crc kubenswrapper[4874]: I0217 16:22:52.672913 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-56kpq"] Feb 17 16:22:52 crc kubenswrapper[4874]: I0217 16:22:52.672956 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-56kpq"] Feb 17 16:22:54 crc kubenswrapper[4874]: I0217 16:22:54.470458 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f91e0920-f0d9-4c7d-8d9e-3af9bf0f2096" path="/var/lib/kubelet/pods/f91e0920-f0d9-4c7d-8d9e-3af9bf0f2096/volumes" Feb 17 16:22:55 crc kubenswrapper[4874]: I0217 16:22:55.630645 4874 generic.go:334] "Generic (PLEG): container finished" 
podID="9535b3e4-e580-4939-9f0f-f57e7b3946c6" containerID="fbfd2617c560ae9bd62290ac1e0d3028eb60c7a5ec227c02d02db04a3761f615" exitCode=0 Feb 17 16:22:55 crc kubenswrapper[4874]: I0217 16:22:55.631096 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9535b3e4-e580-4939-9f0f-f57e7b3946c6","Type":"ContainerDied","Data":"fbfd2617c560ae9bd62290ac1e0d3028eb60c7a5ec227c02d02db04a3761f615"} Feb 17 16:22:56 crc kubenswrapper[4874]: I0217 16:22:56.646756 4874 generic.go:334] "Generic (PLEG): container finished" podID="c99a20bb-50d6-4806-ac2a-2e2276d561ef" containerID="a2089d7a993cb279d8f5af5524dc878f22bdb6d2b4c39f6bf909a6cde06144c9" exitCode=0 Feb 17 16:22:56 crc kubenswrapper[4874]: I0217 16:22:56.646797 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c99a20bb-50d6-4806-ac2a-2e2276d561ef","Type":"ContainerDied","Data":"a2089d7a993cb279d8f5af5524dc878f22bdb6d2b4c39f6bf909a6cde06144c9"} Feb 17 16:22:57 crc kubenswrapper[4874]: I0217 16:22:57.724520 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:22:57 crc kubenswrapper[4874]: I0217 16:22:57.724811 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:22:59 crc kubenswrapper[4874]: I0217 16:22:59.698792 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"f95c3b85-c546-47d7-9b75-7577455ab464","Type":"ContainerStarted","Data":"3a0462c2496b3ecf41208e2e554b0e9c46069aa15e21e6f08cd19b93f2c3db7c"} Feb 17 16:22:59 crc kubenswrapper[4874]: I0217 16:22:59.702471 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-tpgc2" event={"ID":"4132e8e3-7498-4df0-9d6d-2dd7c096218a","Type":"ContainerStarted","Data":"1c9295ee073f81b708d5f4c615c41369b257440907bf97b4b6f737338649e6da"} Feb 17 16:22:59 crc kubenswrapper[4874]: I0217 16:22:59.704982 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-tpgc2" Feb 17 16:22:59 crc kubenswrapper[4874]: I0217 16:22:59.732760 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-mpn47" event={"ID":"4771c857-23aa-4647-a63d-d7a1977ffaa4","Type":"ContainerStarted","Data":"9085baef7d7d2ee39537868d9432efb64978af2c3bc1d34e83c58ff366765e66"} Feb 17 16:22:59 crc kubenswrapper[4874]: I0217 16:22:59.755824 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-tpgc2" podStartSLOduration=22.120507057 podStartE2EDuration="28.755799365s" podCreationTimestamp="2026-02-17 16:22:31 +0000 UTC" firstStartedPulling="2026-02-17 16:22:51.840457748 +0000 UTC m=+1182.134846309" lastFinishedPulling="2026-02-17 16:22:58.475750046 +0000 UTC m=+1188.770138617" observedRunningTime="2026-02-17 16:22:59.729439853 +0000 UTC m=+1190.023828444" watchObservedRunningTime="2026-02-17 16:22:59.755799365 +0000 UTC m=+1190.050187926" Feb 17 16:22:59 crc kubenswrapper[4874]: I0217 16:22:59.764345 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-mpn47" podStartSLOduration=24.434274556 podStartE2EDuration="30.764320196s" podCreationTimestamp="2026-02-17 16:22:29 +0000 UTC" firstStartedPulling="2026-02-17 16:22:51.530700251 +0000 UTC m=+1181.825088822" 
lastFinishedPulling="2026-02-17 16:22:57.860745901 +0000 UTC m=+1188.155134462" observedRunningTime="2026-02-17 16:22:59.761279441 +0000 UTC m=+1190.055668002" watchObservedRunningTime="2026-02-17 16:22:59.764320196 +0000 UTC m=+1190.058708767" Feb 17 16:22:59 crc kubenswrapper[4874]: I0217 16:22:59.772832 4874 generic.go:334] "Generic (PLEG): container finished" podID="80cf5dc3-e4d1-4d7c-b598-36a083080a66" containerID="32d9a89bd15ffe4ebf20b955a1c5b4377d8c7fddee63e1341167af71a8963c49" exitCode=0 Feb 17 16:22:59 crc kubenswrapper[4874]: I0217 16:22:59.772937 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pzc25" event={"ID":"80cf5dc3-e4d1-4d7c-b598-36a083080a66","Type":"ContainerDied","Data":"32d9a89bd15ffe4ebf20b955a1c5b4377d8c7fddee63e1341167af71a8963c49"} Feb 17 16:22:59 crc kubenswrapper[4874]: I0217 16:22:59.816021 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c2b366ca-9778-45e9-8d34-5708857a85cc","Type":"ContainerStarted","Data":"9618ec1cd9c591842725f59a8f8dad9678d63faf48d5877617b5c05acad03bd2"} Feb 17 16:22:59 crc kubenswrapper[4874]: I0217 16:22:59.850375 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e1154a55-d86f-4c56-82d4-4d63c35feceb","Type":"ContainerStarted","Data":"0d58ddd625d4c25c64df1ab80aced90db1da26189e8af0b91a8bc1eedb191b60"} Feb 17 16:22:59 crc kubenswrapper[4874]: I0217 16:22:59.852264 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 17 16:22:59 crc kubenswrapper[4874]: I0217 16:22:59.886038 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=24.599161622 podStartE2EDuration="31.886012859s" podCreationTimestamp="2026-02-17 16:22:28 +0000 UTC" firstStartedPulling="2026-02-17 16:22:51.566969058 +0000 UTC m=+1181.861357619" lastFinishedPulling="2026-02-17 
16:22:58.853820295 +0000 UTC m=+1189.148208856" observedRunningTime="2026-02-17 16:22:59.88161296 +0000 UTC m=+1190.176001521" watchObservedRunningTime="2026-02-17 16:22:59.886012859 +0000 UTC m=+1190.180401430"
Feb 17 16:22:59 crc kubenswrapper[4874]: I0217 16:22:59.900090 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9535b3e4-e580-4939-9f0f-f57e7b3946c6","Type":"ContainerStarted","Data":"ec970b19bda3e986c2f60fdfda5cae858ea604f0b813859214b37dc65d7d7cca"}
Feb 17 16:22:59 crc kubenswrapper[4874]: I0217 16:22:59.912096 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c99a20bb-50d6-4806-ac2a-2e2276d561ef","Type":"ContainerStarted","Data":"721070efc1be3fd47157ce48d4fddb829ea4fe2c5c49e8e08ace56903657c47c"}
Feb 17 16:22:59 crc kubenswrapper[4874]: I0217 16:22:59.929473 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=19.10184194 podStartE2EDuration="35.929458494s" podCreationTimestamp="2026-02-17 16:22:24 +0000 UTC" firstStartedPulling="2026-02-17 16:22:33.852874188 +0000 UTC m=+1164.147262749" lastFinishedPulling="2026-02-17 16:22:50.680490742 +0000 UTC m=+1180.974879303" observedRunningTime="2026-02-17 16:22:59.922387929 +0000 UTC m=+1190.216776490" watchObservedRunningTime="2026-02-17 16:22:59.929458494 +0000 UTC m=+1190.223847055"
Feb 17 16:22:59 crc kubenswrapper[4874]: I0217 16:22:59.960534 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371999.894266 podStartE2EDuration="36.960510253s" podCreationTimestamp="2026-02-17 16:22:23 +0000 UTC" firstStartedPulling="2026-02-17 16:22:25.700220447 +0000 UTC m=+1155.994608998" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:22:59.948064745 +0000 UTC m=+1190.242453316" watchObservedRunningTime="2026-02-17 16:22:59.960510253 +0000 UTC m=+1190.254898814"
Feb 17 16:23:00 crc kubenswrapper[4874]: I0217 16:23:00.327433 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f45468f6f-22lbn"
Feb 17 16:23:00 crc kubenswrapper[4874]: I0217 16:23:00.328576 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f45468f6f-22lbn"
Feb 17 16:23:00 crc kubenswrapper[4874]: I0217 16:23:00.332616 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f45468f6f-22lbn"
Feb 17 16:23:00 crc kubenswrapper[4874]: I0217 16:23:00.921643 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"476813ee-f26a-4068-a5e9-87b5a20fece5","Type":"ContainerStarted","Data":"d0a20d9d2bae0c7e825b68fea651ef557c11736643886e7d9fc0aae9bd75ea87"}
Feb 17 16:23:00 crc kubenswrapper[4874]: I0217 16:23:00.922908 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"9093ae6e-39ee-47ca-b0d2-944be9ce4971","Type":"ContainerStarted","Data":"112e4ea30a2aaa7324418039ed94bb116c913c3e48aec3649594ba6a5dfd30f8"}
Feb 17 16:23:00 crc kubenswrapper[4874]: I0217 16:23:00.923113 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Feb 17 16:23:00 crc kubenswrapper[4874]: I0217 16:23:00.929717 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7eb994d5-6ecb-4a2d-bafc-86c9f107802c","Type":"ContainerStarted","Data":"a034db3c1ea552620fa0691a9a874a0d6c8f47608b6b427f485aa0e509c86b20"}
Feb 17 16:23:00 crc kubenswrapper[4874]: I0217 16:23:00.934099 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pzc25" event={"ID":"80cf5dc3-e4d1-4d7c-b598-36a083080a66","Type":"ContainerStarted","Data":"841d93b2fb5cb79e526615e6988213c4104f8c18af099780bf31a3867844a0e4"}
Feb 17 16:23:00
crc kubenswrapper[4874]: I0217 16:23:00.940605 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f45468f6f-22lbn"
Feb 17 16:23:01 crc kubenswrapper[4874]: I0217 16:23:01.040762 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5c65ff7679-2cmfs"]
Feb 17 16:23:01 crc kubenswrapper[4874]: I0217 16:23:01.076599 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=8.932539047 podStartE2EDuration="35.076576854s" podCreationTimestamp="2026-02-17 16:22:26 +0000 UTC" firstStartedPulling="2026-02-17 16:22:33.85216573 +0000 UTC m=+1164.146554291" lastFinishedPulling="2026-02-17 16:22:59.996203537 +0000 UTC m=+1190.290592098" observedRunningTime="2026-02-17 16:23:01.04696441 +0000 UTC m=+1191.341352971" watchObservedRunningTime="2026-02-17 16:23:01.076576854 +0000 UTC m=+1191.370965415"
Feb 17 16:23:01 crc kubenswrapper[4874]: I0217 16:23:01.950480 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115","Type":"ContainerStarted","Data":"1a87819e09eaea64427f7e197d0a167a1165fd8d7a26e1ea28d3e7aa5a7ce4f6"}
Feb 17 16:23:02 crc kubenswrapper[4874]: E0217 16:23:02.940328 4874 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.73:57014->38.102.83.73:34183: write tcp 38.102.83.73:57014->38.102.83.73:34183: write: broken pipe
Feb 17 16:23:02 crc kubenswrapper[4874]: I0217 16:23:02.978026 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pzc25" event={"ID":"80cf5dc3-e4d1-4d7c-b598-36a083080a66","Type":"ContainerStarted","Data":"40d54461e5bd766cdeee3fffc43d59d94bb4f32ed88d28e4e708e238638dad15"}
Feb 17 16:23:02 crc kubenswrapper[4874]: I0217 16:23:02.978567 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-pzc25"
Feb 17 16:23:02 crc kubenswrapper[4874]: I0217 16:23:02.978603 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-pzc25"
Feb 17 16:23:02 crc kubenswrapper[4874]: I0217 16:23:02.981979 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c2b366ca-9778-45e9-8d34-5708857a85cc","Type":"ContainerStarted","Data":"718854513e9c6f662f9b2a7899428bff61d6dc81df2c605e929307dc8c219f97"}
Feb 17 16:23:02 crc kubenswrapper[4874]: I0217 16:23:02.984318 4874 generic.go:334] "Generic (PLEG): container finished" podID="5c1c05ac-d530-4e31-8b72-64e164aecf85" containerID="d10af8138fa63d60ee06e5868d8214e7cf66628cf2a96ae8c01059bd287b553a" exitCode=0
Feb 17 16:23:02 crc kubenswrapper[4874]: I0217 16:23:02.984411 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-4rz47" event={"ID":"5c1c05ac-d530-4e31-8b72-64e164aecf85","Type":"ContainerDied","Data":"d10af8138fa63d60ee06e5868d8214e7cf66628cf2a96ae8c01059bd287b553a"}
Feb 17 16:23:02 crc kubenswrapper[4874]: I0217 16:23:02.988156 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f95c3b85-c546-47d7-9b75-7577455ab464","Type":"ContainerStarted","Data":"c11b4488de91e81315dc09747fde306a28dd15c116248181cc1fdc91ade2c1f9"}
Feb 17 16:23:03 crc kubenswrapper[4874]: I0217 16:23:03.024663 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-pzc25" podStartSLOduration=25.989482529 podStartE2EDuration="32.0246446s" podCreationTimestamp="2026-02-17 16:22:31 +0000 UTC" firstStartedPulling="2026-02-17 16:22:52.454697645 +0000 UTC m=+1182.749086206" lastFinishedPulling="2026-02-17 16:22:58.489859696 +0000 UTC m=+1188.784248277" observedRunningTime="2026-02-17 16:23:03.008372868 +0000 UTC m=+1193.302761429" watchObservedRunningTime="2026-02-17 16:23:03.0246446 +0000 UTC m=+1193.319033161"
Feb 17 16:23:03 crc
kubenswrapper[4874]: I0217 16:23:03.101528 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=19.087687124 podStartE2EDuration="29.101497143s" podCreationTimestamp="2026-02-17 16:22:34 +0000 UTC" firstStartedPulling="2026-02-17 16:22:51.769366708 +0000 UTC m=+1182.063755269" lastFinishedPulling="2026-02-17 16:23:01.783176727 +0000 UTC m=+1192.077565288" observedRunningTime="2026-02-17 16:23:03.062158049 +0000 UTC m=+1193.356546630" watchObservedRunningTime="2026-02-17 16:23:03.101497143 +0000 UTC m=+1193.395885704"
Feb 17 16:23:03 crc kubenswrapper[4874]: I0217 16:23:03.105669 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=22.70914233 podStartE2EDuration="32.105657256s" podCreationTimestamp="2026-02-17 16:22:31 +0000 UTC" firstStartedPulling="2026-02-17 16:22:52.346209139 +0000 UTC m=+1182.640597710" lastFinishedPulling="2026-02-17 16:23:01.742724075 +0000 UTC m=+1192.037112636" observedRunningTime="2026-02-17 16:23:03.084217295 +0000 UTC m=+1193.378605886" watchObservedRunningTime="2026-02-17 16:23:03.105657256 +0000 UTC m=+1193.400045817"
Feb 17 16:23:04 crc kubenswrapper[4874]: I0217 16:23:04.002927 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"ed7dc41e-9863-4c74-8675-56fca22db08a","Type":"ContainerStarted","Data":"b3ac77bc86427c90e262709afcc20a0f4309054628175d07990407541cb4f97c"}
Feb 17 16:23:04 crc kubenswrapper[4874]: I0217 16:23:04.004728 4874 generic.go:334] "Generic (PLEG): container finished" podID="9d38985d-0256-4859-9067-9ab1a3af1055" containerID="298b15eda65381c793ee46ba486cab65bde03087024d9e0c678e4897a6f7cce2" exitCode=0
Feb 17 16:23:04 crc kubenswrapper[4874]: I0217 16:23:04.004859 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-ktlkv" event={"ID":"9d38985d-0256-4859-9067-9ab1a3af1055","Type":"ContainerDied","Data":"298b15eda65381c793ee46ba486cab65bde03087024d9e0c678e4897a6f7cce2"}
Feb 17 16:23:04 crc kubenswrapper[4874]: I0217 16:23:04.008188 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-4rz47" event={"ID":"5c1c05ac-d530-4e31-8b72-64e164aecf85","Type":"ContainerStarted","Data":"9424c73f428c0956b6561a3b66ce5376ee3777772930a472de635abd1d8e9bf0"}
Feb 17 16:23:04 crc kubenswrapper[4874]: I0217 16:23:04.009635 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-4rz47"
Feb 17 16:23:04 crc kubenswrapper[4874]: I0217 16:23:04.073383 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-4rz47" podStartSLOduration=3.6550858489999998 podStartE2EDuration="42.073355973s" podCreationTimestamp="2026-02-17 16:22:22 +0000 UTC" firstStartedPulling="2026-02-17 16:22:23.325579489 +0000 UTC m=+1153.619968050" lastFinishedPulling="2026-02-17 16:23:01.743849593 +0000 UTC m=+1192.038238174" observedRunningTime="2026-02-17 16:23:04.061713065 +0000 UTC m=+1194.356101666" watchObservedRunningTime="2026-02-17 16:23:04.073355973 +0000 UTC m=+1194.367744574"
Feb 17 16:23:04 crc kubenswrapper[4874]: I0217 16:23:04.869165 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Feb 17 16:23:04 crc kubenswrapper[4874]: I0217 16:23:04.869678 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Feb 17 16:23:05 crc kubenswrapper[4874]: I0217 16:23:05.020407 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-ktlkv" event={"ID":"9d38985d-0256-4859-9067-9ab1a3af1055","Type":"ContainerStarted","Data":"8b671182d3e7e5ca45dc6fe4724c43aa4fc63fa69304ad3bb5c8fbfd6aaba5ef"}
Feb 17 16:23:05 crc kubenswrapper[4874]: I0217 16:23:05.020958 4874
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-ktlkv"
Feb 17 16:23:05 crc kubenswrapper[4874]: I0217 16:23:05.047199 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-ktlkv" podStartSLOduration=-9223371992.8076 podStartE2EDuration="44.047175642s" podCreationTimestamp="2026-02-17 16:22:21 +0000 UTC" firstStartedPulling="2026-02-17 16:22:22.956236205 +0000 UTC m=+1153.250624766" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:23:05.042169578 +0000 UTC m=+1195.336558189" watchObservedRunningTime="2026-02-17 16:23:05.047175642 +0000 UTC m=+1195.341564223"
Feb 17 16:23:05 crc kubenswrapper[4874]: I0217 16:23:05.601842 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Feb 17 16:23:05 crc kubenswrapper[4874]: I0217 16:23:05.654116 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Feb 17 16:23:05 crc kubenswrapper[4874]: I0217 16:23:05.884995 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Feb 17 16:23:05 crc kubenswrapper[4874]: I0217 16:23:05.885037 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Feb 17 16:23:05 crc kubenswrapper[4874]: I0217 16:23:05.946683 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.034899 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.074787 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.084895 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.261764 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ktlkv"]
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.295438 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-864zn"]
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.297034 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-864zn"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.299668 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.355247 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhkwv\" (UniqueName: \"kubernetes.io/projected/2b9035e4-51ae-4ed9-a708-6285df982d94-kube-api-access-jhkwv\") pod \"dnsmasq-dns-6bc7876d45-864zn\" (UID: \"2b9035e4-51ae-4ed9-a708-6285df982d94\") " pod="openstack/dnsmasq-dns-6bc7876d45-864zn"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.355371 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b9035e4-51ae-4ed9-a708-6285df982d94-config\") pod \"dnsmasq-dns-6bc7876d45-864zn\" (UID: \"2b9035e4-51ae-4ed9-a708-6285df982d94\") " pod="openstack/dnsmasq-dns-6bc7876d45-864zn"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.355392 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b9035e4-51ae-4ed9-a708-6285df982d94-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-864zn\" (UID: \"2b9035e4-51ae-4ed9-a708-6285df982d94\") " pod="openstack/dnsmasq-dns-6bc7876d45-864zn"
Feb 17 16:23:06 crc kubenswrapper[4874]:
I0217 16:23:06.355437 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b9035e4-51ae-4ed9-a708-6285df982d94-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-864zn\" (UID: \"2b9035e4-51ae-4ed9-a708-6285df982d94\") " pod="openstack/dnsmasq-dns-6bc7876d45-864zn"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.383218 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-rb8lr"]
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.384735 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-rb8lr"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.386503 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-rb8lr"]
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.391256 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.422372 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-864zn"]
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.457548 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8a7189b3-10c5-4fe6-99c9-f3ec64fe159b-ovs-rundir\") pod \"ovn-controller-metrics-rb8lr\" (UID: \"8a7189b3-10c5-4fe6-99c9-f3ec64fe159b\") " pod="openstack/ovn-controller-metrics-rb8lr"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.458512 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a7189b3-10c5-4fe6-99c9-f3ec64fe159b-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-rb8lr\" (UID: \"8a7189b3-10c5-4fe6-99c9-f3ec64fe159b\") " pod="openstack/ovn-controller-metrics-rb8lr"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.459093 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a7189b3-10c5-4fe6-99c9-f3ec64fe159b-combined-ca-bundle\") pod \"ovn-controller-metrics-rb8lr\" (UID: \"8a7189b3-10c5-4fe6-99c9-f3ec64fe159b\") " pod="openstack/ovn-controller-metrics-rb8lr"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.459218 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2djjz\" (UniqueName: \"kubernetes.io/projected/8a7189b3-10c5-4fe6-99c9-f3ec64fe159b-kube-api-access-2djjz\") pod \"ovn-controller-metrics-rb8lr\" (UID: \"8a7189b3-10c5-4fe6-99c9-f3ec64fe159b\") " pod="openstack/ovn-controller-metrics-rb8lr"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.459291 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhkwv\" (UniqueName: \"kubernetes.io/projected/2b9035e4-51ae-4ed9-a708-6285df982d94-kube-api-access-jhkwv\") pod \"dnsmasq-dns-6bc7876d45-864zn\" (UID: \"2b9035e4-51ae-4ed9-a708-6285df982d94\") " pod="openstack/dnsmasq-dns-6bc7876d45-864zn"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.459484 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8a7189b3-10c5-4fe6-99c9-f3ec64fe159b-ovn-rundir\") pod \"ovn-controller-metrics-rb8lr\" (UID: \"8a7189b3-10c5-4fe6-99c9-f3ec64fe159b\") " pod="openstack/ovn-controller-metrics-rb8lr"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.459614 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b9035e4-51ae-4ed9-a708-6285df982d94-config\") pod \"dnsmasq-dns-6bc7876d45-864zn\" (UID:
\"2b9035e4-51ae-4ed9-a708-6285df982d94\") " pod="openstack/dnsmasq-dns-6bc7876d45-864zn"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.459694 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b9035e4-51ae-4ed9-a708-6285df982d94-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-864zn\" (UID: \"2b9035e4-51ae-4ed9-a708-6285df982d94\") " pod="openstack/dnsmasq-dns-6bc7876d45-864zn"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.459778 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b9035e4-51ae-4ed9-a708-6285df982d94-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-864zn\" (UID: \"2b9035e4-51ae-4ed9-a708-6285df982d94\") " pod="openstack/dnsmasq-dns-6bc7876d45-864zn"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.459869 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a7189b3-10c5-4fe6-99c9-f3ec64fe159b-config\") pod \"ovn-controller-metrics-rb8lr\" (UID: \"8a7189b3-10c5-4fe6-99c9-f3ec64fe159b\") " pod="openstack/ovn-controller-metrics-rb8lr"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.461274 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b9035e4-51ae-4ed9-a708-6285df982d94-config\") pod \"dnsmasq-dns-6bc7876d45-864zn\" (UID: \"2b9035e4-51ae-4ed9-a708-6285df982d94\") " pod="openstack/dnsmasq-dns-6bc7876d45-864zn"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.462084 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b9035e4-51ae-4ed9-a708-6285df982d94-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-864zn\" (UID: \"2b9035e4-51ae-4ed9-a708-6285df982d94\") " pod="openstack/dnsmasq-dns-6bc7876d45-864zn"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.462512 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b9035e4-51ae-4ed9-a708-6285df982d94-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-864zn\" (UID: \"2b9035e4-51ae-4ed9-a708-6285df982d94\") " pod="openstack/dnsmasq-dns-6bc7876d45-864zn"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.505834 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.505863 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.524243 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhkwv\" (UniqueName: \"kubernetes.io/projected/2b9035e4-51ae-4ed9-a708-6285df982d94-kube-api-access-jhkwv\") pod \"dnsmasq-dns-6bc7876d45-864zn\" (UID: \"2b9035e4-51ae-4ed9-a708-6285df982d94\") " pod="openstack/dnsmasq-dns-6bc7876d45-864zn"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.541948 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.543403 4874 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/ovn-northd-0"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.562379 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a7189b3-10c5-4fe6-99c9-f3ec64fe159b-config\") pod \"ovn-controller-metrics-rb8lr\" (UID: \"8a7189b3-10c5-4fe6-99c9-f3ec64fe159b\") " pod="openstack/ovn-controller-metrics-rb8lr"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.565095 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8a7189b3-10c5-4fe6-99c9-f3ec64fe159b-ovs-rundir\") pod \"ovn-controller-metrics-rb8lr\" (UID: \"8a7189b3-10c5-4fe6-99c9-f3ec64fe159b\") " pod="openstack/ovn-controller-metrics-rb8lr"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.565270 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a7189b3-10c5-4fe6-99c9-f3ec64fe159b-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-rb8lr\" (UID: \"8a7189b3-10c5-4fe6-99c9-f3ec64fe159b\") " pod="openstack/ovn-controller-metrics-rb8lr"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.565448 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a7189b3-10c5-4fe6-99c9-f3ec64fe159b-combined-ca-bundle\") pod \"ovn-controller-metrics-rb8lr\" (UID: \"8a7189b3-10c5-4fe6-99c9-f3ec64fe159b\") " pod="openstack/ovn-controller-metrics-rb8lr"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.565594 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2djjz\" (UniqueName: \"kubernetes.io/projected/8a7189b3-10c5-4fe6-99c9-f3ec64fe159b-kube-api-access-2djjz\") pod \"ovn-controller-metrics-rb8lr\" (UID: \"8a7189b3-10c5-4fe6-99c9-f3ec64fe159b\") " pod="openstack/ovn-controller-metrics-rb8lr"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.565827 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8a7189b3-10c5-4fe6-99c9-f3ec64fe159b-ovn-rundir\") pod \"ovn-controller-metrics-rb8lr\" (UID: \"8a7189b3-10c5-4fe6-99c9-f3ec64fe159b\") " pod="openstack/ovn-controller-metrics-rb8lr"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.566281 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8a7189b3-10c5-4fe6-99c9-f3ec64fe159b-ovn-rundir\") pod \"ovn-controller-metrics-rb8lr\" (UID: \"8a7189b3-10c5-4fe6-99c9-f3ec64fe159b\") " pod="openstack/ovn-controller-metrics-rb8lr"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.566922 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a7189b3-10c5-4fe6-99c9-f3ec64fe159b-config\") pod \"ovn-controller-metrics-rb8lr\" (UID: \"8a7189b3-10c5-4fe6-99c9-f3ec64fe159b\") " pod="openstack/ovn-controller-metrics-rb8lr"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.566945 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4rz47"]
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.567293 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-4rz47" podUID="5c1c05ac-d530-4e31-8b72-64e164aecf85" containerName="dnsmasq-dns" containerID="cri-o://9424c73f428c0956b6561a3b66ce5376ee3777772930a472de635abd1d8e9bf0" gracePeriod=10
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.568245 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8a7189b3-10c5-4fe6-99c9-f3ec64fe159b-ovs-rundir\") pod \"ovn-controller-metrics-rb8lr\" (UID:
\"8a7189b3-10c5-4fe6-99c9-f3ec64fe159b\") " pod="openstack/ovn-controller-metrics-rb8lr"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.570500 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-t5xg4"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.570739 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.570940 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.571266 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.599580 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a7189b3-10c5-4fe6-99c9-f3ec64fe159b-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-rb8lr\" (UID: \"8a7189b3-10c5-4fe6-99c9-f3ec64fe159b\") " pod="openstack/ovn-controller-metrics-rb8lr"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.619016 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a7189b3-10c5-4fe6-99c9-f3ec64fe159b-combined-ca-bundle\") pod \"ovn-controller-metrics-rb8lr\" (UID: \"8a7189b3-10c5-4fe6-99c9-f3ec64fe159b\") " pod="openstack/ovn-controller-metrics-rb8lr"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.629851 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-864zn"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.629878 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2djjz\" (UniqueName: \"kubernetes.io/projected/8a7189b3-10c5-4fe6-99c9-f3ec64fe159b-kube-api-access-2djjz\") pod \"ovn-controller-metrics-rb8lr\" (UID: \"8a7189b3-10c5-4fe6-99c9-f3ec64fe159b\") " pod="openstack/ovn-controller-metrics-rb8lr"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.631124 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.645542 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.705462 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/48dbc25d-e454-452c-9912-f08d7569ecfa-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"48dbc25d-e454-452c-9912-f08d7569ecfa\") " pod="openstack/ovn-northd-0"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.705550 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48dbc25d-e454-452c-9912-f08d7569ecfa-config\") pod \"ovn-northd-0\" (UID: \"48dbc25d-e454-452c-9912-f08d7569ecfa\") " pod="openstack/ovn-northd-0"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.705576 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkvhj\" (UniqueName: \"kubernetes.io/projected/48dbc25d-e454-452c-9912-f08d7569ecfa-kube-api-access-dkvhj\") pod \"ovn-northd-0\" (UID: \"48dbc25d-e454-452c-9912-f08d7569ecfa\") " pod="openstack/ovn-northd-0"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.705598 4874
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48dbc25d-e454-452c-9912-f08d7569ecfa-scripts\") pod \"ovn-northd-0\" (UID: \"48dbc25d-e454-452c-9912-f08d7569ecfa\") " pod="openstack/ovn-northd-0"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.705623 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/48dbc25d-e454-452c-9912-f08d7569ecfa-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"48dbc25d-e454-452c-9912-f08d7569ecfa\") " pod="openstack/ovn-northd-0"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.705660 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48dbc25d-e454-452c-9912-f08d7569ecfa-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"48dbc25d-e454-452c-9912-f08d7569ecfa\") " pod="openstack/ovn-northd-0"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.705797 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/48dbc25d-e454-452c-9912-f08d7569ecfa-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"48dbc25d-e454-452c-9912-f08d7569ecfa\") " pod="openstack/ovn-northd-0"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.730736 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-rb8lr"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.811070 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/48dbc25d-e454-452c-9912-f08d7569ecfa-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"48dbc25d-e454-452c-9912-f08d7569ecfa\") " pod="openstack/ovn-northd-0"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.811853 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48dbc25d-e454-452c-9912-f08d7569ecfa-config\") pod \"ovn-northd-0\" (UID: \"48dbc25d-e454-452c-9912-f08d7569ecfa\") " pod="openstack/ovn-northd-0"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.811945 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkvhj\" (UniqueName: \"kubernetes.io/projected/48dbc25d-e454-452c-9912-f08d7569ecfa-kube-api-access-dkvhj\") pod \"ovn-northd-0\" (UID: \"48dbc25d-e454-452c-9912-f08d7569ecfa\") " pod="openstack/ovn-northd-0"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.812020 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48dbc25d-e454-452c-9912-f08d7569ecfa-scripts\") pod \"ovn-northd-0\" (UID: \"48dbc25d-e454-452c-9912-f08d7569ecfa\") " pod="openstack/ovn-northd-0"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.812117 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/48dbc25d-e454-452c-9912-f08d7569ecfa-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"48dbc25d-e454-452c-9912-f08d7569ecfa\") " pod="openstack/ovn-northd-0"
Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.812305 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48dbc25d-e454-452c-9912-f08d7569ecfa-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"48dbc25d-e454-452c-9912-f08d7569ecfa\") " pod="openstack/ovn-northd-0" Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.812521 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/48dbc25d-e454-452c-9912-f08d7569ecfa-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"48dbc25d-e454-452c-9912-f08d7569ecfa\") " pod="openstack/ovn-northd-0" Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.814422 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48dbc25d-e454-452c-9912-f08d7569ecfa-config\") pod \"ovn-northd-0\" (UID: \"48dbc25d-e454-452c-9912-f08d7569ecfa\") " pod="openstack/ovn-northd-0" Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.815927 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48dbc25d-e454-452c-9912-f08d7569ecfa-scripts\") pod \"ovn-northd-0\" (UID: \"48dbc25d-e454-452c-9912-f08d7569ecfa\") " pod="openstack/ovn-northd-0" Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.815983 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/48dbc25d-e454-452c-9912-f08d7569ecfa-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"48dbc25d-e454-452c-9912-f08d7569ecfa\") " pod="openstack/ovn-northd-0" Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.826792 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48dbc25d-e454-452c-9912-f08d7569ecfa-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"48dbc25d-e454-452c-9912-f08d7569ecfa\") " pod="openstack/ovn-northd-0" Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.838138 
4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/48dbc25d-e454-452c-9912-f08d7569ecfa-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"48dbc25d-e454-452c-9912-f08d7569ecfa\") " pod="openstack/ovn-northd-0" Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.839885 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/48dbc25d-e454-452c-9912-f08d7569ecfa-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"48dbc25d-e454-452c-9912-f08d7569ecfa\") " pod="openstack/ovn-northd-0" Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.847002 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-7jmdw"] Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.848739 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-7jmdw" Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.859790 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.867854 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkvhj\" (UniqueName: \"kubernetes.io/projected/48dbc25d-e454-452c-9912-f08d7569ecfa-kube-api-access-dkvhj\") pod \"ovn-northd-0\" (UID: \"48dbc25d-e454-452c-9912-f08d7569ecfa\") " pod="openstack/ovn-northd-0" Feb 17 16:23:06 crc kubenswrapper[4874]: I0217 16:23:06.888136 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-7jmdw"] Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.024820 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqvwj\" (UniqueName: \"kubernetes.io/projected/63c7af35-e957-4bca-ba65-13b706314f83-kube-api-access-hqvwj\") pod 
\"dnsmasq-dns-8554648995-7jmdw\" (UID: \"63c7af35-e957-4bca-ba65-13b706314f83\") " pod="openstack/dnsmasq-dns-8554648995-7jmdw" Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.025143 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63c7af35-e957-4bca-ba65-13b706314f83-config\") pod \"dnsmasq-dns-8554648995-7jmdw\" (UID: \"63c7af35-e957-4bca-ba65-13b706314f83\") " pod="openstack/dnsmasq-dns-8554648995-7jmdw" Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.025177 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63c7af35-e957-4bca-ba65-13b706314f83-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-7jmdw\" (UID: \"63c7af35-e957-4bca-ba65-13b706314f83\") " pod="openstack/dnsmasq-dns-8554648995-7jmdw" Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.025218 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63c7af35-e957-4bca-ba65-13b706314f83-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-7jmdw\" (UID: \"63c7af35-e957-4bca-ba65-13b706314f83\") " pod="openstack/dnsmasq-dns-8554648995-7jmdw" Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.025236 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63c7af35-e957-4bca-ba65-13b706314f83-dns-svc\") pod \"dnsmasq-dns-8554648995-7jmdw\" (UID: \"63c7af35-e957-4bca-ba65-13b706314f83\") " pod="openstack/dnsmasq-dns-8554648995-7jmdw" Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.057254 4874 generic.go:334] "Generic (PLEG): container finished" podID="5c1c05ac-d530-4e31-8b72-64e164aecf85" containerID="9424c73f428c0956b6561a3b66ce5376ee3777772930a472de635abd1d8e9bf0" exitCode=0 Feb 17 
16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.057728 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-ktlkv" podUID="9d38985d-0256-4859-9067-9ab1a3af1055" containerName="dnsmasq-dns" containerID="cri-o://8b671182d3e7e5ca45dc6fe4724c43aa4fc63fa69304ad3bb5c8fbfd6aaba5ef" gracePeriod=10 Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.057812 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-4rz47" event={"ID":"5c1c05ac-d530-4e31-8b72-64e164aecf85","Type":"ContainerDied","Data":"9424c73f428c0956b6561a3b66ce5376ee3777772930a472de635abd1d8e9bf0"} Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.060331 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.064869 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.128259 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqvwj\" (UniqueName: \"kubernetes.io/projected/63c7af35-e957-4bca-ba65-13b706314f83-kube-api-access-hqvwj\") pod \"dnsmasq-dns-8554648995-7jmdw\" (UID: \"63c7af35-e957-4bca-ba65-13b706314f83\") " pod="openstack/dnsmasq-dns-8554648995-7jmdw" Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.128316 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63c7af35-e957-4bca-ba65-13b706314f83-config\") pod \"dnsmasq-dns-8554648995-7jmdw\" (UID: \"63c7af35-e957-4bca-ba65-13b706314f83\") " pod="openstack/dnsmasq-dns-8554648995-7jmdw" Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.128359 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/63c7af35-e957-4bca-ba65-13b706314f83-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-7jmdw\" (UID: \"63c7af35-e957-4bca-ba65-13b706314f83\") " pod="openstack/dnsmasq-dns-8554648995-7jmdw" Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.128417 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63c7af35-e957-4bca-ba65-13b706314f83-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-7jmdw\" (UID: \"63c7af35-e957-4bca-ba65-13b706314f83\") " pod="openstack/dnsmasq-dns-8554648995-7jmdw" Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.128445 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63c7af35-e957-4bca-ba65-13b706314f83-dns-svc\") pod \"dnsmasq-dns-8554648995-7jmdw\" (UID: \"63c7af35-e957-4bca-ba65-13b706314f83\") " pod="openstack/dnsmasq-dns-8554648995-7jmdw" Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.129744 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63c7af35-e957-4bca-ba65-13b706314f83-dns-svc\") pod \"dnsmasq-dns-8554648995-7jmdw\" (UID: \"63c7af35-e957-4bca-ba65-13b706314f83\") " pod="openstack/dnsmasq-dns-8554648995-7jmdw" Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.130442 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63c7af35-e957-4bca-ba65-13b706314f83-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-7jmdw\" (UID: \"63c7af35-e957-4bca-ba65-13b706314f83\") " pod="openstack/dnsmasq-dns-8554648995-7jmdw" Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.132099 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63c7af35-e957-4bca-ba65-13b706314f83-ovsdbserver-sb\") pod 
\"dnsmasq-dns-8554648995-7jmdw\" (UID: \"63c7af35-e957-4bca-ba65-13b706314f83\") " pod="openstack/dnsmasq-dns-8554648995-7jmdw" Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.133269 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63c7af35-e957-4bca-ba65-13b706314f83-config\") pod \"dnsmasq-dns-8554648995-7jmdw\" (UID: \"63c7af35-e957-4bca-ba65-13b706314f83\") " pod="openstack/dnsmasq-dns-8554648995-7jmdw" Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.151299 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqvwj\" (UniqueName: \"kubernetes.io/projected/63c7af35-e957-4bca-ba65-13b706314f83-kube-api-access-hqvwj\") pod \"dnsmasq-dns-8554648995-7jmdw\" (UID: \"63c7af35-e957-4bca-ba65-13b706314f83\") " pod="openstack/dnsmasq-dns-8554648995-7jmdw" Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.195160 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-7jmdw" Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.226798 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.267617 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.352671 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-864zn"] Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.396936 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.489885 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-rb8lr"] Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.756191 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-2582-account-create-update-h89p2"] Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.760280 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-2582-account-create-update-h89p2" Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.762689 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.789290 4874 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.792470 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2582-account-create-update-h89p2"] Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.814533 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.859130 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16a736ae-9a4f-4803-ade8-2088a03e9b75-operator-scripts\") pod \"keystone-2582-account-create-update-h89p2\" (UID: \"16a736ae-9a4f-4803-ade8-2088a03e9b75\") " pod="openstack/keystone-2582-account-create-update-h89p2" Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.859199 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jthl\" (UniqueName: \"kubernetes.io/projected/16a736ae-9a4f-4803-ade8-2088a03e9b75-kube-api-access-8jthl\") pod \"keystone-2582-account-create-update-h89p2\" (UID: \"16a736ae-9a4f-4803-ade8-2088a03e9b75\") " pod="openstack/keystone-2582-account-create-update-h89p2" Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.862686 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-rthxd"] Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.864069 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-rthxd" Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.872203 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-rthxd"] Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.904900 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-7jmdw"] Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.943242 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-4j7m8"] Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.944643 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-4j7m8" Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.964946 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk54g\" (UniqueName: \"kubernetes.io/projected/35de0e21-b2b6-482c-a5b0-01b20b85fd46-kube-api-access-mk54g\") pod \"keystone-db-create-rthxd\" (UID: \"35de0e21-b2b6-482c-a5b0-01b20b85fd46\") " pod="openstack/keystone-db-create-rthxd" Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.965118 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16a736ae-9a4f-4803-ade8-2088a03e9b75-operator-scripts\") pod \"keystone-2582-account-create-update-h89p2\" (UID: \"16a736ae-9a4f-4803-ade8-2088a03e9b75\") " pod="openstack/keystone-2582-account-create-update-h89p2" Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.965191 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jthl\" (UniqueName: \"kubernetes.io/projected/16a736ae-9a4f-4803-ade8-2088a03e9b75-kube-api-access-8jthl\") pod \"keystone-2582-account-create-update-h89p2\" (UID: \"16a736ae-9a4f-4803-ade8-2088a03e9b75\") " pod="openstack/keystone-2582-account-create-update-h89p2" Feb 17 
16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.965273 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35de0e21-b2b6-482c-a5b0-01b20b85fd46-operator-scripts\") pod \"keystone-db-create-rthxd\" (UID: \"35de0e21-b2b6-482c-a5b0-01b20b85fd46\") " pod="openstack/keystone-db-create-rthxd" Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.966315 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16a736ae-9a4f-4803-ade8-2088a03e9b75-operator-scripts\") pod \"keystone-2582-account-create-update-h89p2\" (UID: \"16a736ae-9a4f-4803-ade8-2088a03e9b75\") " pod="openstack/keystone-2582-account-create-update-h89p2" Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.974727 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-4j7m8"] Feb 17 16:23:07 crc kubenswrapper[4874]: I0217 16:23:07.986881 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jthl\" (UniqueName: \"kubernetes.io/projected/16a736ae-9a4f-4803-ade8-2088a03e9b75-kube-api-access-8jthl\") pod \"keystone-2582-account-create-update-h89p2\" (UID: \"16a736ae-9a4f-4803-ade8-2088a03e9b75\") " pod="openstack/keystone-2582-account-create-update-h89p2" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:07.998868 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-4rz47" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.039868 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-f5c3-account-create-update-s4xs5"] Feb 17 16:23:08 crc kubenswrapper[4874]: E0217 16:23:08.040342 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c1c05ac-d530-4e31-8b72-64e164aecf85" containerName="dnsmasq-dns" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.040364 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c1c05ac-d530-4e31-8b72-64e164aecf85" containerName="dnsmasq-dns" Feb 17 16:23:08 crc kubenswrapper[4874]: E0217 16:23:08.040389 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c1c05ac-d530-4e31-8b72-64e164aecf85" containerName="init" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.040396 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c1c05ac-d530-4e31-8b72-64e164aecf85" containerName="init" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.040585 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c1c05ac-d530-4e31-8b72-64e164aecf85" containerName="dnsmasq-dns" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.041326 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f5c3-account-create-update-s4xs5" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.044888 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.067775 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4lk\" (UniqueName: \"kubernetes.io/projected/5c1c05ac-d530-4e31-8b72-64e164aecf85-kube-api-access-7c4lk\") pod \"5c1c05ac-d530-4e31-8b72-64e164aecf85\" (UID: \"5c1c05ac-d530-4e31-8b72-64e164aecf85\") " Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.068832 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c1c05ac-d530-4e31-8b72-64e164aecf85-dns-svc\") pod \"5c1c05ac-d530-4e31-8b72-64e164aecf85\" (UID: \"5c1c05ac-d530-4e31-8b72-64e164aecf85\") " Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.068908 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c1c05ac-d530-4e31-8b72-64e164aecf85-config\") pod \"5c1c05ac-d530-4e31-8b72-64e164aecf85\" (UID: \"5c1c05ac-d530-4e31-8b72-64e164aecf85\") " Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.069189 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk54g\" (UniqueName: \"kubernetes.io/projected/35de0e21-b2b6-482c-a5b0-01b20b85fd46-kube-api-access-mk54g\") pod \"keystone-db-create-rthxd\" (UID: \"35de0e21-b2b6-482c-a5b0-01b20b85fd46\") " pod="openstack/keystone-db-create-rthxd" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.069330 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35de0e21-b2b6-482c-a5b0-01b20b85fd46-operator-scripts\") pod \"keystone-db-create-rthxd\" (UID: 
\"35de0e21-b2b6-482c-a5b0-01b20b85fd46\") " pod="openstack/keystone-db-create-rthxd" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.069397 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfdbn\" (UniqueName: \"kubernetes.io/projected/77ba04c2-b4a6-4ce7-b644-2c1a58c18ba3-kube-api-access-dfdbn\") pod \"placement-db-create-4j7m8\" (UID: \"77ba04c2-b4a6-4ce7-b644-2c1a58c18ba3\") " pod="openstack/placement-db-create-4j7m8" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.069420 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77ba04c2-b4a6-4ce7-b644-2c1a58c18ba3-operator-scripts\") pod \"placement-db-create-4j7m8\" (UID: \"77ba04c2-b4a6-4ce7-b644-2c1a58c18ba3\") " pod="openstack/placement-db-create-4j7m8" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.071112 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35de0e21-b2b6-482c-a5b0-01b20b85fd46-operator-scripts\") pod \"keystone-db-create-rthxd\" (UID: \"35de0e21-b2b6-482c-a5b0-01b20b85fd46\") " pod="openstack/keystone-db-create-rthxd" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.080691 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c1c05ac-d530-4e31-8b72-64e164aecf85-kube-api-access-7c4lk" (OuterVolumeSpecName: "kube-api-access-7c4lk") pod "5c1c05ac-d530-4e31-8b72-64e164aecf85" (UID: "5c1c05ac-d530-4e31-8b72-64e164aecf85"). InnerVolumeSpecName "kube-api-access-7c4lk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.081934 4874 generic.go:334] "Generic (PLEG): container finished" podID="c5df12a4-f6fc-46fb-b65a-ccf21a5bf115" containerID="1a87819e09eaea64427f7e197d0a167a1165fd8d7a26e1ea28d3e7aa5a7ce4f6" exitCode=0 Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.082004 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115","Type":"ContainerDied","Data":"1a87819e09eaea64427f7e197d0a167a1165fd8d7a26e1ea28d3e7aa5a7ce4f6"} Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.101731 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk54g\" (UniqueName: \"kubernetes.io/projected/35de0e21-b2b6-482c-a5b0-01b20b85fd46-kube-api-access-mk54g\") pod \"keystone-db-create-rthxd\" (UID: \"35de0e21-b2b6-482c-a5b0-01b20b85fd46\") " pod="openstack/keystone-db-create-rthxd" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.103543 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-7jmdw" event={"ID":"63c7af35-e957-4bca-ba65-13b706314f83","Type":"ContainerStarted","Data":"4c21e04a816d194f5eed42637c3d63588a06ac3d017a7693d836828ce8d64fee"} Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.104774 4874 generic.go:334] "Generic (PLEG): container finished" podID="2b9035e4-51ae-4ed9-a708-6285df982d94" containerID="1c6d5a9130ccdc551a2e87c92225e0674e76386ad38bd7e55dd62590207be40a" exitCode=0 Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.104850 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-864zn" event={"ID":"2b9035e4-51ae-4ed9-a708-6285df982d94","Type":"ContainerDied","Data":"1c6d5a9130ccdc551a2e87c92225e0674e76386ad38bd7e55dd62590207be40a"} Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.104876 4874 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-864zn" event={"ID":"2b9035e4-51ae-4ed9-a708-6285df982d94","Type":"ContainerStarted","Data":"84071db6f66565ce0ecb4cdf3b7942f0eb7ab1fd1af9dc0c4ed3c1a097a0fffd"} Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.109969 4874 generic.go:334] "Generic (PLEG): container finished" podID="9d38985d-0256-4859-9067-9ab1a3af1055" containerID="8b671182d3e7e5ca45dc6fe4724c43aa4fc63fa69304ad3bb5c8fbfd6aaba5ef" exitCode=0 Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.110267 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-ktlkv" event={"ID":"9d38985d-0256-4859-9067-9ab1a3af1055","Type":"ContainerDied","Data":"8b671182d3e7e5ca45dc6fe4724c43aa4fc63fa69304ad3bb5c8fbfd6aaba5ef"} Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.112690 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"48dbc25d-e454-452c-9912-f08d7569ecfa","Type":"ContainerStarted","Data":"0f852955b3577dae0e0b1d01a2d968b79363524daa9741066f12a5948a1efcfe"} Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.114904 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-4rz47" event={"ID":"5c1c05ac-d530-4e31-8b72-64e164aecf85","Type":"ContainerDied","Data":"cabe6884ea10bb7ab7fd7c195af7268350dec27bde0e0b7527fe457652d0a97d"} Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.114946 4874 scope.go:117] "RemoveContainer" containerID="9424c73f428c0956b6561a3b66ce5376ee3777772930a472de635abd1d8e9bf0" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.115087 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-4rz47" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.126303 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f5c3-account-create-update-s4xs5"] Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.126638 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-rb8lr" event={"ID":"8a7189b3-10c5-4fe6-99c9-f3ec64fe159b","Type":"ContainerStarted","Data":"744c67fe6dcdc175232e542d9dd5c24efa16b5f6343b86ca095583124668248a"} Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.126672 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-rb8lr" event={"ID":"8a7189b3-10c5-4fe6-99c9-f3ec64fe159b","Type":"ContainerStarted","Data":"52759563753f363ed92b8913ce79a47a1a2043afa0c144fedcbddfb229fdb795"} Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.145565 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c1c05ac-d530-4e31-8b72-64e164aecf85-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5c1c05ac-d530-4e31-8b72-64e164aecf85" (UID: "5c1c05ac-d530-4e31-8b72-64e164aecf85"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.152528 4874 scope.go:117] "RemoveContainer" containerID="d10af8138fa63d60ee06e5868d8214e7cf66628cf2a96ae8c01059bd287b553a" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.162335 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c1c05ac-d530-4e31-8b72-64e164aecf85-config" (OuterVolumeSpecName: "config") pod "5c1c05ac-d530-4e31-8b72-64e164aecf85" (UID: "5c1c05ac-d530-4e31-8b72-64e164aecf85"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.172442 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82e5efee-d739-4300-bc49-181df5481246-operator-scripts\") pod \"placement-f5c3-account-create-update-s4xs5\" (UID: \"82e5efee-d739-4300-bc49-181df5481246\") " pod="openstack/placement-f5c3-account-create-update-s4xs5" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.172514 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpnhd\" (UniqueName: \"kubernetes.io/projected/82e5efee-d739-4300-bc49-181df5481246-kube-api-access-rpnhd\") pod \"placement-f5c3-account-create-update-s4xs5\" (UID: \"82e5efee-d739-4300-bc49-181df5481246\") " pod="openstack/placement-f5c3-account-create-update-s4xs5" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.172548 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfdbn\" (UniqueName: \"kubernetes.io/projected/77ba04c2-b4a6-4ce7-b644-2c1a58c18ba3-kube-api-access-dfdbn\") pod \"placement-db-create-4j7m8\" (UID: \"77ba04c2-b4a6-4ce7-b644-2c1a58c18ba3\") " pod="openstack/placement-db-create-4j7m8" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.172576 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77ba04c2-b4a6-4ce7-b644-2c1a58c18ba3-operator-scripts\") pod \"placement-db-create-4j7m8\" (UID: \"77ba04c2-b4a6-4ce7-b644-2c1a58c18ba3\") " pod="openstack/placement-db-create-4j7m8" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.172724 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4lk\" (UniqueName: \"kubernetes.io/projected/5c1c05ac-d530-4e31-8b72-64e164aecf85-kube-api-access-7c4lk\") on node \"crc\" 
DevicePath \"\"" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.172741 4874 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c1c05ac-d530-4e31-8b72-64e164aecf85-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.172749 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c1c05ac-d530-4e31-8b72-64e164aecf85-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.175815 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77ba04c2-b4a6-4ce7-b644-2c1a58c18ba3-operator-scripts\") pod \"placement-db-create-4j7m8\" (UID: \"77ba04c2-b4a6-4ce7-b644-2c1a58c18ba3\") " pod="openstack/placement-db-create-4j7m8" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.189539 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-ktlkv" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.193912 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfdbn\" (UniqueName: \"kubernetes.io/projected/77ba04c2-b4a6-4ce7-b644-2c1a58c18ba3-kube-api-access-dfdbn\") pod \"placement-db-create-4j7m8\" (UID: \"77ba04c2-b4a6-4ce7-b644-2c1a58c18ba3\") " pod="openstack/placement-db-create-4j7m8" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.248749 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-rb8lr" podStartSLOduration=2.248726861 podStartE2EDuration="2.248726861s" podCreationTimestamp="2026-02-17 16:23:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:23:08.183262931 +0000 UTC m=+1198.477651482" watchObservedRunningTime="2026-02-17 16:23:08.248726861 +0000 UTC m=+1198.543115422" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.273926 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7587\" (UniqueName: \"kubernetes.io/projected/9d38985d-0256-4859-9067-9ab1a3af1055-kube-api-access-j7587\") pod \"9d38985d-0256-4859-9067-9ab1a3af1055\" (UID: \"9d38985d-0256-4859-9067-9ab1a3af1055\") " Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.274018 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d38985d-0256-4859-9067-9ab1a3af1055-config\") pod \"9d38985d-0256-4859-9067-9ab1a3af1055\" (UID: \"9d38985d-0256-4859-9067-9ab1a3af1055\") " Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.274143 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d38985d-0256-4859-9067-9ab1a3af1055-dns-svc\") pod 
\"9d38985d-0256-4859-9067-9ab1a3af1055\" (UID: \"9d38985d-0256-4859-9067-9ab1a3af1055\") " Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.274513 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82e5efee-d739-4300-bc49-181df5481246-operator-scripts\") pod \"placement-f5c3-account-create-update-s4xs5\" (UID: \"82e5efee-d739-4300-bc49-181df5481246\") " pod="openstack/placement-f5c3-account-create-update-s4xs5" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.274566 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpnhd\" (UniqueName: \"kubernetes.io/projected/82e5efee-d739-4300-bc49-181df5481246-kube-api-access-rpnhd\") pod \"placement-f5c3-account-create-update-s4xs5\" (UID: \"82e5efee-d739-4300-bc49-181df5481246\") " pod="openstack/placement-f5c3-account-create-update-s4xs5" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.277390 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82e5efee-d739-4300-bc49-181df5481246-operator-scripts\") pod \"placement-f5c3-account-create-update-s4xs5\" (UID: \"82e5efee-d739-4300-bc49-181df5481246\") " pod="openstack/placement-f5c3-account-create-update-s4xs5" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.278923 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d38985d-0256-4859-9067-9ab1a3af1055-kube-api-access-j7587" (OuterVolumeSpecName: "kube-api-access-j7587") pod "9d38985d-0256-4859-9067-9ab1a3af1055" (UID: "9d38985d-0256-4859-9067-9ab1a3af1055"). InnerVolumeSpecName "kube-api-access-j7587". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.286498 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-2582-account-create-update-h89p2" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.295970 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpnhd\" (UniqueName: \"kubernetes.io/projected/82e5efee-d739-4300-bc49-181df5481246-kube-api-access-rpnhd\") pod \"placement-f5c3-account-create-update-s4xs5\" (UID: \"82e5efee-d739-4300-bc49-181df5481246\") " pod="openstack/placement-f5c3-account-create-update-s4xs5" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.299231 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-rthxd" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.330998 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d38985d-0256-4859-9067-9ab1a3af1055-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9d38985d-0256-4859-9067-9ab1a3af1055" (UID: "9d38985d-0256-4859-9067-9ab1a3af1055"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.343823 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-4j7m8" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.349679 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d38985d-0256-4859-9067-9ab1a3af1055-config" (OuterVolumeSpecName: "config") pod "9d38985d-0256-4859-9067-9ab1a3af1055" (UID: "9d38985d-0256-4859-9067-9ab1a3af1055"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.368233 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f5c3-account-create-update-s4xs5" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.376814 4874 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d38985d-0256-4859-9067-9ab1a3af1055-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.376843 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7587\" (UniqueName: \"kubernetes.io/projected/9d38985d-0256-4859-9067-9ab1a3af1055-kube-api-access-j7587\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.376853 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d38985d-0256-4859-9067-9ab1a3af1055-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.506686 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4rz47"] Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.519443 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4rz47"] Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.958088 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-rthxd"] Feb 17 16:23:08 crc kubenswrapper[4874]: I0217 16:23:08.965815 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2582-account-create-update-h89p2"] Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.122295 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-4j7m8"] Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.135448 4874 generic.go:334] "Generic (PLEG): container finished" podID="63c7af35-e957-4bca-ba65-13b706314f83" containerID="8b759575b27b11a6c1e889a7929655c2edfd58ea36e0445dc822705e16690887" exitCode=0 Feb 17 16:23:09 crc kubenswrapper[4874]: 
I0217 16:23:09.135496 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-7jmdw" event={"ID":"63c7af35-e957-4bca-ba65-13b706314f83","Type":"ContainerDied","Data":"8b759575b27b11a6c1e889a7929655c2edfd58ea36e0445dc822705e16690887"} Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.138970 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-864zn" event={"ID":"2b9035e4-51ae-4ed9-a708-6285df982d94","Type":"ContainerStarted","Data":"e2739fc473179afe6350a31fd1025e8c367aaa111086ce1e346c86e8c71b9d8e"} Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.139877 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bc7876d45-864zn" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.142095 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f5c3-account-create-update-s4xs5"] Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.149957 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-ktlkv" event={"ID":"9d38985d-0256-4859-9067-9ab1a3af1055","Type":"ContainerDied","Data":"4f48340f8cfeb744c425ebf8297fa3c5da11c09234782016b1676e6c90ffb5fb"} Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.150022 4874 scope.go:117] "RemoveContainer" containerID="8b671182d3e7e5ca45dc6fe4724c43aa4fc63fa69304ad3bb5c8fbfd6aaba5ef" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.150150 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-ktlkv" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.208433 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-q8x4r"] Feb 17 16:23:09 crc kubenswrapper[4874]: E0217 16:23:09.208879 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d38985d-0256-4859-9067-9ab1a3af1055" containerName="init" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.208894 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d38985d-0256-4859-9067-9ab1a3af1055" containerName="init" Feb 17 16:23:09 crc kubenswrapper[4874]: E0217 16:23:09.208941 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d38985d-0256-4859-9067-9ab1a3af1055" containerName="dnsmasq-dns" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.208949 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d38985d-0256-4859-9067-9ab1a3af1055" containerName="dnsmasq-dns" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.209137 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d38985d-0256-4859-9067-9ab1a3af1055" containerName="dnsmasq-dns" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.209825 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-q8x4r" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.220511 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-q8x4r"] Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.223069 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bc7876d45-864zn" podStartSLOduration=3.223049992 podStartE2EDuration="3.223049992s" podCreationTimestamp="2026-02-17 16:23:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:23:09.210883351 +0000 UTC m=+1199.505271912" watchObservedRunningTime="2026-02-17 16:23:09.223049992 +0000 UTC m=+1199.517438563" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.251521 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-1f5a-account-create-update-c9qms"] Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.252975 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-1f5a-account-create-update-c9qms" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.267035 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.274201 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ktlkv"] Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.300952 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ktlkv"] Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.306997 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af707444-663f-458c-a1a2-88d51f97bc68-operator-scripts\") pod \"mysqld-exporter-1f5a-account-create-update-c9qms\" (UID: \"af707444-663f-458c-a1a2-88d51f97bc68\") " pod="openstack/mysqld-exporter-1f5a-account-create-update-c9qms" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.307112 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnzrh\" (UniqueName: \"kubernetes.io/projected/af707444-663f-458c-a1a2-88d51f97bc68-kube-api-access-tnzrh\") pod \"mysqld-exporter-1f5a-account-create-update-c9qms\" (UID: \"af707444-663f-458c-a1a2-88d51f97bc68\") " pod="openstack/mysqld-exporter-1f5a-account-create-update-c9qms" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.307200 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vnpt\" (UniqueName: \"kubernetes.io/projected/7a138fbf-e69e-4981-a7f0-b399fbbb7088-kube-api-access-9vnpt\") pod \"mysqld-exporter-openstack-db-create-q8x4r\" (UID: \"7a138fbf-e69e-4981-a7f0-b399fbbb7088\") " pod="openstack/mysqld-exporter-openstack-db-create-q8x4r" Feb 17 16:23:09 crc 
kubenswrapper[4874]: I0217 16:23:09.307248 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a138fbf-e69e-4981-a7f0-b399fbbb7088-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-q8x4r\" (UID: \"7a138fbf-e69e-4981-a7f0-b399fbbb7088\") " pod="openstack/mysqld-exporter-openstack-db-create-q8x4r" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.356138 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-1f5a-account-create-update-c9qms"] Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.411642 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a138fbf-e69e-4981-a7f0-b399fbbb7088-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-q8x4r\" (UID: \"7a138fbf-e69e-4981-a7f0-b399fbbb7088\") " pod="openstack/mysqld-exporter-openstack-db-create-q8x4r" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.411746 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af707444-663f-458c-a1a2-88d51f97bc68-operator-scripts\") pod \"mysqld-exporter-1f5a-account-create-update-c9qms\" (UID: \"af707444-663f-458c-a1a2-88d51f97bc68\") " pod="openstack/mysqld-exporter-1f5a-account-create-update-c9qms" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.411846 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnzrh\" (UniqueName: \"kubernetes.io/projected/af707444-663f-458c-a1a2-88d51f97bc68-kube-api-access-tnzrh\") pod \"mysqld-exporter-1f5a-account-create-update-c9qms\" (UID: \"af707444-663f-458c-a1a2-88d51f97bc68\") " pod="openstack/mysqld-exporter-1f5a-account-create-update-c9qms" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.411954 4874 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9vnpt\" (UniqueName: \"kubernetes.io/projected/7a138fbf-e69e-4981-a7f0-b399fbbb7088-kube-api-access-9vnpt\") pod \"mysqld-exporter-openstack-db-create-q8x4r\" (UID: \"7a138fbf-e69e-4981-a7f0-b399fbbb7088\") " pod="openstack/mysqld-exporter-openstack-db-create-q8x4r" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.413015 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a138fbf-e69e-4981-a7f0-b399fbbb7088-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-q8x4r\" (UID: \"7a138fbf-e69e-4981-a7f0-b399fbbb7088\") " pod="openstack/mysqld-exporter-openstack-db-create-q8x4r" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.413552 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af707444-663f-458c-a1a2-88d51f97bc68-operator-scripts\") pod \"mysqld-exporter-1f5a-account-create-update-c9qms\" (UID: \"af707444-663f-458c-a1a2-88d51f97bc68\") " pod="openstack/mysqld-exporter-1f5a-account-create-update-c9qms" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.435561 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-864zn"] Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.440450 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.452318 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-spnx4"] Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.453922 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-spnx4" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.455636 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnzrh\" (UniqueName: \"kubernetes.io/projected/af707444-663f-458c-a1a2-88d51f97bc68-kube-api-access-tnzrh\") pod \"mysqld-exporter-1f5a-account-create-update-c9qms\" (UID: \"af707444-663f-458c-a1a2-88d51f97bc68\") " pod="openstack/mysqld-exporter-1f5a-account-create-update-c9qms" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.457536 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vnpt\" (UniqueName: \"kubernetes.io/projected/7a138fbf-e69e-4981-a7f0-b399fbbb7088-kube-api-access-9vnpt\") pod \"mysqld-exporter-openstack-db-create-q8x4r\" (UID: \"7a138fbf-e69e-4981-a7f0-b399fbbb7088\") " pod="openstack/mysqld-exporter-openstack-db-create-q8x4r" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.495837 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-spnx4"] Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.521690 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwmgq\" (UniqueName: \"kubernetes.io/projected/3c54a6b1-bb00-46fc-91bf-d0c312daceb6-kube-api-access-kwmgq\") pod \"dnsmasq-dns-b8fbc5445-spnx4\" (UID: \"3c54a6b1-bb00-46fc-91bf-d0c312daceb6\") " pod="openstack/dnsmasq-dns-b8fbc5445-spnx4" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.521751 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c54a6b1-bb00-46fc-91bf-d0c312daceb6-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-spnx4\" (UID: \"3c54a6b1-bb00-46fc-91bf-d0c312daceb6\") " pod="openstack/dnsmasq-dns-b8fbc5445-spnx4" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.521788 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c54a6b1-bb00-46fc-91bf-d0c312daceb6-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-spnx4\" (UID: \"3c54a6b1-bb00-46fc-91bf-d0c312daceb6\") " pod="openstack/dnsmasq-dns-b8fbc5445-spnx4" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.521817 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c54a6b1-bb00-46fc-91bf-d0c312daceb6-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-spnx4\" (UID: \"3c54a6b1-bb00-46fc-91bf-d0c312daceb6\") " pod="openstack/dnsmasq-dns-b8fbc5445-spnx4" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.521868 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c54a6b1-bb00-46fc-91bf-d0c312daceb6-config\") pod \"dnsmasq-dns-b8fbc5445-spnx4\" (UID: \"3c54a6b1-bb00-46fc-91bf-d0c312daceb6\") " pod="openstack/dnsmasq-dns-b8fbc5445-spnx4" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.555421 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-q8x4r" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.619135 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-1f5a-account-create-update-c9qms" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.626033 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c54a6b1-bb00-46fc-91bf-d0c312daceb6-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-spnx4\" (UID: \"3c54a6b1-bb00-46fc-91bf-d0c312daceb6\") " pod="openstack/dnsmasq-dns-b8fbc5445-spnx4" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.626089 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c54a6b1-bb00-46fc-91bf-d0c312daceb6-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-spnx4\" (UID: \"3c54a6b1-bb00-46fc-91bf-d0c312daceb6\") " pod="openstack/dnsmasq-dns-b8fbc5445-spnx4" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.626120 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c54a6b1-bb00-46fc-91bf-d0c312daceb6-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-spnx4\" (UID: \"3c54a6b1-bb00-46fc-91bf-d0c312daceb6\") " pod="openstack/dnsmasq-dns-b8fbc5445-spnx4" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.626155 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c54a6b1-bb00-46fc-91bf-d0c312daceb6-config\") pod \"dnsmasq-dns-b8fbc5445-spnx4\" (UID: \"3c54a6b1-bb00-46fc-91bf-d0c312daceb6\") " pod="openstack/dnsmasq-dns-b8fbc5445-spnx4" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.626277 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwmgq\" (UniqueName: \"kubernetes.io/projected/3c54a6b1-bb00-46fc-91bf-d0c312daceb6-kube-api-access-kwmgq\") pod \"dnsmasq-dns-b8fbc5445-spnx4\" (UID: \"3c54a6b1-bb00-46fc-91bf-d0c312daceb6\") " 
pod="openstack/dnsmasq-dns-b8fbc5445-spnx4" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.627800 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c54a6b1-bb00-46fc-91bf-d0c312daceb6-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-spnx4\" (UID: \"3c54a6b1-bb00-46fc-91bf-d0c312daceb6\") " pod="openstack/dnsmasq-dns-b8fbc5445-spnx4" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.629840 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c54a6b1-bb00-46fc-91bf-d0c312daceb6-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-spnx4\" (UID: \"3c54a6b1-bb00-46fc-91bf-d0c312daceb6\") " pod="openstack/dnsmasq-dns-b8fbc5445-spnx4" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.631203 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c54a6b1-bb00-46fc-91bf-d0c312daceb6-config\") pod \"dnsmasq-dns-b8fbc5445-spnx4\" (UID: \"3c54a6b1-bb00-46fc-91bf-d0c312daceb6\") " pod="openstack/dnsmasq-dns-b8fbc5445-spnx4" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.635746 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c54a6b1-bb00-46fc-91bf-d0c312daceb6-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-spnx4\" (UID: \"3c54a6b1-bb00-46fc-91bf-d0c312daceb6\") " pod="openstack/dnsmasq-dns-b8fbc5445-spnx4" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.678849 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwmgq\" (UniqueName: \"kubernetes.io/projected/3c54a6b1-bb00-46fc-91bf-d0c312daceb6-kube-api-access-kwmgq\") pod \"dnsmasq-dns-b8fbc5445-spnx4\" (UID: \"3c54a6b1-bb00-46fc-91bf-d0c312daceb6\") " pod="openstack/dnsmasq-dns-b8fbc5445-spnx4" Feb 17 16:23:09 crc kubenswrapper[4874]: I0217 16:23:09.899703 4874 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-spnx4" Feb 17 16:23:10 crc kubenswrapper[4874]: I0217 16:23:10.473513 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c1c05ac-d530-4e31-8b72-64e164aecf85" path="/var/lib/kubelet/pods/5c1c05ac-d530-4e31-8b72-64e164aecf85/volumes" Feb 17 16:23:10 crc kubenswrapper[4874]: I0217 16:23:10.474392 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d38985d-0256-4859-9067-9ab1a3af1055" path="/var/lib/kubelet/pods/9d38985d-0256-4859-9067-9ab1a3af1055/volumes" Feb 17 16:23:10 crc kubenswrapper[4874]: I0217 16:23:10.608662 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 17 16:23:10 crc kubenswrapper[4874]: I0217 16:23:10.619277 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 17 16:23:10 crc kubenswrapper[4874]: I0217 16:23:10.622408 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 17 16:23:10 crc kubenswrapper[4874]: I0217 16:23:10.622459 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 17 16:23:10 crc kubenswrapper[4874]: I0217 16:23:10.622430 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 17 16:23:10 crc kubenswrapper[4874]: I0217 16:23:10.622855 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-v7h87" Feb 17 16:23:10 crc kubenswrapper[4874]: I0217 16:23:10.645358 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 17 16:23:10 crc kubenswrapper[4874]: I0217 16:23:10.649651 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7fda3013-2526-48c1-ba34-9e8d1bb33e9f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"7fda3013-2526-48c1-ba34-9e8d1bb33e9f\") " pod="openstack/swift-storage-0" Feb 17 16:23:10 crc kubenswrapper[4874]: I0217 16:23:10.649754 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzwbt\" (UniqueName: \"kubernetes.io/projected/7fda3013-2526-48c1-ba34-9e8d1bb33e9f-kube-api-access-fzwbt\") pod \"swift-storage-0\" (UID: \"7fda3013-2526-48c1-ba34-9e8d1bb33e9f\") " pod="openstack/swift-storage-0" Feb 17 16:23:10 crc kubenswrapper[4874]: I0217 16:23:10.650162 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ef4e67bf-5c81-4ffd-b8b6-8955a39b6550\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef4e67bf-5c81-4ffd-b8b6-8955a39b6550\") pod \"swift-storage-0\" (UID: \"7fda3013-2526-48c1-ba34-9e8d1bb33e9f\") " pod="openstack/swift-storage-0" Feb 17 16:23:10 crc kubenswrapper[4874]: I0217 16:23:10.650269 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7fda3013-2526-48c1-ba34-9e8d1bb33e9f-etc-swift\") pod \"swift-storage-0\" (UID: \"7fda3013-2526-48c1-ba34-9e8d1bb33e9f\") " pod="openstack/swift-storage-0" Feb 17 16:23:10 crc kubenswrapper[4874]: I0217 16:23:10.650298 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7fda3013-2526-48c1-ba34-9e8d1bb33e9f-cache\") pod \"swift-storage-0\" (UID: \"7fda3013-2526-48c1-ba34-9e8d1bb33e9f\") " pod="openstack/swift-storage-0" Feb 17 16:23:10 crc kubenswrapper[4874]: I0217 16:23:10.666290 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7fda3013-2526-48c1-ba34-9e8d1bb33e9f-lock\") 
pod \"swift-storage-0\" (UID: \"7fda3013-2526-48c1-ba34-9e8d1bb33e9f\") " pod="openstack/swift-storage-0" Feb 17 16:23:10 crc kubenswrapper[4874]: I0217 16:23:10.768517 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzwbt\" (UniqueName: \"kubernetes.io/projected/7fda3013-2526-48c1-ba34-9e8d1bb33e9f-kube-api-access-fzwbt\") pod \"swift-storage-0\" (UID: \"7fda3013-2526-48c1-ba34-9e8d1bb33e9f\") " pod="openstack/swift-storage-0" Feb 17 16:23:10 crc kubenswrapper[4874]: I0217 16:23:10.768625 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ef4e67bf-5c81-4ffd-b8b6-8955a39b6550\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef4e67bf-5c81-4ffd-b8b6-8955a39b6550\") pod \"swift-storage-0\" (UID: \"7fda3013-2526-48c1-ba34-9e8d1bb33e9f\") " pod="openstack/swift-storage-0" Feb 17 16:23:10 crc kubenswrapper[4874]: I0217 16:23:10.768659 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7fda3013-2526-48c1-ba34-9e8d1bb33e9f-etc-swift\") pod \"swift-storage-0\" (UID: \"7fda3013-2526-48c1-ba34-9e8d1bb33e9f\") " pod="openstack/swift-storage-0" Feb 17 16:23:10 crc kubenswrapper[4874]: I0217 16:23:10.768674 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7fda3013-2526-48c1-ba34-9e8d1bb33e9f-cache\") pod \"swift-storage-0\" (UID: \"7fda3013-2526-48c1-ba34-9e8d1bb33e9f\") " pod="openstack/swift-storage-0" Feb 17 16:23:10 crc kubenswrapper[4874]: I0217 16:23:10.768739 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7fda3013-2526-48c1-ba34-9e8d1bb33e9f-lock\") pod \"swift-storage-0\" (UID: \"7fda3013-2526-48c1-ba34-9e8d1bb33e9f\") " pod="openstack/swift-storage-0" Feb 17 16:23:10 crc kubenswrapper[4874]: I0217 16:23:10.768801 4874 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fda3013-2526-48c1-ba34-9e8d1bb33e9f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"7fda3013-2526-48c1-ba34-9e8d1bb33e9f\") " pod="openstack/swift-storage-0" Feb 17 16:23:10 crc kubenswrapper[4874]: E0217 16:23:10.768912 4874 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 16:23:10 crc kubenswrapper[4874]: E0217 16:23:10.768953 4874 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 16:23:10 crc kubenswrapper[4874]: E0217 16:23:10.769016 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7fda3013-2526-48c1-ba34-9e8d1bb33e9f-etc-swift podName:7fda3013-2526-48c1-ba34-9e8d1bb33e9f nodeName:}" failed. No retries permitted until 2026-02-17 16:23:11.268994574 +0000 UTC m=+1201.563383145 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7fda3013-2526-48c1-ba34-9e8d1bb33e9f-etc-swift") pod "swift-storage-0" (UID: "7fda3013-2526-48c1-ba34-9e8d1bb33e9f") : configmap "swift-ring-files" not found Feb 17 16:23:10 crc kubenswrapper[4874]: I0217 16:23:10.770434 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7fda3013-2526-48c1-ba34-9e8d1bb33e9f-cache\") pod \"swift-storage-0\" (UID: \"7fda3013-2526-48c1-ba34-9e8d1bb33e9f\") " pod="openstack/swift-storage-0" Feb 17 16:23:10 crc kubenswrapper[4874]: I0217 16:23:10.770561 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7fda3013-2526-48c1-ba34-9e8d1bb33e9f-lock\") pod \"swift-storage-0\" (UID: \"7fda3013-2526-48c1-ba34-9e8d1bb33e9f\") " pod="openstack/swift-storage-0" Feb 17 16:23:10 crc kubenswrapper[4874]: I0217 16:23:10.771317 4874 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:23:10 crc kubenswrapper[4874]: I0217 16:23:10.771353 4874 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ef4e67bf-5c81-4ffd-b8b6-8955a39b6550\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef4e67bf-5c81-4ffd-b8b6-8955a39b6550\") pod \"swift-storage-0\" (UID: \"7fda3013-2526-48c1-ba34-9e8d1bb33e9f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a474f0426f9f3b8451d3e970ad2f8c00fbba551337062ae1b5a8996c9ef2eefc/globalmount\"" pod="openstack/swift-storage-0" Feb 17 16:23:10 crc kubenswrapper[4874]: I0217 16:23:10.773330 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fda3013-2526-48c1-ba34-9e8d1bb33e9f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"7fda3013-2526-48c1-ba34-9e8d1bb33e9f\") " pod="openstack/swift-storage-0" Feb 17 16:23:10 crc kubenswrapper[4874]: I0217 16:23:10.786492 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzwbt\" (UniqueName: \"kubernetes.io/projected/7fda3013-2526-48c1-ba34-9e8d1bb33e9f-kube-api-access-fzwbt\") pod \"swift-storage-0\" (UID: \"7fda3013-2526-48c1-ba34-9e8d1bb33e9f\") " pod="openstack/swift-storage-0" Feb 17 16:23:10 crc kubenswrapper[4874]: I0217 16:23:10.804963 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ef4e67bf-5c81-4ffd-b8b6-8955a39b6550\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef4e67bf-5c81-4ffd-b8b6-8955a39b6550\") pod \"swift-storage-0\" (UID: \"7fda3013-2526-48c1-ba34-9e8d1bb33e9f\") " pod="openstack/swift-storage-0" Feb 17 16:23:11 crc kubenswrapper[4874]: I0217 16:23:11.172570 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bc7876d45-864zn" podUID="2b9035e4-51ae-4ed9-a708-6285df982d94" containerName="dnsmasq-dns" 
containerID="cri-o://e2739fc473179afe6350a31fd1025e8c367aaa111086ce1e346c86e8c71b9d8e" gracePeriod=10 Feb 17 16:23:11 crc kubenswrapper[4874]: I0217 16:23:11.189883 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-vj2t6"] Feb 17 16:23:11 crc kubenswrapper[4874]: I0217 16:23:11.191194 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-vj2t6" Feb 17 16:23:11 crc kubenswrapper[4874]: I0217 16:23:11.193840 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 17 16:23:11 crc kubenswrapper[4874]: I0217 16:23:11.194120 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 17 16:23:11 crc kubenswrapper[4874]: I0217 16:23:11.194239 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 17 16:23:11 crc kubenswrapper[4874]: I0217 16:23:11.198787 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-vj2t6"] Feb 17 16:23:11 crc kubenswrapper[4874]: I0217 16:23:11.277965 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmww6\" (UniqueName: \"kubernetes.io/projected/99f3c575-721c-4e73-a4e3-e5497e1a3201-kube-api-access-zmww6\") pod \"swift-ring-rebalance-vj2t6\" (UID: \"99f3c575-721c-4e73-a4e3-e5497e1a3201\") " pod="openstack/swift-ring-rebalance-vj2t6" Feb 17 16:23:11 crc kubenswrapper[4874]: I0217 16:23:11.278013 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/99f3c575-721c-4e73-a4e3-e5497e1a3201-etc-swift\") pod \"swift-ring-rebalance-vj2t6\" (UID: \"99f3c575-721c-4e73-a4e3-e5497e1a3201\") " pod="openstack/swift-ring-rebalance-vj2t6" Feb 17 16:23:11 crc kubenswrapper[4874]: I0217 16:23:11.278212 4874 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7fda3013-2526-48c1-ba34-9e8d1bb33e9f-etc-swift\") pod \"swift-storage-0\" (UID: \"7fda3013-2526-48c1-ba34-9e8d1bb33e9f\") " pod="openstack/swift-storage-0" Feb 17 16:23:11 crc kubenswrapper[4874]: I0217 16:23:11.278260 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/99f3c575-721c-4e73-a4e3-e5497e1a3201-dispersionconf\") pod \"swift-ring-rebalance-vj2t6\" (UID: \"99f3c575-721c-4e73-a4e3-e5497e1a3201\") " pod="openstack/swift-ring-rebalance-vj2t6" Feb 17 16:23:11 crc kubenswrapper[4874]: I0217 16:23:11.278292 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/99f3c575-721c-4e73-a4e3-e5497e1a3201-swiftconf\") pod \"swift-ring-rebalance-vj2t6\" (UID: \"99f3c575-721c-4e73-a4e3-e5497e1a3201\") " pod="openstack/swift-ring-rebalance-vj2t6" Feb 17 16:23:11 crc kubenswrapper[4874]: I0217 16:23:11.278338 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/99f3c575-721c-4e73-a4e3-e5497e1a3201-ring-data-devices\") pod \"swift-ring-rebalance-vj2t6\" (UID: \"99f3c575-721c-4e73-a4e3-e5497e1a3201\") " pod="openstack/swift-ring-rebalance-vj2t6" Feb 17 16:23:11 crc kubenswrapper[4874]: I0217 16:23:11.278379 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f3c575-721c-4e73-a4e3-e5497e1a3201-combined-ca-bundle\") pod \"swift-ring-rebalance-vj2t6\" (UID: \"99f3c575-721c-4e73-a4e3-e5497e1a3201\") " pod="openstack/swift-ring-rebalance-vj2t6" Feb 17 16:23:11 crc kubenswrapper[4874]: I0217 16:23:11.278434 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99f3c575-721c-4e73-a4e3-e5497e1a3201-scripts\") pod \"swift-ring-rebalance-vj2t6\" (UID: \"99f3c575-721c-4e73-a4e3-e5497e1a3201\") " pod="openstack/swift-ring-rebalance-vj2t6" Feb 17 16:23:11 crc kubenswrapper[4874]: E0217 16:23:11.278792 4874 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 16:23:11 crc kubenswrapper[4874]: E0217 16:23:11.278812 4874 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 16:23:11 crc kubenswrapper[4874]: E0217 16:23:11.278850 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7fda3013-2526-48c1-ba34-9e8d1bb33e9f-etc-swift podName:7fda3013-2526-48c1-ba34-9e8d1bb33e9f nodeName:}" failed. No retries permitted until 2026-02-17 16:23:12.278835746 +0000 UTC m=+1202.573224307 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7fda3013-2526-48c1-ba34-9e8d1bb33e9f-etc-swift") pod "swift-storage-0" (UID: "7fda3013-2526-48c1-ba34-9e8d1bb33e9f") : configmap "swift-ring-files" not found Feb 17 16:23:11 crc kubenswrapper[4874]: I0217 16:23:11.379878 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/99f3c575-721c-4e73-a4e3-e5497e1a3201-dispersionconf\") pod \"swift-ring-rebalance-vj2t6\" (UID: \"99f3c575-721c-4e73-a4e3-e5497e1a3201\") " pod="openstack/swift-ring-rebalance-vj2t6" Feb 17 16:23:11 crc kubenswrapper[4874]: I0217 16:23:11.379956 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/99f3c575-721c-4e73-a4e3-e5497e1a3201-swiftconf\") pod \"swift-ring-rebalance-vj2t6\" (UID: \"99f3c575-721c-4e73-a4e3-e5497e1a3201\") " pod="openstack/swift-ring-rebalance-vj2t6" Feb 17 16:23:11 crc kubenswrapper[4874]: I0217 16:23:11.379988 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/99f3c575-721c-4e73-a4e3-e5497e1a3201-ring-data-devices\") pod \"swift-ring-rebalance-vj2t6\" (UID: \"99f3c575-721c-4e73-a4e3-e5497e1a3201\") " pod="openstack/swift-ring-rebalance-vj2t6" Feb 17 16:23:11 crc kubenswrapper[4874]: I0217 16:23:11.380022 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f3c575-721c-4e73-a4e3-e5497e1a3201-combined-ca-bundle\") pod \"swift-ring-rebalance-vj2t6\" (UID: \"99f3c575-721c-4e73-a4e3-e5497e1a3201\") " pod="openstack/swift-ring-rebalance-vj2t6" Feb 17 16:23:11 crc kubenswrapper[4874]: I0217 16:23:11.380059 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/99f3c575-721c-4e73-a4e3-e5497e1a3201-scripts\") pod \"swift-ring-rebalance-vj2t6\" (UID: \"99f3c575-721c-4e73-a4e3-e5497e1a3201\") " pod="openstack/swift-ring-rebalance-vj2t6" Feb 17 16:23:11 crc kubenswrapper[4874]: I0217 16:23:11.380200 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmww6\" (UniqueName: \"kubernetes.io/projected/99f3c575-721c-4e73-a4e3-e5497e1a3201-kube-api-access-zmww6\") pod \"swift-ring-rebalance-vj2t6\" (UID: \"99f3c575-721c-4e73-a4e3-e5497e1a3201\") " pod="openstack/swift-ring-rebalance-vj2t6" Feb 17 16:23:11 crc kubenswrapper[4874]: I0217 16:23:11.380241 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/99f3c575-721c-4e73-a4e3-e5497e1a3201-etc-swift\") pod \"swift-ring-rebalance-vj2t6\" (UID: \"99f3c575-721c-4e73-a4e3-e5497e1a3201\") " pod="openstack/swift-ring-rebalance-vj2t6" Feb 17 16:23:11 crc kubenswrapper[4874]: I0217 16:23:11.380979 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/99f3c575-721c-4e73-a4e3-e5497e1a3201-etc-swift\") pod \"swift-ring-rebalance-vj2t6\" (UID: \"99f3c575-721c-4e73-a4e3-e5497e1a3201\") " pod="openstack/swift-ring-rebalance-vj2t6" Feb 17 16:23:11 crc kubenswrapper[4874]: I0217 16:23:11.381348 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/99f3c575-721c-4e73-a4e3-e5497e1a3201-ring-data-devices\") pod \"swift-ring-rebalance-vj2t6\" (UID: \"99f3c575-721c-4e73-a4e3-e5497e1a3201\") " pod="openstack/swift-ring-rebalance-vj2t6" Feb 17 16:23:11 crc kubenswrapper[4874]: I0217 16:23:11.381390 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99f3c575-721c-4e73-a4e3-e5497e1a3201-scripts\") pod \"swift-ring-rebalance-vj2t6\" 
(UID: \"99f3c575-721c-4e73-a4e3-e5497e1a3201\") " pod="openstack/swift-ring-rebalance-vj2t6" Feb 17 16:23:11 crc kubenswrapper[4874]: I0217 16:23:11.384606 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/99f3c575-721c-4e73-a4e3-e5497e1a3201-dispersionconf\") pod \"swift-ring-rebalance-vj2t6\" (UID: \"99f3c575-721c-4e73-a4e3-e5497e1a3201\") " pod="openstack/swift-ring-rebalance-vj2t6" Feb 17 16:23:11 crc kubenswrapper[4874]: I0217 16:23:11.390743 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f3c575-721c-4e73-a4e3-e5497e1a3201-combined-ca-bundle\") pod \"swift-ring-rebalance-vj2t6\" (UID: \"99f3c575-721c-4e73-a4e3-e5497e1a3201\") " pod="openstack/swift-ring-rebalance-vj2t6" Feb 17 16:23:11 crc kubenswrapper[4874]: I0217 16:23:11.391206 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/99f3c575-721c-4e73-a4e3-e5497e1a3201-swiftconf\") pod \"swift-ring-rebalance-vj2t6\" (UID: \"99f3c575-721c-4e73-a4e3-e5497e1a3201\") " pod="openstack/swift-ring-rebalance-vj2t6" Feb 17 16:23:11 crc kubenswrapper[4874]: I0217 16:23:11.403457 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmww6\" (UniqueName: \"kubernetes.io/projected/99f3c575-721c-4e73-a4e3-e5497e1a3201-kube-api-access-zmww6\") pod \"swift-ring-rebalance-vj2t6\" (UID: \"99f3c575-721c-4e73-a4e3-e5497e1a3201\") " pod="openstack/swift-ring-rebalance-vj2t6" Feb 17 16:23:11 crc kubenswrapper[4874]: I0217 16:23:11.526787 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-vj2t6" Feb 17 16:23:12 crc kubenswrapper[4874]: I0217 16:23:12.329048 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7fda3013-2526-48c1-ba34-9e8d1bb33e9f-etc-swift\") pod \"swift-storage-0\" (UID: \"7fda3013-2526-48c1-ba34-9e8d1bb33e9f\") " pod="openstack/swift-storage-0" Feb 17 16:23:12 crc kubenswrapper[4874]: E0217 16:23:12.329282 4874 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 16:23:12 crc kubenswrapper[4874]: E0217 16:23:12.329747 4874 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 16:23:12 crc kubenswrapper[4874]: E0217 16:23:12.329815 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7fda3013-2526-48c1-ba34-9e8d1bb33e9f-etc-swift podName:7fda3013-2526-48c1-ba34-9e8d1bb33e9f nodeName:}" failed. No retries permitted until 2026-02-17 16:23:14.329794175 +0000 UTC m=+1204.624182736 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7fda3013-2526-48c1-ba34-9e8d1bb33e9f-etc-swift") pod "swift-storage-0" (UID: "7fda3013-2526-48c1-ba34-9e8d1bb33e9f") : configmap "swift-ring-files" not found Feb 17 16:23:13 crc kubenswrapper[4874]: I0217 16:23:13.448195 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-6clxd"] Feb 17 16:23:13 crc kubenswrapper[4874]: I0217 16:23:13.450262 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-6clxd" Feb 17 16:23:13 crc kubenswrapper[4874]: I0217 16:23:13.453257 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 17 16:23:13 crc kubenswrapper[4874]: I0217 16:23:13.469930 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6clxd"] Feb 17 16:23:13 crc kubenswrapper[4874]: I0217 16:23:13.555948 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7c19fd8-c880-4d9e-bd50-aa7748e85aee-operator-scripts\") pod \"root-account-create-update-6clxd\" (UID: \"b7c19fd8-c880-4d9e-bd50-aa7748e85aee\") " pod="openstack/root-account-create-update-6clxd" Feb 17 16:23:13 crc kubenswrapper[4874]: I0217 16:23:13.556170 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlkx4\" (UniqueName: \"kubernetes.io/projected/b7c19fd8-c880-4d9e-bd50-aa7748e85aee-kube-api-access-wlkx4\") pod \"root-account-create-update-6clxd\" (UID: \"b7c19fd8-c880-4d9e-bd50-aa7748e85aee\") " pod="openstack/root-account-create-update-6clxd" Feb 17 16:23:13 crc kubenswrapper[4874]: W0217 16:23:13.610241 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82e5efee_d739_4300_bc49_181df5481246.slice/crio-9b30661c715891a6c07c0b63d086f84f107295091c1dcbbf6fe8b5d2ff5a43a6 WatchSource:0}: Error finding container 9b30661c715891a6c07c0b63d086f84f107295091c1dcbbf6fe8b5d2ff5a43a6: Status 404 returned error can't find the container with id 9b30661c715891a6c07c0b63d086f84f107295091c1dcbbf6fe8b5d2ff5a43a6 Feb 17 16:23:13 crc kubenswrapper[4874]: I0217 16:23:13.657735 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlkx4\" (UniqueName: 
\"kubernetes.io/projected/b7c19fd8-c880-4d9e-bd50-aa7748e85aee-kube-api-access-wlkx4\") pod \"root-account-create-update-6clxd\" (UID: \"b7c19fd8-c880-4d9e-bd50-aa7748e85aee\") " pod="openstack/root-account-create-update-6clxd" Feb 17 16:23:13 crc kubenswrapper[4874]: I0217 16:23:13.657946 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7c19fd8-c880-4d9e-bd50-aa7748e85aee-operator-scripts\") pod \"root-account-create-update-6clxd\" (UID: \"b7c19fd8-c880-4d9e-bd50-aa7748e85aee\") " pod="openstack/root-account-create-update-6clxd" Feb 17 16:23:13 crc kubenswrapper[4874]: I0217 16:23:13.658804 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7c19fd8-c880-4d9e-bd50-aa7748e85aee-operator-scripts\") pod \"root-account-create-update-6clxd\" (UID: \"b7c19fd8-c880-4d9e-bd50-aa7748e85aee\") " pod="openstack/root-account-create-update-6clxd" Feb 17 16:23:13 crc kubenswrapper[4874]: I0217 16:23:13.679050 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlkx4\" (UniqueName: \"kubernetes.io/projected/b7c19fd8-c880-4d9e-bd50-aa7748e85aee-kube-api-access-wlkx4\") pod \"root-account-create-update-6clxd\" (UID: \"b7c19fd8-c880-4d9e-bd50-aa7748e85aee\") " pod="openstack/root-account-create-update-6clxd" Feb 17 16:23:13 crc kubenswrapper[4874]: I0217 16:23:13.774134 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-6clxd" Feb 17 16:23:14 crc kubenswrapper[4874]: I0217 16:23:14.205715 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rthxd" event={"ID":"35de0e21-b2b6-482c-a5b0-01b20b85fd46","Type":"ContainerStarted","Data":"fd0235bbb0e904153bd22abdf577fdc31c79ec75e276e346a2657834c11cdc97"} Feb 17 16:23:14 crc kubenswrapper[4874]: I0217 16:23:14.207014 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-4j7m8" event={"ID":"77ba04c2-b4a6-4ce7-b644-2c1a58c18ba3","Type":"ContainerStarted","Data":"8030cfb4d3f682da1605af8af08153f522d33d771bd16102a0d95eed8f87da79"} Feb 17 16:23:14 crc kubenswrapper[4874]: I0217 16:23:14.208531 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f5c3-account-create-update-s4xs5" event={"ID":"82e5efee-d739-4300-bc49-181df5481246","Type":"ContainerStarted","Data":"9b30661c715891a6c07c0b63d086f84f107295091c1dcbbf6fe8b5d2ff5a43a6"} Feb 17 16:23:14 crc kubenswrapper[4874]: I0217 16:23:14.210507 4874 generic.go:334] "Generic (PLEG): container finished" podID="2b9035e4-51ae-4ed9-a708-6285df982d94" containerID="e2739fc473179afe6350a31fd1025e8c367aaa111086ce1e346c86e8c71b9d8e" exitCode=0 Feb 17 16:23:14 crc kubenswrapper[4874]: I0217 16:23:14.210593 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-864zn" event={"ID":"2b9035e4-51ae-4ed9-a708-6285df982d94","Type":"ContainerDied","Data":"e2739fc473179afe6350a31fd1025e8c367aaa111086ce1e346c86e8c71b9d8e"} Feb 17 16:23:14 crc kubenswrapper[4874]: I0217 16:23:14.211748 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2582-account-create-update-h89p2" event={"ID":"16a736ae-9a4f-4803-ade8-2088a03e9b75","Type":"ContainerStarted","Data":"a7b5ea7002d498700adecf2852cf67ba7212d2f263d5593de2cca7ddf1df26a3"} Feb 17 16:23:14 crc kubenswrapper[4874]: I0217 16:23:14.376029 4874 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7fda3013-2526-48c1-ba34-9e8d1bb33e9f-etc-swift\") pod \"swift-storage-0\" (UID: \"7fda3013-2526-48c1-ba34-9e8d1bb33e9f\") " pod="openstack/swift-storage-0" Feb 17 16:23:14 crc kubenswrapper[4874]: E0217 16:23:14.376176 4874 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 16:23:14 crc kubenswrapper[4874]: E0217 16:23:14.376206 4874 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 16:23:14 crc kubenswrapper[4874]: E0217 16:23:14.376255 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7fda3013-2526-48c1-ba34-9e8d1bb33e9f-etc-swift podName:7fda3013-2526-48c1-ba34-9e8d1bb33e9f nodeName:}" failed. No retries permitted until 2026-02-17 16:23:18.376241407 +0000 UTC m=+1208.670629968 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7fda3013-2526-48c1-ba34-9e8d1bb33e9f-etc-swift") pod "swift-storage-0" (UID: "7fda3013-2526-48c1-ba34-9e8d1bb33e9f") : configmap "swift-ring-files" not found Feb 17 16:23:16 crc kubenswrapper[4874]: I0217 16:23:16.997672 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-gztng"] Feb 17 16:23:17 crc kubenswrapper[4874]: I0217 16:23:17.001329 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-gztng" Feb 17 16:23:17 crc kubenswrapper[4874]: I0217 16:23:17.012932 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-gztng"] Feb 17 16:23:17 crc kubenswrapper[4874]: I0217 16:23:17.108021 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-fcfd-account-create-update-gzpln"] Feb 17 16:23:17 crc kubenswrapper[4874]: I0217 16:23:17.109697 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-fcfd-account-create-update-gzpln" Feb 17 16:23:17 crc kubenswrapper[4874]: I0217 16:23:17.111957 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 17 16:23:17 crc kubenswrapper[4874]: I0217 16:23:17.116898 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-fcfd-account-create-update-gzpln"] Feb 17 16:23:17 crc kubenswrapper[4874]: I0217 16:23:17.139616 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw4l7\" (UniqueName: \"kubernetes.io/projected/5b0a8f96-f93d-4a9f-b191-76cfd2cab069-kube-api-access-tw4l7\") pod \"glance-db-create-gztng\" (UID: \"5b0a8f96-f93d-4a9f-b191-76cfd2cab069\") " pod="openstack/glance-db-create-gztng" Feb 17 16:23:17 crc kubenswrapper[4874]: I0217 16:23:17.139727 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b0a8f96-f93d-4a9f-b191-76cfd2cab069-operator-scripts\") pod \"glance-db-create-gztng\" (UID: \"5b0a8f96-f93d-4a9f-b191-76cfd2cab069\") " pod="openstack/glance-db-create-gztng" Feb 17 16:23:17 crc kubenswrapper[4874]: I0217 16:23:17.241985 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/39726753-57c2-4de7-91a2-c0f60e799ea9-operator-scripts\") pod \"glance-fcfd-account-create-update-gzpln\" (UID: \"39726753-57c2-4de7-91a2-c0f60e799ea9\") " pod="openstack/glance-fcfd-account-create-update-gzpln" Feb 17 16:23:17 crc kubenswrapper[4874]: I0217 16:23:17.242474 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw4l7\" (UniqueName: \"kubernetes.io/projected/5b0a8f96-f93d-4a9f-b191-76cfd2cab069-kube-api-access-tw4l7\") pod \"glance-db-create-gztng\" (UID: \"5b0a8f96-f93d-4a9f-b191-76cfd2cab069\") " pod="openstack/glance-db-create-gztng" Feb 17 16:23:17 crc kubenswrapper[4874]: I0217 16:23:17.242549 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b0a8f96-f93d-4a9f-b191-76cfd2cab069-operator-scripts\") pod \"glance-db-create-gztng\" (UID: \"5b0a8f96-f93d-4a9f-b191-76cfd2cab069\") " pod="openstack/glance-db-create-gztng" Feb 17 16:23:17 crc kubenswrapper[4874]: I0217 16:23:17.242917 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2626m\" (UniqueName: \"kubernetes.io/projected/39726753-57c2-4de7-91a2-c0f60e799ea9-kube-api-access-2626m\") pod \"glance-fcfd-account-create-update-gzpln\" (UID: \"39726753-57c2-4de7-91a2-c0f60e799ea9\") " pod="openstack/glance-fcfd-account-create-update-gzpln" Feb 17 16:23:17 crc kubenswrapper[4874]: I0217 16:23:17.244292 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b0a8f96-f93d-4a9f-b191-76cfd2cab069-operator-scripts\") pod \"glance-db-create-gztng\" (UID: \"5b0a8f96-f93d-4a9f-b191-76cfd2cab069\") " pod="openstack/glance-db-create-gztng" Feb 17 16:23:17 crc kubenswrapper[4874]: I0217 16:23:17.270254 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw4l7\" 
(UniqueName: \"kubernetes.io/projected/5b0a8f96-f93d-4a9f-b191-76cfd2cab069-kube-api-access-tw4l7\") pod \"glance-db-create-gztng\" (UID: \"5b0a8f96-f93d-4a9f-b191-76cfd2cab069\") " pod="openstack/glance-db-create-gztng" Feb 17 16:23:17 crc kubenswrapper[4874]: I0217 16:23:17.339183 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gztng" Feb 17 16:23:17 crc kubenswrapper[4874]: I0217 16:23:17.345622 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2626m\" (UniqueName: \"kubernetes.io/projected/39726753-57c2-4de7-91a2-c0f60e799ea9-kube-api-access-2626m\") pod \"glance-fcfd-account-create-update-gzpln\" (UID: \"39726753-57c2-4de7-91a2-c0f60e799ea9\") " pod="openstack/glance-fcfd-account-create-update-gzpln" Feb 17 16:23:17 crc kubenswrapper[4874]: I0217 16:23:17.345878 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39726753-57c2-4de7-91a2-c0f60e799ea9-operator-scripts\") pod \"glance-fcfd-account-create-update-gzpln\" (UID: \"39726753-57c2-4de7-91a2-c0f60e799ea9\") " pod="openstack/glance-fcfd-account-create-update-gzpln" Feb 17 16:23:17 crc kubenswrapper[4874]: I0217 16:23:17.346667 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39726753-57c2-4de7-91a2-c0f60e799ea9-operator-scripts\") pod \"glance-fcfd-account-create-update-gzpln\" (UID: \"39726753-57c2-4de7-91a2-c0f60e799ea9\") " pod="openstack/glance-fcfd-account-create-update-gzpln" Feb 17 16:23:17 crc kubenswrapper[4874]: I0217 16:23:17.367109 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2626m\" (UniqueName: \"kubernetes.io/projected/39726753-57c2-4de7-91a2-c0f60e799ea9-kube-api-access-2626m\") pod \"glance-fcfd-account-create-update-gzpln\" (UID: 
\"39726753-57c2-4de7-91a2-c0f60e799ea9\") " pod="openstack/glance-fcfd-account-create-update-gzpln" Feb 17 16:23:17 crc kubenswrapper[4874]: I0217 16:23:17.429045 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-fcfd-account-create-update-gzpln" Feb 17 16:23:17 crc kubenswrapper[4874]: I0217 16:23:17.553802 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-864zn" Feb 17 16:23:17 crc kubenswrapper[4874]: I0217 16:23:17.613417 4874 scope.go:117] "RemoveContainer" containerID="298b15eda65381c793ee46ba486cab65bde03087024d9e0c678e4897a6f7cce2" Feb 17 16:23:17 crc kubenswrapper[4874]: I0217 16:23:17.649354 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 17 16:23:17 crc kubenswrapper[4874]: I0217 16:23:17.651190 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b9035e4-51ae-4ed9-a708-6285df982d94-ovsdbserver-sb\") pod \"2b9035e4-51ae-4ed9-a708-6285df982d94\" (UID: \"2b9035e4-51ae-4ed9-a708-6285df982d94\") " Feb 17 16:23:17 crc kubenswrapper[4874]: I0217 16:23:17.651312 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b9035e4-51ae-4ed9-a708-6285df982d94-dns-svc\") pod \"2b9035e4-51ae-4ed9-a708-6285df982d94\" (UID: \"2b9035e4-51ae-4ed9-a708-6285df982d94\") " Feb 17 16:23:17 crc kubenswrapper[4874]: I0217 16:23:17.651540 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b9035e4-51ae-4ed9-a708-6285df982d94-config\") pod \"2b9035e4-51ae-4ed9-a708-6285df982d94\" (UID: \"2b9035e4-51ae-4ed9-a708-6285df982d94\") " Feb 17 16:23:17 crc kubenswrapper[4874]: I0217 16:23:17.651654 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-jhkwv\" (UniqueName: \"kubernetes.io/projected/2b9035e4-51ae-4ed9-a708-6285df982d94-kube-api-access-jhkwv\") pod \"2b9035e4-51ae-4ed9-a708-6285df982d94\" (UID: \"2b9035e4-51ae-4ed9-a708-6285df982d94\") " Feb 17 16:23:17 crc kubenswrapper[4874]: I0217 16:23:17.657127 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b9035e4-51ae-4ed9-a708-6285df982d94-kube-api-access-jhkwv" (OuterVolumeSpecName: "kube-api-access-jhkwv") pod "2b9035e4-51ae-4ed9-a708-6285df982d94" (UID: "2b9035e4-51ae-4ed9-a708-6285df982d94"). InnerVolumeSpecName "kube-api-access-jhkwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:17 crc kubenswrapper[4874]: I0217 16:23:17.704186 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b9035e4-51ae-4ed9-a708-6285df982d94-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2b9035e4-51ae-4ed9-a708-6285df982d94" (UID: "2b9035e4-51ae-4ed9-a708-6285df982d94"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:17 crc kubenswrapper[4874]: I0217 16:23:17.704363 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b9035e4-51ae-4ed9-a708-6285df982d94-config" (OuterVolumeSpecName: "config") pod "2b9035e4-51ae-4ed9-a708-6285df982d94" (UID: "2b9035e4-51ae-4ed9-a708-6285df982d94"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:17 crc kubenswrapper[4874]: I0217 16:23:17.704453 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b9035e4-51ae-4ed9-a708-6285df982d94-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2b9035e4-51ae-4ed9-a708-6285df982d94" (UID: "2b9035e4-51ae-4ed9-a708-6285df982d94"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:17 crc kubenswrapper[4874]: I0217 16:23:17.753648 4874 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b9035e4-51ae-4ed9-a708-6285df982d94-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:17 crc kubenswrapper[4874]: I0217 16:23:17.753850 4874 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b9035e4-51ae-4ed9-a708-6285df982d94-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:17 crc kubenswrapper[4874]: I0217 16:23:17.753914 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b9035e4-51ae-4ed9-a708-6285df982d94-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:17 crc kubenswrapper[4874]: I0217 16:23:17.753969 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhkwv\" (UniqueName: \"kubernetes.io/projected/2b9035e4-51ae-4ed9-a708-6285df982d94-kube-api-access-jhkwv\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:17 crc kubenswrapper[4874]: I0217 16:23:17.873120 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 17 16:23:18 crc kubenswrapper[4874]: I0217 16:23:18.254605 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-864zn" event={"ID":"2b9035e4-51ae-4ed9-a708-6285df982d94","Type":"ContainerDied","Data":"84071db6f66565ce0ecb4cdf3b7942f0eb7ab1fd1af9dc0c4ed3c1a097a0fffd"} Feb 17 16:23:18 crc kubenswrapper[4874]: I0217 16:23:18.254964 4874 scope.go:117] "RemoveContainer" containerID="e2739fc473179afe6350a31fd1025e8c367aaa111086ce1e346c86e8c71b9d8e" Feb 17 16:23:18 crc kubenswrapper[4874]: I0217 16:23:18.254630 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-864zn" Feb 17 16:23:18 crc kubenswrapper[4874]: I0217 16:23:18.308314 4874 scope.go:117] "RemoveContainer" containerID="1c6d5a9130ccdc551a2e87c92225e0674e76386ad38bd7e55dd62590207be40a" Feb 17 16:23:18 crc kubenswrapper[4874]: I0217 16:23:18.472481 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7fda3013-2526-48c1-ba34-9e8d1bb33e9f-etc-swift\") pod \"swift-storage-0\" (UID: \"7fda3013-2526-48c1-ba34-9e8d1bb33e9f\") " pod="openstack/swift-storage-0" Feb 17 16:23:18 crc kubenswrapper[4874]: E0217 16:23:18.472664 4874 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 16:23:18 crc kubenswrapper[4874]: E0217 16:23:18.473520 4874 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 16:23:18 crc kubenswrapper[4874]: E0217 16:23:18.473602 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7fda3013-2526-48c1-ba34-9e8d1bb33e9f-etc-swift podName:7fda3013-2526-48c1-ba34-9e8d1bb33e9f nodeName:}" failed. No retries permitted until 2026-02-17 16:23:26.473580423 +0000 UTC m=+1216.767968994 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7fda3013-2526-48c1-ba34-9e8d1bb33e9f-etc-swift") pod "swift-storage-0" (UID: "7fda3013-2526-48c1-ba34-9e8d1bb33e9f") : configmap "swift-ring-files" not found Feb 17 16:23:18 crc kubenswrapper[4874]: I0217 16:23:18.485358 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-864zn"] Feb 17 16:23:18 crc kubenswrapper[4874]: I0217 16:23:18.495191 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-864zn"] Feb 17 16:23:18 crc kubenswrapper[4874]: I0217 16:23:18.506055 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-vj2t6"] Feb 17 16:23:18 crc kubenswrapper[4874]: I0217 16:23:18.873049 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-q8x4r"] Feb 17 16:23:18 crc kubenswrapper[4874]: W0217 16:23:18.883698 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf707444_663f_458c_a1a2_88d51f97bc68.slice/crio-f1de3c5382a5c00695e10c67a29a7d4c44234cac83b20ac9d282f46000f75f40 WatchSource:0}: Error finding container f1de3c5382a5c00695e10c67a29a7d4c44234cac83b20ac9d282f46000f75f40: Status 404 returned error can't find the container with id f1de3c5382a5c00695e10c67a29a7d4c44234cac83b20ac9d282f46000f75f40 Feb 17 16:23:18 crc kubenswrapper[4874]: W0217 16:23:18.884329 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c54a6b1_bb00_46fc_91bf_d0c312daceb6.slice/crio-f0736dfd253dda25c71401e065a1e00eabca45b2ccb2a5fe46699a62b3f6b256 WatchSource:0}: Error finding container f0736dfd253dda25c71401e065a1e00eabca45b2ccb2a5fe46699a62b3f6b256: Status 404 returned error can't find the container with id f0736dfd253dda25c71401e065a1e00eabca45b2ccb2a5fe46699a62b3f6b256 
Feb 17 16:23:18 crc kubenswrapper[4874]: I0217 16:23:18.891598 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Feb 17 16:23:18 crc kubenswrapper[4874]: I0217 16:23:18.904602 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-1f5a-account-create-update-c9qms"] Feb 17 16:23:18 crc kubenswrapper[4874]: I0217 16:23:18.914756 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-spnx4"] Feb 17 16:23:18 crc kubenswrapper[4874]: I0217 16:23:18.926513 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6clxd"] Feb 17 16:23:18 crc kubenswrapper[4874]: I0217 16:23:18.951192 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-gztng"] Feb 17 16:23:19 crc kubenswrapper[4874]: I0217 16:23:19.011644 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-fcfd-account-create-update-gzpln"] Feb 17 16:23:19 crc kubenswrapper[4874]: W0217 16:23:19.040511 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b0a8f96_f93d_4a9f_b191_76cfd2cab069.slice/crio-8119e986fdff869aa5132fdc122c1c9cf02b5a03f7c2975e55b327291ef315d6 WatchSource:0}: Error finding container 8119e986fdff869aa5132fdc122c1c9cf02b5a03f7c2975e55b327291ef315d6: Status 404 returned error can't find the container with id 8119e986fdff869aa5132fdc122c1c9cf02b5a03f7c2975e55b327291ef315d6 Feb 17 16:23:19 crc kubenswrapper[4874]: W0217 16:23:19.175832 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39726753_57c2_4de7_91a2_c0f60e799ea9.slice/crio-2ea556569f104b6a148e15171e90bb7ca48d386147712fed9d40bd5b1cfe9ad5 WatchSource:0}: Error finding container 2ea556569f104b6a148e15171e90bb7ca48d386147712fed9d40bd5b1cfe9ad5: Status 404 
returned error can't find the container with id 2ea556569f104b6a148e15171e90bb7ca48d386147712fed9d40bd5b1cfe9ad5 Feb 17 16:23:19 crc kubenswrapper[4874]: I0217 16:23:19.275627 4874 generic.go:334] "Generic (PLEG): container finished" podID="16a736ae-9a4f-4803-ade8-2088a03e9b75" containerID="84ad71ec35dbad08e18144c57a199d66fd5d9782db30b48d4bd139abf332c2e8" exitCode=0 Feb 17 16:23:19 crc kubenswrapper[4874]: I0217 16:23:19.275694 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2582-account-create-update-h89p2" event={"ID":"16a736ae-9a4f-4803-ade8-2088a03e9b75","Type":"ContainerDied","Data":"84ad71ec35dbad08e18144c57a199d66fd5d9782db30b48d4bd139abf332c2e8"} Feb 17 16:23:19 crc kubenswrapper[4874]: I0217 16:23:19.280004 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vj2t6" event={"ID":"99f3c575-721c-4e73-a4e3-e5497e1a3201","Type":"ContainerStarted","Data":"fb5dfe0420d5b0cdcfe6476c132e69c7a6bbbf8fe429db1819298f0e19ea4841"} Feb 17 16:23:19 crc kubenswrapper[4874]: I0217 16:23:19.292334 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"48dbc25d-e454-452c-9912-f08d7569ecfa","Type":"ContainerStarted","Data":"a4f51a6cead03d842a57464e6dac1ab6605658a79cdb3dad74e191824b105b9c"} Feb 17 16:23:19 crc kubenswrapper[4874]: I0217 16:23:19.292625 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"48dbc25d-e454-452c-9912-f08d7569ecfa","Type":"ContainerStarted","Data":"bb464c196680c93d7b31e1a3887a24af11ed89bb5cd2b3152c5c577d41f26c84"} Feb 17 16:23:19 crc kubenswrapper[4874]: I0217 16:23:19.292647 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 17 16:23:19 crc kubenswrapper[4874]: I0217 16:23:19.294159 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gztng" 
event={"ID":"5b0a8f96-f93d-4a9f-b191-76cfd2cab069","Type":"ContainerStarted","Data":"8119e986fdff869aa5132fdc122c1c9cf02b5a03f7c2975e55b327291ef315d6"} Feb 17 16:23:19 crc kubenswrapper[4874]: I0217 16:23:19.300103 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6clxd" event={"ID":"b7c19fd8-c880-4d9e-bd50-aa7748e85aee","Type":"ContainerStarted","Data":"9c3ab5cffa8aba2a13870ee6adcf9a538a636f9f45ba76727dde1d1e3181bab3"} Feb 17 16:23:19 crc kubenswrapper[4874]: I0217 16:23:19.307874 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115","Type":"ContainerStarted","Data":"1cef8597cb86fe168d32c55daf5eaca9e8af13e478fafbe2f8238b4416a29571"} Feb 17 16:23:19 crc kubenswrapper[4874]: I0217 16:23:19.309729 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-spnx4" event={"ID":"3c54a6b1-bb00-46fc-91bf-d0c312daceb6","Type":"ContainerStarted","Data":"f0736dfd253dda25c71401e065a1e00eabca45b2ccb2a5fe46699a62b3f6b256"} Feb 17 16:23:19 crc kubenswrapper[4874]: I0217 16:23:19.311179 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-q8x4r" event={"ID":"7a138fbf-e69e-4981-a7f0-b399fbbb7088","Type":"ContainerStarted","Data":"f0480689af11697ef46e7bf847d20e0fb6c0b8cf3102d0d5a6689805d032fb99"} Feb 17 16:23:19 crc kubenswrapper[4874]: I0217 16:23:19.314121 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-7jmdw" event={"ID":"63c7af35-e957-4bca-ba65-13b706314f83","Type":"ContainerStarted","Data":"0b1ef60faed10e91ed98fcce4e5fa4cc56f443d0fb697e036d06c4176b109e00"} Feb 17 16:23:19 crc kubenswrapper[4874]: I0217 16:23:19.314275 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-7jmdw" Feb 17 16:23:19 crc kubenswrapper[4874]: I0217 16:23:19.323531 4874 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.202696589 podStartE2EDuration="13.323511275s" podCreationTimestamp="2026-02-17 16:23:06 +0000 UTC" firstStartedPulling="2026-02-17 16:23:07.789089422 +0000 UTC m=+1198.083477983" lastFinishedPulling="2026-02-17 16:23:17.909904108 +0000 UTC m=+1208.204292669" observedRunningTime="2026-02-17 16:23:19.322995492 +0000 UTC m=+1209.617384053" watchObservedRunningTime="2026-02-17 16:23:19.323511275 +0000 UTC m=+1209.617899826" Feb 17 16:23:19 crc kubenswrapper[4874]: I0217 16:23:19.324823 4874 generic.go:334] "Generic (PLEG): container finished" podID="35de0e21-b2b6-482c-a5b0-01b20b85fd46" containerID="5a8bfe433579d3f0d0e88fcad8e9d7a93f609a885bc31f0679bb59acd5c732f1" exitCode=0 Feb 17 16:23:19 crc kubenswrapper[4874]: I0217 16:23:19.324906 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rthxd" event={"ID":"35de0e21-b2b6-482c-a5b0-01b20b85fd46","Type":"ContainerDied","Data":"5a8bfe433579d3f0d0e88fcad8e9d7a93f609a885bc31f0679bb59acd5c732f1"} Feb 17 16:23:19 crc kubenswrapper[4874]: I0217 16:23:19.326671 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-1f5a-account-create-update-c9qms" event={"ID":"af707444-663f-458c-a1a2-88d51f97bc68","Type":"ContainerStarted","Data":"f1de3c5382a5c00695e10c67a29a7d4c44234cac83b20ac9d282f46000f75f40"} Feb 17 16:23:19 crc kubenswrapper[4874]: I0217 16:23:19.329688 4874 generic.go:334] "Generic (PLEG): container finished" podID="82e5efee-d739-4300-bc49-181df5481246" containerID="92f8768559c7b71c15ef74c94877335312ccf1bb1da6c22b7ddc22eecd222604" exitCode=0 Feb 17 16:23:19 crc kubenswrapper[4874]: I0217 16:23:19.329733 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f5c3-account-create-update-s4xs5" 
event={"ID":"82e5efee-d739-4300-bc49-181df5481246","Type":"ContainerDied","Data":"92f8768559c7b71c15ef74c94877335312ccf1bb1da6c22b7ddc22eecd222604"} Feb 17 16:23:19 crc kubenswrapper[4874]: I0217 16:23:19.333368 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-fcfd-account-create-update-gzpln" event={"ID":"39726753-57c2-4de7-91a2-c0f60e799ea9","Type":"ContainerStarted","Data":"2ea556569f104b6a148e15171e90bb7ca48d386147712fed9d40bd5b1cfe9ad5"} Feb 17 16:23:19 crc kubenswrapper[4874]: I0217 16:23:19.335262 4874 generic.go:334] "Generic (PLEG): container finished" podID="77ba04c2-b4a6-4ce7-b644-2c1a58c18ba3" containerID="f80e72c1edae306c4e2bab265d3dc1e5d36967b7a7e5dfbff444f82f8e2e532d" exitCode=0 Feb 17 16:23:19 crc kubenswrapper[4874]: I0217 16:23:19.335284 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-4j7m8" event={"ID":"77ba04c2-b4a6-4ce7-b644-2c1a58c18ba3","Type":"ContainerDied","Data":"f80e72c1edae306c4e2bab265d3dc1e5d36967b7a7e5dfbff444f82f8e2e532d"} Feb 17 16:23:19 crc kubenswrapper[4874]: I0217 16:23:19.353935 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-7jmdw" podStartSLOduration=13.353915478 podStartE2EDuration="13.353915478s" podCreationTimestamp="2026-02-17 16:23:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:23:19.343590232 +0000 UTC m=+1209.637978793" watchObservedRunningTime="2026-02-17 16:23:19.353915478 +0000 UTC m=+1209.648304039" Feb 17 16:23:20 crc kubenswrapper[4874]: I0217 16:23:20.348949 4874 generic.go:334] "Generic (PLEG): container finished" podID="7a138fbf-e69e-4981-a7f0-b399fbbb7088" containerID="907adfd242e9bbfd980d49bc6f8323b6b804ab738b49cb86f0c0b7d937b107b2" exitCode=0 Feb 17 16:23:20 crc kubenswrapper[4874]: I0217 16:23:20.349052 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/mysqld-exporter-openstack-db-create-q8x4r" event={"ID":"7a138fbf-e69e-4981-a7f0-b399fbbb7088","Type":"ContainerDied","Data":"907adfd242e9bbfd980d49bc6f8323b6b804ab738b49cb86f0c0b7d937b107b2"} Feb 17 16:23:20 crc kubenswrapper[4874]: I0217 16:23:20.352489 4874 generic.go:334] "Generic (PLEG): container finished" podID="5b0a8f96-f93d-4a9f-b191-76cfd2cab069" containerID="586ac36c6fecdd78309669feea2a9977e5bd8b2545742b335015b05fd55c2743" exitCode=0 Feb 17 16:23:20 crc kubenswrapper[4874]: I0217 16:23:20.352560 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gztng" event={"ID":"5b0a8f96-f93d-4a9f-b191-76cfd2cab069","Type":"ContainerDied","Data":"586ac36c6fecdd78309669feea2a9977e5bd8b2545742b335015b05fd55c2743"} Feb 17 16:23:20 crc kubenswrapper[4874]: I0217 16:23:20.354623 4874 generic.go:334] "Generic (PLEG): container finished" podID="b7c19fd8-c880-4d9e-bd50-aa7748e85aee" containerID="774c4325997a9849188504081d763f3e4caee3b24245b2ffa8f4bd92b197c5ff" exitCode=0 Feb 17 16:23:20 crc kubenswrapper[4874]: I0217 16:23:20.354685 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6clxd" event={"ID":"b7c19fd8-c880-4d9e-bd50-aa7748e85aee","Type":"ContainerDied","Data":"774c4325997a9849188504081d763f3e4caee3b24245b2ffa8f4bd92b197c5ff"} Feb 17 16:23:20 crc kubenswrapper[4874]: I0217 16:23:20.357867 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-fcfd-account-create-update-gzpln" event={"ID":"39726753-57c2-4de7-91a2-c0f60e799ea9","Type":"ContainerStarted","Data":"df7ebb90d0e00ce7adbcaebfe1d386698aca1cd2fa452d572c0f8e3a98afc8b9"} Feb 17 16:23:20 crc kubenswrapper[4874]: I0217 16:23:20.366422 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-1f5a-account-create-update-c9qms" 
event={"ID":"af707444-663f-458c-a1a2-88d51f97bc68","Type":"ContainerStarted","Data":"1e5bcd2d33916dc7d516910e47eff9c4ab0178227686a16e0ba8ec88827f5fbc"} Feb 17 16:23:20 crc kubenswrapper[4874]: I0217 16:23:20.370401 4874 generic.go:334] "Generic (PLEG): container finished" podID="3c54a6b1-bb00-46fc-91bf-d0c312daceb6" containerID="bbdebe81d80dc65707b1e8398fb957fd6acb87e535a12101f6283d6d4013bd1c" exitCode=0 Feb 17 16:23:20 crc kubenswrapper[4874]: I0217 16:23:20.370463 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-spnx4" event={"ID":"3c54a6b1-bb00-46fc-91bf-d0c312daceb6","Type":"ContainerDied","Data":"bbdebe81d80dc65707b1e8398fb957fd6acb87e535a12101f6283d6d4013bd1c"} Feb 17 16:23:20 crc kubenswrapper[4874]: I0217 16:23:20.413886 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-fcfd-account-create-update-gzpln" podStartSLOduration=3.4138652179999998 podStartE2EDuration="3.413865218s" podCreationTimestamp="2026-02-17 16:23:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:23:20.401779768 +0000 UTC m=+1210.696168349" watchObservedRunningTime="2026-02-17 16:23:20.413865218 +0000 UTC m=+1210.708253789" Feb 17 16:23:20 crc kubenswrapper[4874]: I0217 16:23:20.440472 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-1f5a-account-create-update-c9qms" podStartSLOduration=11.440452686 podStartE2EDuration="11.440452686s" podCreationTimestamp="2026-02-17 16:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:23:20.434695663 +0000 UTC m=+1210.729084224" watchObservedRunningTime="2026-02-17 16:23:20.440452686 +0000 UTC m=+1210.734841247" Feb 17 16:23:20 crc kubenswrapper[4874]: I0217 16:23:20.489942 4874 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="2b9035e4-51ae-4ed9-a708-6285df982d94" path="/var/lib/kubelet/pods/2b9035e4-51ae-4ed9-a708-6285df982d94/volumes" Feb 17 16:23:21 crc kubenswrapper[4874]: I0217 16:23:21.380543 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rthxd" event={"ID":"35de0e21-b2b6-482c-a5b0-01b20b85fd46","Type":"ContainerDied","Data":"fd0235bbb0e904153bd22abdf577fdc31c79ec75e276e346a2657834c11cdc97"} Feb 17 16:23:21 crc kubenswrapper[4874]: I0217 16:23:21.380590 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd0235bbb0e904153bd22abdf577fdc31c79ec75e276e346a2657834c11cdc97" Feb 17 16:23:21 crc kubenswrapper[4874]: I0217 16:23:21.383742 4874 generic.go:334] "Generic (PLEG): container finished" podID="af707444-663f-458c-a1a2-88d51f97bc68" containerID="1e5bcd2d33916dc7d516910e47eff9c4ab0178227686a16e0ba8ec88827f5fbc" exitCode=0 Feb 17 16:23:21 crc kubenswrapper[4874]: I0217 16:23:21.383764 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-1f5a-account-create-update-c9qms" event={"ID":"af707444-663f-458c-a1a2-88d51f97bc68","Type":"ContainerDied","Data":"1e5bcd2d33916dc7d516910e47eff9c4ab0178227686a16e0ba8ec88827f5fbc"} Feb 17 16:23:21 crc kubenswrapper[4874]: I0217 16:23:21.388926 4874 generic.go:334] "Generic (PLEG): container finished" podID="39726753-57c2-4de7-91a2-c0f60e799ea9" containerID="df7ebb90d0e00ce7adbcaebfe1d386698aca1cd2fa452d572c0f8e3a98afc8b9" exitCode=0 Feb 17 16:23:21 crc kubenswrapper[4874]: I0217 16:23:21.389315 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-fcfd-account-create-update-gzpln" event={"ID":"39726753-57c2-4de7-91a2-c0f60e799ea9","Type":"ContainerDied","Data":"df7ebb90d0e00ce7adbcaebfe1d386698aca1cd2fa452d572c0f8e3a98afc8b9"} Feb 17 16:23:21 crc kubenswrapper[4874]: I0217 16:23:21.454302 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-rthxd" Feb 17 16:23:21 crc kubenswrapper[4874]: I0217 16:23:21.555414 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mk54g\" (UniqueName: \"kubernetes.io/projected/35de0e21-b2b6-482c-a5b0-01b20b85fd46-kube-api-access-mk54g\") pod \"35de0e21-b2b6-482c-a5b0-01b20b85fd46\" (UID: \"35de0e21-b2b6-482c-a5b0-01b20b85fd46\") " Feb 17 16:23:21 crc kubenswrapper[4874]: I0217 16:23:21.555690 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35de0e21-b2b6-482c-a5b0-01b20b85fd46-operator-scripts\") pod \"35de0e21-b2b6-482c-a5b0-01b20b85fd46\" (UID: \"35de0e21-b2b6-482c-a5b0-01b20b85fd46\") " Feb 17 16:23:21 crc kubenswrapper[4874]: I0217 16:23:21.556343 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35de0e21-b2b6-482c-a5b0-01b20b85fd46-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "35de0e21-b2b6-482c-a5b0-01b20b85fd46" (UID: "35de0e21-b2b6-482c-a5b0-01b20b85fd46"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:21 crc kubenswrapper[4874]: I0217 16:23:21.557452 4874 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35de0e21-b2b6-482c-a5b0-01b20b85fd46-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:21 crc kubenswrapper[4874]: I0217 16:23:21.563564 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35de0e21-b2b6-482c-a5b0-01b20b85fd46-kube-api-access-mk54g" (OuterVolumeSpecName: "kube-api-access-mk54g") pod "35de0e21-b2b6-482c-a5b0-01b20b85fd46" (UID: "35de0e21-b2b6-482c-a5b0-01b20b85fd46"). InnerVolumeSpecName "kube-api-access-mk54g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:21 crc kubenswrapper[4874]: I0217 16:23:21.631545 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6bc7876d45-864zn" podUID="2b9035e4-51ae-4ed9-a708-6285df982d94" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.144:5353: i/o timeout" Feb 17 16:23:21 crc kubenswrapper[4874]: I0217 16:23:21.659809 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mk54g\" (UniqueName: \"kubernetes.io/projected/35de0e21-b2b6-482c-a5b0-01b20b85fd46-kube-api-access-mk54g\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:22 crc kubenswrapper[4874]: I0217 16:23:22.405679 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-rthxd" Feb 17 16:23:22 crc kubenswrapper[4874]: I0217 16:23:22.406743 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115","Type":"ContainerStarted","Data":"c430359f571e0ad45b51112560ba29c1aad849d4b123cba6c2318ce17f34d5a0"} Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.286000 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-1f5a-account-create-update-c9qms" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.325112 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af707444-663f-458c-a1a2-88d51f97bc68-operator-scripts\") pod \"af707444-663f-458c-a1a2-88d51f97bc68\" (UID: \"af707444-663f-458c-a1a2-88d51f97bc68\") " Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.325707 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnzrh\" (UniqueName: \"kubernetes.io/projected/af707444-663f-458c-a1a2-88d51f97bc68-kube-api-access-tnzrh\") pod \"af707444-663f-458c-a1a2-88d51f97bc68\" (UID: \"af707444-663f-458c-a1a2-88d51f97bc68\") " Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.326938 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af707444-663f-458c-a1a2-88d51f97bc68-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "af707444-663f-458c-a1a2-88d51f97bc68" (UID: "af707444-663f-458c-a1a2-88d51f97bc68"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.332852 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af707444-663f-458c-a1a2-88d51f97bc68-kube-api-access-tnzrh" (OuterVolumeSpecName: "kube-api-access-tnzrh") pod "af707444-663f-458c-a1a2-88d51f97bc68" (UID: "af707444-663f-458c-a1a2-88d51f97bc68"). InnerVolumeSpecName "kube-api-access-tnzrh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.418910 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-fcfd-account-create-update-gzpln" event={"ID":"39726753-57c2-4de7-91a2-c0f60e799ea9","Type":"ContainerDied","Data":"2ea556569f104b6a148e15171e90bb7ca48d386147712fed9d40bd5b1cfe9ad5"} Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.418960 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ea556569f104b6a148e15171e90bb7ca48d386147712fed9d40bd5b1cfe9ad5" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.421553 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-1f5a-account-create-update-c9qms" event={"ID":"af707444-663f-458c-a1a2-88d51f97bc68","Type":"ContainerDied","Data":"f1de3c5382a5c00695e10c67a29a7d4c44234cac83b20ac9d282f46000f75f40"} Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.421591 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1de3c5382a5c00695e10c67a29a7d4c44234cac83b20ac9d282f46000f75f40" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.421662 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-1f5a-account-create-update-c9qms" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.424421 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-4j7m8" event={"ID":"77ba04c2-b4a6-4ce7-b644-2c1a58c18ba3","Type":"ContainerDied","Data":"8030cfb4d3f682da1605af8af08153f522d33d771bd16102a0d95eed8f87da79"} Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.424459 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8030cfb4d3f682da1605af8af08153f522d33d771bd16102a0d95eed8f87da79" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.428557 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnzrh\" (UniqueName: \"kubernetes.io/projected/af707444-663f-458c-a1a2-88d51f97bc68-kube-api-access-tnzrh\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.428584 4874 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af707444-663f-458c-a1a2-88d51f97bc68-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.432028 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-q8x4r" event={"ID":"7a138fbf-e69e-4981-a7f0-b399fbbb7088","Type":"ContainerDied","Data":"f0480689af11697ef46e7bf847d20e0fb6c0b8cf3102d0d5a6689805d032fb99"} Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.432089 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0480689af11697ef46e7bf847d20e0fb6c0b8cf3102d0d5a6689805d032fb99" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.438191 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f5c3-account-create-update-s4xs5" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.440337 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f5c3-account-create-update-s4xs5" event={"ID":"82e5efee-d739-4300-bc49-181df5481246","Type":"ContainerDied","Data":"9b30661c715891a6c07c0b63d086f84f107295091c1dcbbf6fe8b5d2ff5a43a6"} Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.440426 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b30661c715891a6c07c0b63d086f84f107295091c1dcbbf6fe8b5d2ff5a43a6" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.443264 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gztng" event={"ID":"5b0a8f96-f93d-4a9f-b191-76cfd2cab069","Type":"ContainerDied","Data":"8119e986fdff869aa5132fdc122c1c9cf02b5a03f7c2975e55b327291ef315d6"} Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.443310 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8119e986fdff869aa5132fdc122c1c9cf02b5a03f7c2975e55b327291ef315d6" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.448130 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6clxd" event={"ID":"b7c19fd8-c880-4d9e-bd50-aa7748e85aee","Type":"ContainerDied","Data":"9c3ab5cffa8aba2a13870ee6adcf9a538a636f9f45ba76727dde1d1e3181bab3"} Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.448179 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c3ab5cffa8aba2a13870ee6adcf9a538a636f9f45ba76727dde1d1e3181bab3" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.449744 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-gztng" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.450460 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2582-account-create-update-h89p2" event={"ID":"16a736ae-9a4f-4803-ade8-2088a03e9b75","Type":"ContainerDied","Data":"a7b5ea7002d498700adecf2852cf67ba7212d2f263d5593de2cca7ddf1df26a3"} Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.450572 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7b5ea7002d498700adecf2852cf67ba7212d2f263d5593de2cca7ddf1df26a3" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.472865 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2582-account-create-update-h89p2" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.479536 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6clxd" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.504056 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-q8x4r" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.507365 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-fcfd-account-create-update-gzpln" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.507802 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-4j7m8" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.530458 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlkx4\" (UniqueName: \"kubernetes.io/projected/b7c19fd8-c880-4d9e-bd50-aa7748e85aee-kube-api-access-wlkx4\") pod \"b7c19fd8-c880-4d9e-bd50-aa7748e85aee\" (UID: \"b7c19fd8-c880-4d9e-bd50-aa7748e85aee\") " Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.530531 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77ba04c2-b4a6-4ce7-b644-2c1a58c18ba3-operator-scripts\") pod \"77ba04c2-b4a6-4ce7-b644-2c1a58c18ba3\" (UID: \"77ba04c2-b4a6-4ce7-b644-2c1a58c18ba3\") " Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.530600 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfdbn\" (UniqueName: \"kubernetes.io/projected/77ba04c2-b4a6-4ce7-b644-2c1a58c18ba3-kube-api-access-dfdbn\") pod \"77ba04c2-b4a6-4ce7-b644-2c1a58c18ba3\" (UID: \"77ba04c2-b4a6-4ce7-b644-2c1a58c18ba3\") " Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.530624 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16a736ae-9a4f-4803-ade8-2088a03e9b75-operator-scripts\") pod \"16a736ae-9a4f-4803-ade8-2088a03e9b75\" (UID: \"16a736ae-9a4f-4803-ade8-2088a03e9b75\") " Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.530657 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82e5efee-d739-4300-bc49-181df5481246-operator-scripts\") pod \"82e5efee-d739-4300-bc49-181df5481246\" (UID: \"82e5efee-d739-4300-bc49-181df5481246\") " Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.530681 4874 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39726753-57c2-4de7-91a2-c0f60e799ea9-operator-scripts\") pod \"39726753-57c2-4de7-91a2-c0f60e799ea9\" (UID: \"39726753-57c2-4de7-91a2-c0f60e799ea9\") " Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.530719 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vnpt\" (UniqueName: \"kubernetes.io/projected/7a138fbf-e69e-4981-a7f0-b399fbbb7088-kube-api-access-9vnpt\") pod \"7a138fbf-e69e-4981-a7f0-b399fbbb7088\" (UID: \"7a138fbf-e69e-4981-a7f0-b399fbbb7088\") " Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.530741 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a138fbf-e69e-4981-a7f0-b399fbbb7088-operator-scripts\") pod \"7a138fbf-e69e-4981-a7f0-b399fbbb7088\" (UID: \"7a138fbf-e69e-4981-a7f0-b399fbbb7088\") " Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.530759 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2626m\" (UniqueName: \"kubernetes.io/projected/39726753-57c2-4de7-91a2-c0f60e799ea9-kube-api-access-2626m\") pod \"39726753-57c2-4de7-91a2-c0f60e799ea9\" (UID: \"39726753-57c2-4de7-91a2-c0f60e799ea9\") " Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.530785 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b0a8f96-f93d-4a9f-b191-76cfd2cab069-operator-scripts\") pod \"5b0a8f96-f93d-4a9f-b191-76cfd2cab069\" (UID: \"5b0a8f96-f93d-4a9f-b191-76cfd2cab069\") " Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.530825 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tw4l7\" (UniqueName: \"kubernetes.io/projected/5b0a8f96-f93d-4a9f-b191-76cfd2cab069-kube-api-access-tw4l7\") pod 
\"5b0a8f96-f93d-4a9f-b191-76cfd2cab069\" (UID: \"5b0a8f96-f93d-4a9f-b191-76cfd2cab069\") " Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.530853 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7c19fd8-c880-4d9e-bd50-aa7748e85aee-operator-scripts\") pod \"b7c19fd8-c880-4d9e-bd50-aa7748e85aee\" (UID: \"b7c19fd8-c880-4d9e-bd50-aa7748e85aee\") " Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.530872 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpnhd\" (UniqueName: \"kubernetes.io/projected/82e5efee-d739-4300-bc49-181df5481246-kube-api-access-rpnhd\") pod \"82e5efee-d739-4300-bc49-181df5481246\" (UID: \"82e5efee-d739-4300-bc49-181df5481246\") " Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.530951 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jthl\" (UniqueName: \"kubernetes.io/projected/16a736ae-9a4f-4803-ade8-2088a03e9b75-kube-api-access-8jthl\") pod \"16a736ae-9a4f-4803-ade8-2088a03e9b75\" (UID: \"16a736ae-9a4f-4803-ade8-2088a03e9b75\") " Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.532022 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a138fbf-e69e-4981-a7f0-b399fbbb7088-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7a138fbf-e69e-4981-a7f0-b399fbbb7088" (UID: "7a138fbf-e69e-4981-a7f0-b399fbbb7088"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.532957 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b0a8f96-f93d-4a9f-b191-76cfd2cab069-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5b0a8f96-f93d-4a9f-b191-76cfd2cab069" (UID: "5b0a8f96-f93d-4a9f-b191-76cfd2cab069"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.534961 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7c19fd8-c880-4d9e-bd50-aa7748e85aee-kube-api-access-wlkx4" (OuterVolumeSpecName: "kube-api-access-wlkx4") pod "b7c19fd8-c880-4d9e-bd50-aa7748e85aee" (UID: "b7c19fd8-c880-4d9e-bd50-aa7748e85aee"). InnerVolumeSpecName "kube-api-access-wlkx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.535329 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7c19fd8-c880-4d9e-bd50-aa7748e85aee-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b7c19fd8-c880-4d9e-bd50-aa7748e85aee" (UID: "b7c19fd8-c880-4d9e-bd50-aa7748e85aee"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.536274 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82e5efee-d739-4300-bc49-181df5481246-kube-api-access-rpnhd" (OuterVolumeSpecName: "kube-api-access-rpnhd") pod "82e5efee-d739-4300-bc49-181df5481246" (UID: "82e5efee-d739-4300-bc49-181df5481246"). InnerVolumeSpecName "kube-api-access-rpnhd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.536595 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82e5efee-d739-4300-bc49-181df5481246-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "82e5efee-d739-4300-bc49-181df5481246" (UID: "82e5efee-d739-4300-bc49-181df5481246"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.537244 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16a736ae-9a4f-4803-ade8-2088a03e9b75-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "16a736ae-9a4f-4803-ade8-2088a03e9b75" (UID: "16a736ae-9a4f-4803-ade8-2088a03e9b75"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.537302 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b0a8f96-f93d-4a9f-b191-76cfd2cab069-kube-api-access-tw4l7" (OuterVolumeSpecName: "kube-api-access-tw4l7") pod "5b0a8f96-f93d-4a9f-b191-76cfd2cab069" (UID: "5b0a8f96-f93d-4a9f-b191-76cfd2cab069"). InnerVolumeSpecName "kube-api-access-tw4l7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.537378 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39726753-57c2-4de7-91a2-c0f60e799ea9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "39726753-57c2-4de7-91a2-c0f60e799ea9" (UID: "39726753-57c2-4de7-91a2-c0f60e799ea9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.537891 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16a736ae-9a4f-4803-ade8-2088a03e9b75-kube-api-access-8jthl" (OuterVolumeSpecName: "kube-api-access-8jthl") pod "16a736ae-9a4f-4803-ade8-2088a03e9b75" (UID: "16a736ae-9a4f-4803-ade8-2088a03e9b75"). InnerVolumeSpecName "kube-api-access-8jthl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.538069 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39726753-57c2-4de7-91a2-c0f60e799ea9-kube-api-access-2626m" (OuterVolumeSpecName: "kube-api-access-2626m") pod "39726753-57c2-4de7-91a2-c0f60e799ea9" (UID: "39726753-57c2-4de7-91a2-c0f60e799ea9"). InnerVolumeSpecName "kube-api-access-2626m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.539835 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77ba04c2-b4a6-4ce7-b644-2c1a58c18ba3-kube-api-access-dfdbn" (OuterVolumeSpecName: "kube-api-access-dfdbn") pod "77ba04c2-b4a6-4ce7-b644-2c1a58c18ba3" (UID: "77ba04c2-b4a6-4ce7-b644-2c1a58c18ba3"). InnerVolumeSpecName "kube-api-access-dfdbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.540865 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77ba04c2-b4a6-4ce7-b644-2c1a58c18ba3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "77ba04c2-b4a6-4ce7-b644-2c1a58c18ba3" (UID: "77ba04c2-b4a6-4ce7-b644-2c1a58c18ba3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.554622 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a138fbf-e69e-4981-a7f0-b399fbbb7088-kube-api-access-9vnpt" (OuterVolumeSpecName: "kube-api-access-9vnpt") pod "7a138fbf-e69e-4981-a7f0-b399fbbb7088" (UID: "7a138fbf-e69e-4981-a7f0-b399fbbb7088"). InnerVolumeSpecName "kube-api-access-9vnpt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.638899 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlkx4\" (UniqueName: \"kubernetes.io/projected/b7c19fd8-c880-4d9e-bd50-aa7748e85aee-kube-api-access-wlkx4\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.638939 4874 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77ba04c2-b4a6-4ce7-b644-2c1a58c18ba3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.638951 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfdbn\" (UniqueName: \"kubernetes.io/projected/77ba04c2-b4a6-4ce7-b644-2c1a58c18ba3-kube-api-access-dfdbn\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.638962 4874 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16a736ae-9a4f-4803-ade8-2088a03e9b75-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.638975 4874 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82e5efee-d739-4300-bc49-181df5481246-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.638987 4874 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39726753-57c2-4de7-91a2-c0f60e799ea9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.638999 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vnpt\" (UniqueName: \"kubernetes.io/projected/7a138fbf-e69e-4981-a7f0-b399fbbb7088-kube-api-access-9vnpt\") on node \"crc\" DevicePath \"\"" Feb 17 
16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.639011 4874 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a138fbf-e69e-4981-a7f0-b399fbbb7088-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.639022 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2626m\" (UniqueName: \"kubernetes.io/projected/39726753-57c2-4de7-91a2-c0f60e799ea9-kube-api-access-2626m\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.639034 4874 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b0a8f96-f93d-4a9f-b191-76cfd2cab069-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.639045 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tw4l7\" (UniqueName: \"kubernetes.io/projected/5b0a8f96-f93d-4a9f-b191-76cfd2cab069-kube-api-access-tw4l7\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.639056 4874 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7c19fd8-c880-4d9e-bd50-aa7748e85aee-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.639067 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpnhd\" (UniqueName: \"kubernetes.io/projected/82e5efee-d739-4300-bc49-181df5481246-kube-api-access-rpnhd\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:23 crc kubenswrapper[4874]: I0217 16:23:23.639091 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jthl\" (UniqueName: \"kubernetes.io/projected/16a736ae-9a4f-4803-ade8-2088a03e9b75-kube-api-access-8jthl\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:24 crc kubenswrapper[4874]: I0217 16:23:24.466137 
4874 generic.go:334] "Generic (PLEG): container finished" podID="37707c24-e133-484d-955f-57a20ec147b1" containerID="1eb6dabf17b342d2327164ae121cc80c313bb12e86bc551602ad09c3ceea3b65" exitCode=0 Feb 17 16:23:24 crc kubenswrapper[4874]: I0217 16:23:24.477466 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6clxd" Feb 17 16:23:24 crc kubenswrapper[4874]: I0217 16:23:24.477522 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-fcfd-account-create-update-gzpln" Feb 17 16:23:24 crc kubenswrapper[4874]: I0217 16:23:24.477579 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f5c3-account-create-update-s4xs5" Feb 17 16:23:24 crc kubenswrapper[4874]: I0217 16:23:24.477593 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gztng" Feb 17 16:23:24 crc kubenswrapper[4874]: I0217 16:23:24.477637 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-4j7m8" Feb 17 16:23:24 crc kubenswrapper[4874]: I0217 16:23:24.477650 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-q8x4r" Feb 17 16:23:24 crc kubenswrapper[4874]: I0217 16:23:24.477689 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-2582-account-create-update-h89p2" Feb 17 16:23:24 crc kubenswrapper[4874]: I0217 16:23:24.494308 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-spnx4" event={"ID":"3c54a6b1-bb00-46fc-91bf-d0c312daceb6","Type":"ContainerStarted","Data":"c07f0c8d037cc6f7f2614b10e937e19f4fd88860801c82fa7eeefb0f40841360"} Feb 17 16:23:24 crc kubenswrapper[4874]: I0217 16:23:24.494352 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"37707c24-e133-484d-955f-57a20ec147b1","Type":"ContainerDied","Data":"1eb6dabf17b342d2327164ae121cc80c313bb12e86bc551602ad09c3ceea3b65"} Feb 17 16:23:24 crc kubenswrapper[4874]: I0217 16:23:24.494368 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vj2t6" event={"ID":"99f3c575-721c-4e73-a4e3-e5497e1a3201","Type":"ContainerStarted","Data":"cb9f33794210e4d6a2dc8df37db35079c7c90d6157ca1179d2efa9ff8fe2369f"} Feb 17 16:23:24 crc kubenswrapper[4874]: I0217 16:23:24.525406 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-spnx4" podStartSLOduration=15.525385485 podStartE2EDuration="15.525385485s" podCreationTimestamp="2026-02-17 16:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:23:24.489825215 +0000 UTC m=+1214.784213786" watchObservedRunningTime="2026-02-17 16:23:24.525385485 +0000 UTC m=+1214.819774046" Feb 17 16:23:24 crc kubenswrapper[4874]: I0217 16:23:24.559118 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-vj2t6" podStartSLOduration=8.965134578 podStartE2EDuration="13.559098059s" podCreationTimestamp="2026-02-17 16:23:11 +0000 UTC" firstStartedPulling="2026-02-17 16:23:18.541861764 +0000 UTC m=+1208.836250325" lastFinishedPulling="2026-02-17 
16:23:23.135825235 +0000 UTC m=+1213.430213806" observedRunningTime="2026-02-17 16:23:24.51913228 +0000 UTC m=+1214.813520841" watchObservedRunningTime="2026-02-17 16:23:24.559098059 +0000 UTC m=+1214.853486640" Feb 17 16:23:24 crc kubenswrapper[4874]: I0217 16:23:24.793333 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-6clxd"] Feb 17 16:23:24 crc kubenswrapper[4874]: I0217 16:23:24.804404 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-6clxd"] Feb 17 16:23:24 crc kubenswrapper[4874]: I0217 16:23:24.900858 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-spnx4" Feb 17 16:23:26 crc kubenswrapper[4874]: I0217 16:23:26.095367 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5c65ff7679-2cmfs" podUID="d7336f40-57d5-4171-98ad-aeee272451ae" containerName="console" containerID="cri-o://9eb33c0d9d7a1d5e1c496608f5c31d21e83eb99d5eeefbfd6cf3bbe554232b6d" gracePeriod=15 Feb 17 16:23:26 crc kubenswrapper[4874]: I0217 16:23:26.480778 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7c19fd8-c880-4d9e-bd50-aa7748e85aee" path="/var/lib/kubelet/pods/b7c19fd8-c880-4d9e-bd50-aa7748e85aee/volumes" Feb 17 16:23:26 crc kubenswrapper[4874]: I0217 16:23:26.510243 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7fda3013-2526-48c1-ba34-9e8d1bb33e9f-etc-swift\") pod \"swift-storage-0\" (UID: \"7fda3013-2526-48c1-ba34-9e8d1bb33e9f\") " pod="openstack/swift-storage-0" Feb 17 16:23:26 crc kubenswrapper[4874]: E0217 16:23:26.510666 4874 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 16:23:26 crc kubenswrapper[4874]: E0217 16:23:26.510686 4874 projected.go:194] Error preparing data for projected volume etc-swift 
for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 16:23:26 crc kubenswrapper[4874]: E0217 16:23:26.510742 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7fda3013-2526-48c1-ba34-9e8d1bb33e9f-etc-swift podName:7fda3013-2526-48c1-ba34-9e8d1bb33e9f nodeName:}" failed. No retries permitted until 2026-02-17 16:23:42.510725666 +0000 UTC m=+1232.805114237 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7fda3013-2526-48c1-ba34-9e8d1bb33e9f-etc-swift") pod "swift-storage-0" (UID: "7fda3013-2526-48c1-ba34-9e8d1bb33e9f") : configmap "swift-ring-files" not found Feb 17 16:23:26 crc kubenswrapper[4874]: I0217 16:23:26.513870 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115","Type":"ContainerStarted","Data":"aff618b7665199355e76c6ef7af2ae0dc1943e87544fee3f46a96c1356c5b919"} Feb 17 16:23:26 crc kubenswrapper[4874]: I0217 16:23:26.534251 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"37707c24-e133-484d-955f-57a20ec147b1","Type":"ContainerStarted","Data":"5b477107987341f58781f2906066cdd417998032a437d5fd1b340bbada577f70"} Feb 17 16:23:26 crc kubenswrapper[4874]: I0217 16:23:26.534657 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Feb 17 16:23:26 crc kubenswrapper[4874]: I0217 16:23:26.538455 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5c65ff7679-2cmfs_d7336f40-57d5-4171-98ad-aeee272451ae/console/0.log" Feb 17 16:23:26 crc kubenswrapper[4874]: I0217 16:23:26.538502 4874 generic.go:334] "Generic (PLEG): container finished" podID="d7336f40-57d5-4171-98ad-aeee272451ae" containerID="9eb33c0d9d7a1d5e1c496608f5c31d21e83eb99d5eeefbfd6cf3bbe554232b6d" exitCode=2 Feb 17 16:23:26 crc 
kubenswrapper[4874]: I0217 16:23:26.539054 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5c65ff7679-2cmfs" event={"ID":"d7336f40-57d5-4171-98ad-aeee272451ae","Type":"ContainerDied","Data":"9eb33c0d9d7a1d5e1c496608f5c31d21e83eb99d5eeefbfd6cf3bbe554232b6d"} Feb 17 16:23:26 crc kubenswrapper[4874]: I0217 16:23:26.550130 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=23.475225895 podStartE2EDuration="57.550106881s" podCreationTimestamp="2026-02-17 16:22:29 +0000 UTC" firstStartedPulling="2026-02-17 16:22:51.33637751 +0000 UTC m=+1181.630766071" lastFinishedPulling="2026-02-17 16:23:25.411258476 +0000 UTC m=+1215.705647057" observedRunningTime="2026-02-17 16:23:26.534836193 +0000 UTC m=+1216.829224764" watchObservedRunningTime="2026-02-17 16:23:26.550106881 +0000 UTC m=+1216.844495462" Feb 17 16:23:26 crc kubenswrapper[4874]: I0217 16:23:26.575544 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=38.563006799 podStartE2EDuration="1m4.57552692s" podCreationTimestamp="2026-02-17 16:22:22 +0000 UTC" firstStartedPulling="2026-02-17 16:22:24.508027512 +0000 UTC m=+1154.802416063" lastFinishedPulling="2026-02-17 16:22:50.520547573 +0000 UTC m=+1180.814936184" observedRunningTime="2026-02-17 16:23:26.570739551 +0000 UTC m=+1216.865128122" watchObservedRunningTime="2026-02-17 16:23:26.57552692 +0000 UTC m=+1216.869915481" Feb 17 16:23:26 crc kubenswrapper[4874]: I0217 16:23:26.665675 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5c65ff7679-2cmfs_d7336f40-57d5-4171-98ad-aeee272451ae/console/0.log" Feb 17 16:23:26 crc kubenswrapper[4874]: I0217 16:23:26.665738 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5c65ff7679-2cmfs" Feb 17 16:23:26 crc kubenswrapper[4874]: I0217 16:23:26.819996 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7336f40-57d5-4171-98ad-aeee272451ae-console-serving-cert\") pod \"d7336f40-57d5-4171-98ad-aeee272451ae\" (UID: \"d7336f40-57d5-4171-98ad-aeee272451ae\") " Feb 17 16:23:26 crc kubenswrapper[4874]: I0217 16:23:26.820060 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d7336f40-57d5-4171-98ad-aeee272451ae-console-config\") pod \"d7336f40-57d5-4171-98ad-aeee272451ae\" (UID: \"d7336f40-57d5-4171-98ad-aeee272451ae\") " Feb 17 16:23:26 crc kubenswrapper[4874]: I0217 16:23:26.821284 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7336f40-57d5-4171-98ad-aeee272451ae-console-config" (OuterVolumeSpecName: "console-config") pod "d7336f40-57d5-4171-98ad-aeee272451ae" (UID: "d7336f40-57d5-4171-98ad-aeee272451ae"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:26 crc kubenswrapper[4874]: I0217 16:23:26.828824 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7336f40-57d5-4171-98ad-aeee272451ae-service-ca\") pod \"d7336f40-57d5-4171-98ad-aeee272451ae\" (UID: \"d7336f40-57d5-4171-98ad-aeee272451ae\") " Feb 17 16:23:26 crc kubenswrapper[4874]: I0217 16:23:26.828927 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vg5w9\" (UniqueName: \"kubernetes.io/projected/d7336f40-57d5-4171-98ad-aeee272451ae-kube-api-access-vg5w9\") pod \"d7336f40-57d5-4171-98ad-aeee272451ae\" (UID: \"d7336f40-57d5-4171-98ad-aeee272451ae\") " Feb 17 16:23:26 crc kubenswrapper[4874]: I0217 16:23:26.829149 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7336f40-57d5-4171-98ad-aeee272451ae-trusted-ca-bundle\") pod \"d7336f40-57d5-4171-98ad-aeee272451ae\" (UID: \"d7336f40-57d5-4171-98ad-aeee272451ae\") " Feb 17 16:23:26 crc kubenswrapper[4874]: I0217 16:23:26.829348 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7336f40-57d5-4171-98ad-aeee272451ae-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7336f40-57d5-4171-98ad-aeee272451ae" (UID: "d7336f40-57d5-4171-98ad-aeee272451ae"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:26 crc kubenswrapper[4874]: I0217 16:23:26.829623 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7336f40-57d5-4171-98ad-aeee272451ae-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d7336f40-57d5-4171-98ad-aeee272451ae" (UID: "d7336f40-57d5-4171-98ad-aeee272451ae"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:26 crc kubenswrapper[4874]: I0217 16:23:26.829682 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d7336f40-57d5-4171-98ad-aeee272451ae-oauth-serving-cert\") pod \"d7336f40-57d5-4171-98ad-aeee272451ae\" (UID: \"d7336f40-57d5-4171-98ad-aeee272451ae\") " Feb 17 16:23:26 crc kubenswrapper[4874]: I0217 16:23:26.829781 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d7336f40-57d5-4171-98ad-aeee272451ae-console-oauth-config\") pod \"d7336f40-57d5-4171-98ad-aeee272451ae\" (UID: \"d7336f40-57d5-4171-98ad-aeee272451ae\") " Feb 17 16:23:26 crc kubenswrapper[4874]: I0217 16:23:26.830688 4874 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7336f40-57d5-4171-98ad-aeee272451ae-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:26 crc kubenswrapper[4874]: I0217 16:23:26.830712 4874 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d7336f40-57d5-4171-98ad-aeee272451ae-console-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:26 crc kubenswrapper[4874]: I0217 16:23:26.830723 4874 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7336f40-57d5-4171-98ad-aeee272451ae-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:26 crc kubenswrapper[4874]: I0217 16:23:26.831620 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7336f40-57d5-4171-98ad-aeee272451ae-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "d7336f40-57d5-4171-98ad-aeee272451ae" (UID: "d7336f40-57d5-4171-98ad-aeee272451ae"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:26 crc kubenswrapper[4874]: I0217 16:23:26.840888 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7336f40-57d5-4171-98ad-aeee272451ae-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "d7336f40-57d5-4171-98ad-aeee272451ae" (UID: "d7336f40-57d5-4171-98ad-aeee272451ae"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:23:26 crc kubenswrapper[4874]: I0217 16:23:26.843217 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7336f40-57d5-4171-98ad-aeee272451ae-kube-api-access-vg5w9" (OuterVolumeSpecName: "kube-api-access-vg5w9") pod "d7336f40-57d5-4171-98ad-aeee272451ae" (UID: "d7336f40-57d5-4171-98ad-aeee272451ae"). InnerVolumeSpecName "kube-api-access-vg5w9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:26 crc kubenswrapper[4874]: I0217 16:23:26.853100 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7336f40-57d5-4171-98ad-aeee272451ae-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "d7336f40-57d5-4171-98ad-aeee272451ae" (UID: "d7336f40-57d5-4171-98ad-aeee272451ae"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:23:26 crc kubenswrapper[4874]: I0217 16:23:26.933701 4874 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d7336f40-57d5-4171-98ad-aeee272451ae-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:26 crc kubenswrapper[4874]: I0217 16:23:26.933737 4874 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7336f40-57d5-4171-98ad-aeee272451ae-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:26 crc kubenswrapper[4874]: I0217 16:23:26.933753 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vg5w9\" (UniqueName: \"kubernetes.io/projected/d7336f40-57d5-4171-98ad-aeee272451ae-kube-api-access-vg5w9\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:26 crc kubenswrapper[4874]: I0217 16:23:26.933767 4874 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d7336f40-57d5-4171-98ad-aeee272451ae-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.197213 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-7jmdw" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.262164 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-x2mrg"] Feb 17 16:23:27 crc kubenswrapper[4874]: E0217 16:23:27.262551 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b0a8f96-f93d-4a9f-b191-76cfd2cab069" containerName="mariadb-database-create" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.262566 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b0a8f96-f93d-4a9f-b191-76cfd2cab069" containerName="mariadb-database-create" Feb 17 16:23:27 crc kubenswrapper[4874]: E0217 16:23:27.262578 4874 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a138fbf-e69e-4981-a7f0-b399fbbb7088" containerName="mariadb-database-create" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.262584 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a138fbf-e69e-4981-a7f0-b399fbbb7088" containerName="mariadb-database-create" Feb 17 16:23:27 crc kubenswrapper[4874]: E0217 16:23:27.262599 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b9035e4-51ae-4ed9-a708-6285df982d94" containerName="init" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.262606 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b9035e4-51ae-4ed9-a708-6285df982d94" containerName="init" Feb 17 16:23:27 crc kubenswrapper[4874]: E0217 16:23:27.262620 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82e5efee-d739-4300-bc49-181df5481246" containerName="mariadb-account-create-update" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.262625 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="82e5efee-d739-4300-bc49-181df5481246" containerName="mariadb-account-create-update" Feb 17 16:23:27 crc kubenswrapper[4874]: E0217 16:23:27.262636 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77ba04c2-b4a6-4ce7-b644-2c1a58c18ba3" containerName="mariadb-database-create" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.262641 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="77ba04c2-b4a6-4ce7-b644-2c1a58c18ba3" containerName="mariadb-database-create" Feb 17 16:23:27 crc kubenswrapper[4874]: E0217 16:23:27.262648 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7c19fd8-c880-4d9e-bd50-aa7748e85aee" containerName="mariadb-account-create-update" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.262654 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7c19fd8-c880-4d9e-bd50-aa7748e85aee" containerName="mariadb-account-create-update" Feb 17 16:23:27 
crc kubenswrapper[4874]: E0217 16:23:27.262665 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af707444-663f-458c-a1a2-88d51f97bc68" containerName="mariadb-account-create-update" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.262670 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="af707444-663f-458c-a1a2-88d51f97bc68" containerName="mariadb-account-create-update" Feb 17 16:23:27 crc kubenswrapper[4874]: E0217 16:23:27.262679 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35de0e21-b2b6-482c-a5b0-01b20b85fd46" containerName="mariadb-database-create" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.262685 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="35de0e21-b2b6-482c-a5b0-01b20b85fd46" containerName="mariadb-database-create" Feb 17 16:23:27 crc kubenswrapper[4874]: E0217 16:23:27.262694 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39726753-57c2-4de7-91a2-c0f60e799ea9" containerName="mariadb-account-create-update" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.262700 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="39726753-57c2-4de7-91a2-c0f60e799ea9" containerName="mariadb-account-create-update" Feb 17 16:23:27 crc kubenswrapper[4874]: E0217 16:23:27.262719 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7336f40-57d5-4171-98ad-aeee272451ae" containerName="console" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.262724 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7336f40-57d5-4171-98ad-aeee272451ae" containerName="console" Feb 17 16:23:27 crc kubenswrapper[4874]: E0217 16:23:27.262735 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b9035e4-51ae-4ed9-a708-6285df982d94" containerName="dnsmasq-dns" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.262740 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b9035e4-51ae-4ed9-a708-6285df982d94" 
containerName="dnsmasq-dns" Feb 17 16:23:27 crc kubenswrapper[4874]: E0217 16:23:27.262751 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16a736ae-9a4f-4803-ade8-2088a03e9b75" containerName="mariadb-account-create-update" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.262756 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="16a736ae-9a4f-4803-ade8-2088a03e9b75" containerName="mariadb-account-create-update" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.262939 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a138fbf-e69e-4981-a7f0-b399fbbb7088" containerName="mariadb-database-create" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.262948 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="af707444-663f-458c-a1a2-88d51f97bc68" containerName="mariadb-account-create-update" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.262960 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="82e5efee-d739-4300-bc49-181df5481246" containerName="mariadb-account-create-update" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.262969 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="39726753-57c2-4de7-91a2-c0f60e799ea9" containerName="mariadb-account-create-update" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.262983 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="77ba04c2-b4a6-4ce7-b644-2c1a58c18ba3" containerName="mariadb-database-create" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.262990 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7336f40-57d5-4171-98ad-aeee272451ae" containerName="console" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.262998 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="16a736ae-9a4f-4803-ade8-2088a03e9b75" containerName="mariadb-account-create-update" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 
16:23:27.263009 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="35de0e21-b2b6-482c-a5b0-01b20b85fd46" containerName="mariadb-database-create" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.263020 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b0a8f96-f93d-4a9f-b191-76cfd2cab069" containerName="mariadb-database-create" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.263031 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7c19fd8-c880-4d9e-bd50-aa7748e85aee" containerName="mariadb-account-create-update" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.263039 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b9035e4-51ae-4ed9-a708-6285df982d94" containerName="dnsmasq-dns" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.263671 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-x2mrg" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.267487 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-f8j7k" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.267626 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.284531 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-x2mrg"] Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.340824 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjs4b\" (UniqueName: \"kubernetes.io/projected/f6c4fb02-268b-4640-9a46-1f107a1fcc28-kube-api-access-hjs4b\") pod \"glance-db-sync-x2mrg\" (UID: \"f6c4fb02-268b-4640-9a46-1f107a1fcc28\") " pod="openstack/glance-db-sync-x2mrg" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.340893 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6c4fb02-268b-4640-9a46-1f107a1fcc28-config-data\") pod \"glance-db-sync-x2mrg\" (UID: \"f6c4fb02-268b-4640-9a46-1f107a1fcc28\") " pod="openstack/glance-db-sync-x2mrg" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.340933 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f6c4fb02-268b-4640-9a46-1f107a1fcc28-db-sync-config-data\") pod \"glance-db-sync-x2mrg\" (UID: \"f6c4fb02-268b-4640-9a46-1f107a1fcc28\") " pod="openstack/glance-db-sync-x2mrg" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.340997 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6c4fb02-268b-4640-9a46-1f107a1fcc28-combined-ca-bundle\") pod \"glance-db-sync-x2mrg\" (UID: \"f6c4fb02-268b-4640-9a46-1f107a1fcc28\") " pod="openstack/glance-db-sync-x2mrg" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.442689 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjs4b\" (UniqueName: \"kubernetes.io/projected/f6c4fb02-268b-4640-9a46-1f107a1fcc28-kube-api-access-hjs4b\") pod \"glance-db-sync-x2mrg\" (UID: \"f6c4fb02-268b-4640-9a46-1f107a1fcc28\") " pod="openstack/glance-db-sync-x2mrg" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.442747 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6c4fb02-268b-4640-9a46-1f107a1fcc28-config-data\") pod \"glance-db-sync-x2mrg\" (UID: \"f6c4fb02-268b-4640-9a46-1f107a1fcc28\") " pod="openstack/glance-db-sync-x2mrg" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.442787 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f6c4fb02-268b-4640-9a46-1f107a1fcc28-db-sync-config-data\") pod \"glance-db-sync-x2mrg\" (UID: \"f6c4fb02-268b-4640-9a46-1f107a1fcc28\") " pod="openstack/glance-db-sync-x2mrg" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.442840 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6c4fb02-268b-4640-9a46-1f107a1fcc28-combined-ca-bundle\") pod \"glance-db-sync-x2mrg\" (UID: \"f6c4fb02-268b-4640-9a46-1f107a1fcc28\") " pod="openstack/glance-db-sync-x2mrg" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.446995 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f6c4fb02-268b-4640-9a46-1f107a1fcc28-db-sync-config-data\") pod \"glance-db-sync-x2mrg\" (UID: \"f6c4fb02-268b-4640-9a46-1f107a1fcc28\") " pod="openstack/glance-db-sync-x2mrg" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.447562 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6c4fb02-268b-4640-9a46-1f107a1fcc28-combined-ca-bundle\") pod \"glance-db-sync-x2mrg\" (UID: \"f6c4fb02-268b-4640-9a46-1f107a1fcc28\") " pod="openstack/glance-db-sync-x2mrg" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.448645 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6c4fb02-268b-4640-9a46-1f107a1fcc28-config-data\") pod \"glance-db-sync-x2mrg\" (UID: \"f6c4fb02-268b-4640-9a46-1f107a1fcc28\") " pod="openstack/glance-db-sync-x2mrg" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.465392 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjs4b\" (UniqueName: \"kubernetes.io/projected/f6c4fb02-268b-4640-9a46-1f107a1fcc28-kube-api-access-hjs4b\") pod 
\"glance-db-sync-x2mrg\" (UID: \"f6c4fb02-268b-4640-9a46-1f107a1fcc28\") " pod="openstack/glance-db-sync-x2mrg" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.547676 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5c65ff7679-2cmfs_d7336f40-57d5-4171-98ad-aeee272451ae/console/0.log" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.547788 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5c65ff7679-2cmfs" event={"ID":"d7336f40-57d5-4171-98ad-aeee272451ae","Type":"ContainerDied","Data":"0c15f7def32914b5ccaf617330241249ba391f1417f997cd05e637b68d2ee2a7"} Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.547856 4874 scope.go:117] "RemoveContainer" containerID="9eb33c0d9d7a1d5e1c496608f5c31d21e83eb99d5eeefbfd6cf3bbe554232b6d" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.547870 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5c65ff7679-2cmfs" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.584273 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5c65ff7679-2cmfs"] Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.590866 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-x2mrg" Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.594573 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5c65ff7679-2cmfs"] Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.724510 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:23:27 crc kubenswrapper[4874]: I0217 16:23:27.724574 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:23:28 crc kubenswrapper[4874]: I0217 16:23:28.250525 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-x2mrg"] Feb 17 16:23:28 crc kubenswrapper[4874]: W0217 16:23:28.257543 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6c4fb02_268b_4640_9a46_1f107a1fcc28.slice/crio-fcd8935ce414ea5b268ec79fd72e286fad9a602f5c7aaf1ec2030feaf69e266e WatchSource:0}: Error finding container fcd8935ce414ea5b268ec79fd72e286fad9a602f5c7aaf1ec2030feaf69e266e: Status 404 returned error can't find the container with id fcd8935ce414ea5b268ec79fd72e286fad9a602f5c7aaf1ec2030feaf69e266e Feb 17 16:23:28 crc kubenswrapper[4874]: I0217 16:23:28.472319 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7336f40-57d5-4171-98ad-aeee272451ae" path="/var/lib/kubelet/pods/d7336f40-57d5-4171-98ad-aeee272451ae/volumes" Feb 17 16:23:28 crc kubenswrapper[4874]: I0217 16:23:28.473379 4874 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-pqzbd"] Feb 17 16:23:28 crc kubenswrapper[4874]: I0217 16:23:28.474976 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pqzbd" Feb 17 16:23:28 crc kubenswrapper[4874]: I0217 16:23:28.478626 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 17 16:23:28 crc kubenswrapper[4874]: I0217 16:23:28.480655 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-pqzbd"] Feb 17 16:23:28 crc kubenswrapper[4874]: I0217 16:23:28.557487 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-x2mrg" event={"ID":"f6c4fb02-268b-4640-9a46-1f107a1fcc28","Type":"ContainerStarted","Data":"fcd8935ce414ea5b268ec79fd72e286fad9a602f5c7aaf1ec2030feaf69e266e"} Feb 17 16:23:28 crc kubenswrapper[4874]: I0217 16:23:28.565521 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bttpw\" (UniqueName: \"kubernetes.io/projected/c4929008-5bb8-4852-8c87-fa3203602206-kube-api-access-bttpw\") pod \"root-account-create-update-pqzbd\" (UID: \"c4929008-5bb8-4852-8c87-fa3203602206\") " pod="openstack/root-account-create-update-pqzbd" Feb 17 16:23:28 crc kubenswrapper[4874]: I0217 16:23:28.565948 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4929008-5bb8-4852-8c87-fa3203602206-operator-scripts\") pod \"root-account-create-update-pqzbd\" (UID: \"c4929008-5bb8-4852-8c87-fa3203602206\") " pod="openstack/root-account-create-update-pqzbd" Feb 17 16:23:28 crc kubenswrapper[4874]: I0217 16:23:28.668344 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bttpw\" (UniqueName: 
\"kubernetes.io/projected/c4929008-5bb8-4852-8c87-fa3203602206-kube-api-access-bttpw\") pod \"root-account-create-update-pqzbd\" (UID: \"c4929008-5bb8-4852-8c87-fa3203602206\") " pod="openstack/root-account-create-update-pqzbd" Feb 17 16:23:28 crc kubenswrapper[4874]: I0217 16:23:28.668563 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4929008-5bb8-4852-8c87-fa3203602206-operator-scripts\") pod \"root-account-create-update-pqzbd\" (UID: \"c4929008-5bb8-4852-8c87-fa3203602206\") " pod="openstack/root-account-create-update-pqzbd" Feb 17 16:23:28 crc kubenswrapper[4874]: I0217 16:23:28.669595 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4929008-5bb8-4852-8c87-fa3203602206-operator-scripts\") pod \"root-account-create-update-pqzbd\" (UID: \"c4929008-5bb8-4852-8c87-fa3203602206\") " pod="openstack/root-account-create-update-pqzbd" Feb 17 16:23:28 crc kubenswrapper[4874]: I0217 16:23:28.692026 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bttpw\" (UniqueName: \"kubernetes.io/projected/c4929008-5bb8-4852-8c87-fa3203602206-kube-api-access-bttpw\") pod \"root-account-create-update-pqzbd\" (UID: \"c4929008-5bb8-4852-8c87-fa3203602206\") " pod="openstack/root-account-create-update-pqzbd" Feb 17 16:23:28 crc kubenswrapper[4874]: I0217 16:23:28.809618 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-pqzbd" Feb 17 16:23:29 crc kubenswrapper[4874]: I0217 16:23:29.323770 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-pqzbd"] Feb 17 16:23:29 crc kubenswrapper[4874]: W0217 16:23:29.324661 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4929008_5bb8_4852_8c87_fa3203602206.slice/crio-41fd0770f6be3498f7e1631a8a8f4b63b12a275e702cc0a2e158ec88a14a9ad1 WatchSource:0}: Error finding container 41fd0770f6be3498f7e1631a8a8f4b63b12a275e702cc0a2e158ec88a14a9ad1: Status 404 returned error can't find the container with id 41fd0770f6be3498f7e1631a8a8f4b63b12a275e702cc0a2e158ec88a14a9ad1 Feb 17 16:23:29 crc kubenswrapper[4874]: I0217 16:23:29.567462 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-phcqn"] Feb 17 16:23:29 crc kubenswrapper[4874]: I0217 16:23:29.579847 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pqzbd" event={"ID":"c4929008-5bb8-4852-8c87-fa3203602206","Type":"ContainerStarted","Data":"72aac92bfbd26bb9c5db0d9d70ad1f79ac31b3e8ef267357ee900a7e75f478c8"} Feb 17 16:23:29 crc kubenswrapper[4874]: I0217 16:23:29.579891 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pqzbd" event={"ID":"c4929008-5bb8-4852-8c87-fa3203602206","Type":"ContainerStarted","Data":"41fd0770f6be3498f7e1631a8a8f4b63b12a275e702cc0a2e158ec88a14a9ad1"} Feb 17 16:23:29 crc kubenswrapper[4874]: I0217 16:23:29.579967 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-phcqn" Feb 17 16:23:29 crc kubenswrapper[4874]: I0217 16:23:29.585869 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-phcqn"] Feb 17 16:23:29 crc kubenswrapper[4874]: I0217 16:23:29.620277 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-pqzbd" podStartSLOduration=1.620262716 podStartE2EDuration="1.620262716s" podCreationTimestamp="2026-02-17 16:23:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:23:29.6034417 +0000 UTC m=+1219.897830261" watchObservedRunningTime="2026-02-17 16:23:29.620262716 +0000 UTC m=+1219.914651277" Feb 17 16:23:29 crc kubenswrapper[4874]: I0217 16:23:29.690389 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhk64\" (UniqueName: \"kubernetes.io/projected/70b30652-2359-4b06-91c4-a4a590c2fd6c-kube-api-access-rhk64\") pod \"mysqld-exporter-openstack-cell1-db-create-phcqn\" (UID: \"70b30652-2359-4b06-91c4-a4a590c2fd6c\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-phcqn" Feb 17 16:23:29 crc kubenswrapper[4874]: I0217 16:23:29.690435 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70b30652-2359-4b06-91c4-a4a590c2fd6c-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-phcqn\" (UID: \"70b30652-2359-4b06-91c4-a4a590c2fd6c\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-phcqn" Feb 17 16:23:29 crc kubenswrapper[4874]: I0217 16:23:29.760848 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-d820-account-create-update-fs9ms"] Feb 17 16:23:29 crc kubenswrapper[4874]: I0217 16:23:29.762341 
4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-d820-account-create-update-fs9ms" Feb 17 16:23:29 crc kubenswrapper[4874]: I0217 16:23:29.766257 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Feb 17 16:23:29 crc kubenswrapper[4874]: I0217 16:23:29.774334 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-d820-account-create-update-fs9ms"] Feb 17 16:23:29 crc kubenswrapper[4874]: I0217 16:23:29.794070 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhk64\" (UniqueName: \"kubernetes.io/projected/70b30652-2359-4b06-91c4-a4a590c2fd6c-kube-api-access-rhk64\") pod \"mysqld-exporter-openstack-cell1-db-create-phcqn\" (UID: \"70b30652-2359-4b06-91c4-a4a590c2fd6c\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-phcqn" Feb 17 16:23:29 crc kubenswrapper[4874]: I0217 16:23:29.794132 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70b30652-2359-4b06-91c4-a4a590c2fd6c-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-phcqn\" (UID: \"70b30652-2359-4b06-91c4-a4a590c2fd6c\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-phcqn" Feb 17 16:23:29 crc kubenswrapper[4874]: I0217 16:23:29.795186 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70b30652-2359-4b06-91c4-a4a590c2fd6c-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-phcqn\" (UID: \"70b30652-2359-4b06-91c4-a4a590c2fd6c\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-phcqn" Feb 17 16:23:29 crc kubenswrapper[4874]: I0217 16:23:29.822860 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhk64\" (UniqueName: 
\"kubernetes.io/projected/70b30652-2359-4b06-91c4-a4a590c2fd6c-kube-api-access-rhk64\") pod \"mysqld-exporter-openstack-cell1-db-create-phcqn\" (UID: \"70b30652-2359-4b06-91c4-a4a590c2fd6c\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-phcqn" Feb 17 16:23:29 crc kubenswrapper[4874]: I0217 16:23:29.896270 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x8bp\" (UniqueName: \"kubernetes.io/projected/3dd992b7-793b-46be-a708-72097bb298cf-kube-api-access-9x8bp\") pod \"mysqld-exporter-d820-account-create-update-fs9ms\" (UID: \"3dd992b7-793b-46be-a708-72097bb298cf\") " pod="openstack/mysqld-exporter-d820-account-create-update-fs9ms" Feb 17 16:23:29 crc kubenswrapper[4874]: I0217 16:23:29.896722 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dd992b7-793b-46be-a708-72097bb298cf-operator-scripts\") pod \"mysqld-exporter-d820-account-create-update-fs9ms\" (UID: \"3dd992b7-793b-46be-a708-72097bb298cf\") " pod="openstack/mysqld-exporter-d820-account-create-update-fs9ms" Feb 17 16:23:29 crc kubenswrapper[4874]: I0217 16:23:29.898314 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-phcqn" Feb 17 16:23:29 crc kubenswrapper[4874]: I0217 16:23:29.902205 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-spnx4" Feb 17 16:23:29 crc kubenswrapper[4874]: I0217 16:23:29.995530 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-7jmdw"] Feb 17 16:23:29 crc kubenswrapper[4874]: I0217 16:23:29.995756 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-7jmdw" podUID="63c7af35-e957-4bca-ba65-13b706314f83" containerName="dnsmasq-dns" containerID="cri-o://0b1ef60faed10e91ed98fcce4e5fa4cc56f443d0fb697e036d06c4176b109e00" gracePeriod=10 Feb 17 16:23:30 crc kubenswrapper[4874]: I0217 16:23:29.999443 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dd992b7-793b-46be-a708-72097bb298cf-operator-scripts\") pod \"mysqld-exporter-d820-account-create-update-fs9ms\" (UID: \"3dd992b7-793b-46be-a708-72097bb298cf\") " pod="openstack/mysqld-exporter-d820-account-create-update-fs9ms" Feb 17 16:23:30 crc kubenswrapper[4874]: I0217 16:23:29.999556 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x8bp\" (UniqueName: \"kubernetes.io/projected/3dd992b7-793b-46be-a708-72097bb298cf-kube-api-access-9x8bp\") pod \"mysqld-exporter-d820-account-create-update-fs9ms\" (UID: \"3dd992b7-793b-46be-a708-72097bb298cf\") " pod="openstack/mysqld-exporter-d820-account-create-update-fs9ms" Feb 17 16:23:30 crc kubenswrapper[4874]: I0217 16:23:30.003452 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dd992b7-793b-46be-a708-72097bb298cf-operator-scripts\") pod \"mysqld-exporter-d820-account-create-update-fs9ms\" (UID: 
\"3dd992b7-793b-46be-a708-72097bb298cf\") " pod="openstack/mysqld-exporter-d820-account-create-update-fs9ms" Feb 17 16:23:30 crc kubenswrapper[4874]: I0217 16:23:30.039014 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x8bp\" (UniqueName: \"kubernetes.io/projected/3dd992b7-793b-46be-a708-72097bb298cf-kube-api-access-9x8bp\") pod \"mysqld-exporter-d820-account-create-update-fs9ms\" (UID: \"3dd992b7-793b-46be-a708-72097bb298cf\") " pod="openstack/mysqld-exporter-d820-account-create-update-fs9ms" Feb 17 16:23:30 crc kubenswrapper[4874]: I0217 16:23:30.090176 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-d820-account-create-update-fs9ms" Feb 17 16:23:30 crc kubenswrapper[4874]: I0217 16:23:30.499236 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-phcqn"] Feb 17 16:23:30 crc kubenswrapper[4874]: W0217 16:23:30.499762 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70b30652_2359_4b06_91c4_a4a590c2fd6c.slice/crio-777694d459fafcc00ec8f81ad118b5503f90df1220e03c14878c2c836c1054dd WatchSource:0}: Error finding container 777694d459fafcc00ec8f81ad118b5503f90df1220e03c14878c2c836c1054dd: Status 404 returned error can't find the container with id 777694d459fafcc00ec8f81ad118b5503f90df1220e03c14878c2c836c1054dd Feb 17 16:23:30 crc kubenswrapper[4874]: I0217 16:23:30.608690 4874 generic.go:334] "Generic (PLEG): container finished" podID="c4929008-5bb8-4852-8c87-fa3203602206" containerID="72aac92bfbd26bb9c5db0d9d70ad1f79ac31b3e8ef267357ee900a7e75f478c8" exitCode=0 Feb 17 16:23:30 crc kubenswrapper[4874]: I0217 16:23:30.608749 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pqzbd" 
event={"ID":"c4929008-5bb8-4852-8c87-fa3203602206","Type":"ContainerDied","Data":"72aac92bfbd26bb9c5db0d9d70ad1f79ac31b3e8ef267357ee900a7e75f478c8"} Feb 17 16:23:30 crc kubenswrapper[4874]: I0217 16:23:30.617258 4874 generic.go:334] "Generic (PLEG): container finished" podID="63c7af35-e957-4bca-ba65-13b706314f83" containerID="0b1ef60faed10e91ed98fcce4e5fa4cc56f443d0fb697e036d06c4176b109e00" exitCode=0 Feb 17 16:23:30 crc kubenswrapper[4874]: I0217 16:23:30.617329 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-7jmdw" event={"ID":"63c7af35-e957-4bca-ba65-13b706314f83","Type":"ContainerDied","Data":"0b1ef60faed10e91ed98fcce4e5fa4cc56f443d0fb697e036d06c4176b109e00"} Feb 17 16:23:30 crc kubenswrapper[4874]: I0217 16:23:30.620366 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-phcqn" event={"ID":"70b30652-2359-4b06-91c4-a4a590c2fd6c","Type":"ContainerStarted","Data":"777694d459fafcc00ec8f81ad118b5503f90df1220e03c14878c2c836c1054dd"} Feb 17 16:23:30 crc kubenswrapper[4874]: I0217 16:23:30.672435 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-d820-account-create-update-fs9ms"] Feb 17 16:23:30 crc kubenswrapper[4874]: I0217 16:23:30.722260 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:30 crc kubenswrapper[4874]: I0217 16:23:30.722305 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:30 crc kubenswrapper[4874]: I0217 16:23:30.725574 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:30 crc kubenswrapper[4874]: I0217 16:23:30.791845 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-7jmdw" Feb 17 16:23:30 crc kubenswrapper[4874]: I0217 16:23:30.929887 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63c7af35-e957-4bca-ba65-13b706314f83-ovsdbserver-sb\") pod \"63c7af35-e957-4bca-ba65-13b706314f83\" (UID: \"63c7af35-e957-4bca-ba65-13b706314f83\") " Feb 17 16:23:30 crc kubenswrapper[4874]: I0217 16:23:30.930767 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63c7af35-e957-4bca-ba65-13b706314f83-config\") pod \"63c7af35-e957-4bca-ba65-13b706314f83\" (UID: \"63c7af35-e957-4bca-ba65-13b706314f83\") " Feb 17 16:23:30 crc kubenswrapper[4874]: I0217 16:23:30.930817 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63c7af35-e957-4bca-ba65-13b706314f83-dns-svc\") pod \"63c7af35-e957-4bca-ba65-13b706314f83\" (UID: \"63c7af35-e957-4bca-ba65-13b706314f83\") " Feb 17 16:23:30 crc kubenswrapper[4874]: I0217 16:23:30.930878 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqvwj\" (UniqueName: \"kubernetes.io/projected/63c7af35-e957-4bca-ba65-13b706314f83-kube-api-access-hqvwj\") pod \"63c7af35-e957-4bca-ba65-13b706314f83\" (UID: \"63c7af35-e957-4bca-ba65-13b706314f83\") " Feb 17 16:23:30 crc kubenswrapper[4874]: I0217 16:23:30.930932 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63c7af35-e957-4bca-ba65-13b706314f83-ovsdbserver-nb\") pod \"63c7af35-e957-4bca-ba65-13b706314f83\" (UID: \"63c7af35-e957-4bca-ba65-13b706314f83\") " Feb 17 16:23:30 crc kubenswrapper[4874]: I0217 16:23:30.942086 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/63c7af35-e957-4bca-ba65-13b706314f83-kube-api-access-hqvwj" (OuterVolumeSpecName: "kube-api-access-hqvwj") pod "63c7af35-e957-4bca-ba65-13b706314f83" (UID: "63c7af35-e957-4bca-ba65-13b706314f83"). InnerVolumeSpecName "kube-api-access-hqvwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:31 crc kubenswrapper[4874]: I0217 16:23:31.007712 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63c7af35-e957-4bca-ba65-13b706314f83-config" (OuterVolumeSpecName: "config") pod "63c7af35-e957-4bca-ba65-13b706314f83" (UID: "63c7af35-e957-4bca-ba65-13b706314f83"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:31 crc kubenswrapper[4874]: I0217 16:23:31.011547 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63c7af35-e957-4bca-ba65-13b706314f83-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "63c7af35-e957-4bca-ba65-13b706314f83" (UID: "63c7af35-e957-4bca-ba65-13b706314f83"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:31 crc kubenswrapper[4874]: I0217 16:23:31.012099 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63c7af35-e957-4bca-ba65-13b706314f83-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "63c7af35-e957-4bca-ba65-13b706314f83" (UID: "63c7af35-e957-4bca-ba65-13b706314f83"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:31 crc kubenswrapper[4874]: I0217 16:23:31.026925 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63c7af35-e957-4bca-ba65-13b706314f83-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "63c7af35-e957-4bca-ba65-13b706314f83" (UID: "63c7af35-e957-4bca-ba65-13b706314f83"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:31 crc kubenswrapper[4874]: I0217 16:23:31.033810 4874 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63c7af35-e957-4bca-ba65-13b706314f83-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:31 crc kubenswrapper[4874]: I0217 16:23:31.033842 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63c7af35-e957-4bca-ba65-13b706314f83-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:31 crc kubenswrapper[4874]: I0217 16:23:31.033851 4874 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63c7af35-e957-4bca-ba65-13b706314f83-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:31 crc kubenswrapper[4874]: I0217 16:23:31.033862 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqvwj\" (UniqueName: \"kubernetes.io/projected/63c7af35-e957-4bca-ba65-13b706314f83-kube-api-access-hqvwj\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:31 crc kubenswrapper[4874]: I0217 16:23:31.033871 4874 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63c7af35-e957-4bca-ba65-13b706314f83-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:31 crc kubenswrapper[4874]: I0217 16:23:31.629502 4874 generic.go:334] "Generic (PLEG): container finished" podID="70b30652-2359-4b06-91c4-a4a590c2fd6c" containerID="d845dbc1acffaa487d301a4d9ae1f43fd15907e97218ee99d09ac2f04e4560ce" exitCode=0 Feb 17 16:23:31 crc kubenswrapper[4874]: I0217 16:23:31.629572 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-phcqn" event={"ID":"70b30652-2359-4b06-91c4-a4a590c2fd6c","Type":"ContainerDied","Data":"d845dbc1acffaa487d301a4d9ae1f43fd15907e97218ee99d09ac2f04e4560ce"} Feb 17 16:23:31 crc 
kubenswrapper[4874]: I0217 16:23:31.631982 4874 generic.go:334] "Generic (PLEG): container finished" podID="99f3c575-721c-4e73-a4e3-e5497e1a3201" containerID="cb9f33794210e4d6a2dc8df37db35079c7c90d6157ca1179d2efa9ff8fe2369f" exitCode=0 Feb 17 16:23:31 crc kubenswrapper[4874]: I0217 16:23:31.632103 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vj2t6" event={"ID":"99f3c575-721c-4e73-a4e3-e5497e1a3201","Type":"ContainerDied","Data":"cb9f33794210e4d6a2dc8df37db35079c7c90d6157ca1179d2efa9ff8fe2369f"} Feb 17 16:23:31 crc kubenswrapper[4874]: I0217 16:23:31.633660 4874 generic.go:334] "Generic (PLEG): container finished" podID="3dd992b7-793b-46be-a708-72097bb298cf" containerID="8d251b96f1f886aaf1ed2fdb94540a43a18e27a9d3d3d099d2633b5a18af12bd" exitCode=0 Feb 17 16:23:31 crc kubenswrapper[4874]: I0217 16:23:31.633722 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-d820-account-create-update-fs9ms" event={"ID":"3dd992b7-793b-46be-a708-72097bb298cf","Type":"ContainerDied","Data":"8d251b96f1f886aaf1ed2fdb94540a43a18e27a9d3d3d099d2633b5a18af12bd"} Feb 17 16:23:31 crc kubenswrapper[4874]: I0217 16:23:31.633743 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-d820-account-create-update-fs9ms" event={"ID":"3dd992b7-793b-46be-a708-72097bb298cf","Type":"ContainerStarted","Data":"735b25ecfdbdf77281eb99a6f517af7c590002f7d34a22e2dea205872ee50723"} Feb 17 16:23:31 crc kubenswrapper[4874]: I0217 16:23:31.636319 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-7jmdw" Feb 17 16:23:31 crc kubenswrapper[4874]: I0217 16:23:31.645866 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-7jmdw" event={"ID":"63c7af35-e957-4bca-ba65-13b706314f83","Type":"ContainerDied","Data":"4c21e04a816d194f5eed42637c3d63588a06ac3d017a7693d836828ce8d64fee"} Feb 17 16:23:31 crc kubenswrapper[4874]: I0217 16:23:31.645896 4874 scope.go:117] "RemoveContainer" containerID="0b1ef60faed10e91ed98fcce4e5fa4cc56f443d0fb697e036d06c4176b109e00" Feb 17 16:23:31 crc kubenswrapper[4874]: I0217 16:23:31.646817 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:31 crc kubenswrapper[4874]: I0217 16:23:31.674850 4874 scope.go:117] "RemoveContainer" containerID="8b759575b27b11a6c1e889a7929655c2edfd58ea36e0445dc822705e16690887" Feb 17 16:23:31 crc kubenswrapper[4874]: I0217 16:23:31.781554 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-7jmdw"] Feb 17 16:23:31 crc kubenswrapper[4874]: I0217 16:23:31.795207 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-7jmdw"] Feb 17 16:23:31 crc kubenswrapper[4874]: I0217 16:23:31.802379 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-tpgc2" podUID="4132e8e3-7498-4df0-9d6d-2dd7c096218a" containerName="ovn-controller" probeResult="failure" output=< Feb 17 16:23:31 crc kubenswrapper[4874]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 17 16:23:31 crc kubenswrapper[4874]: > Feb 17 16:23:31 crc kubenswrapper[4874]: I0217 16:23:31.880042 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-pzc25" Feb 17 16:23:32 crc kubenswrapper[4874]: I0217 16:23:32.131063 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-pqzbd" Feb 17 16:23:32 crc kubenswrapper[4874]: I0217 16:23:32.260039 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4929008-5bb8-4852-8c87-fa3203602206-operator-scripts\") pod \"c4929008-5bb8-4852-8c87-fa3203602206\" (UID: \"c4929008-5bb8-4852-8c87-fa3203602206\") " Feb 17 16:23:32 crc kubenswrapper[4874]: I0217 16:23:32.260162 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bttpw\" (UniqueName: \"kubernetes.io/projected/c4929008-5bb8-4852-8c87-fa3203602206-kube-api-access-bttpw\") pod \"c4929008-5bb8-4852-8c87-fa3203602206\" (UID: \"c4929008-5bb8-4852-8c87-fa3203602206\") " Feb 17 16:23:32 crc kubenswrapper[4874]: I0217 16:23:32.260691 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4929008-5bb8-4852-8c87-fa3203602206-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c4929008-5bb8-4852-8c87-fa3203602206" (UID: "c4929008-5bb8-4852-8c87-fa3203602206"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:32 crc kubenswrapper[4874]: I0217 16:23:32.265835 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4929008-5bb8-4852-8c87-fa3203602206-kube-api-access-bttpw" (OuterVolumeSpecName: "kube-api-access-bttpw") pod "c4929008-5bb8-4852-8c87-fa3203602206" (UID: "c4929008-5bb8-4852-8c87-fa3203602206"). InnerVolumeSpecName "kube-api-access-bttpw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:32 crc kubenswrapper[4874]: I0217 16:23:32.364422 4874 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4929008-5bb8-4852-8c87-fa3203602206-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:32 crc kubenswrapper[4874]: I0217 16:23:32.364450 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bttpw\" (UniqueName: \"kubernetes.io/projected/c4929008-5bb8-4852-8c87-fa3203602206-kube-api-access-bttpw\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:32 crc kubenswrapper[4874]: I0217 16:23:32.467539 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63c7af35-e957-4bca-ba65-13b706314f83" path="/var/lib/kubelet/pods/63c7af35-e957-4bca-ba65-13b706314f83/volumes" Feb 17 16:23:32 crc kubenswrapper[4874]: I0217 16:23:32.658301 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pqzbd" event={"ID":"c4929008-5bb8-4852-8c87-fa3203602206","Type":"ContainerDied","Data":"41fd0770f6be3498f7e1631a8a8f4b63b12a275e702cc0a2e158ec88a14a9ad1"} Feb 17 16:23:32 crc kubenswrapper[4874]: I0217 16:23:32.658341 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41fd0770f6be3498f7e1631a8a8f4b63b12a275e702cc0a2e158ec88a14a9ad1" Feb 17 16:23:32 crc kubenswrapper[4874]: I0217 16:23:32.658380 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pqzbd" Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.194489 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-d820-account-create-update-fs9ms" Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.286536 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dd992b7-793b-46be-a708-72097bb298cf-operator-scripts\") pod \"3dd992b7-793b-46be-a708-72097bb298cf\" (UID: \"3dd992b7-793b-46be-a708-72097bb298cf\") " Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.286891 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x8bp\" (UniqueName: \"kubernetes.io/projected/3dd992b7-793b-46be-a708-72097bb298cf-kube-api-access-9x8bp\") pod \"3dd992b7-793b-46be-a708-72097bb298cf\" (UID: \"3dd992b7-793b-46be-a708-72097bb298cf\") " Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.288597 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3dd992b7-793b-46be-a708-72097bb298cf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3dd992b7-793b-46be-a708-72097bb298cf" (UID: "3dd992b7-793b-46be-a708-72097bb298cf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.293514 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dd992b7-793b-46be-a708-72097bb298cf-kube-api-access-9x8bp" (OuterVolumeSpecName: "kube-api-access-9x8bp") pod "3dd992b7-793b-46be-a708-72097bb298cf" (UID: "3dd992b7-793b-46be-a708-72097bb298cf"). InnerVolumeSpecName "kube-api-access-9x8bp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.353326 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-phcqn" Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.359109 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-vj2t6" Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.404049 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f3c575-721c-4e73-a4e3-e5497e1a3201-combined-ca-bundle\") pod \"99f3c575-721c-4e73-a4e3-e5497e1a3201\" (UID: \"99f3c575-721c-4e73-a4e3-e5497e1a3201\") " Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.404242 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/99f3c575-721c-4e73-a4e3-e5497e1a3201-dispersionconf\") pod \"99f3c575-721c-4e73-a4e3-e5497e1a3201\" (UID: \"99f3c575-721c-4e73-a4e3-e5497e1a3201\") " Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.404321 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/99f3c575-721c-4e73-a4e3-e5497e1a3201-swiftconf\") pod \"99f3c575-721c-4e73-a4e3-e5497e1a3201\" (UID: \"99f3c575-721c-4e73-a4e3-e5497e1a3201\") " Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.404354 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99f3c575-721c-4e73-a4e3-e5497e1a3201-scripts\") pod \"99f3c575-721c-4e73-a4e3-e5497e1a3201\" (UID: \"99f3c575-721c-4e73-a4e3-e5497e1a3201\") " Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.404401 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70b30652-2359-4b06-91c4-a4a590c2fd6c-operator-scripts\") pod \"70b30652-2359-4b06-91c4-a4a590c2fd6c\" (UID: 
\"70b30652-2359-4b06-91c4-a4a590c2fd6c\") " Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.404456 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/99f3c575-721c-4e73-a4e3-e5497e1a3201-etc-swift\") pod \"99f3c575-721c-4e73-a4e3-e5497e1a3201\" (UID: \"99f3c575-721c-4e73-a4e3-e5497e1a3201\") " Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.404478 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhk64\" (UniqueName: \"kubernetes.io/projected/70b30652-2359-4b06-91c4-a4a590c2fd6c-kube-api-access-rhk64\") pod \"70b30652-2359-4b06-91c4-a4a590c2fd6c\" (UID: \"70b30652-2359-4b06-91c4-a4a590c2fd6c\") " Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.404520 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmww6\" (UniqueName: \"kubernetes.io/projected/99f3c575-721c-4e73-a4e3-e5497e1a3201-kube-api-access-zmww6\") pod \"99f3c575-721c-4e73-a4e3-e5497e1a3201\" (UID: \"99f3c575-721c-4e73-a4e3-e5497e1a3201\") " Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.404566 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/99f3c575-721c-4e73-a4e3-e5497e1a3201-ring-data-devices\") pod \"99f3c575-721c-4e73-a4e3-e5497e1a3201\" (UID: \"99f3c575-721c-4e73-a4e3-e5497e1a3201\") " Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.404989 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x8bp\" (UniqueName: \"kubernetes.io/projected/3dd992b7-793b-46be-a708-72097bb298cf-kube-api-access-9x8bp\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.405012 4874 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/3dd992b7-793b-46be-a708-72097bb298cf-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.408671 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99f3c575-721c-4e73-a4e3-e5497e1a3201-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "99f3c575-721c-4e73-a4e3-e5497e1a3201" (UID: "99f3c575-721c-4e73-a4e3-e5497e1a3201"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.408982 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99f3c575-721c-4e73-a4e3-e5497e1a3201-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "99f3c575-721c-4e73-a4e3-e5497e1a3201" (UID: "99f3c575-721c-4e73-a4e3-e5497e1a3201"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.409896 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70b30652-2359-4b06-91c4-a4a590c2fd6c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "70b30652-2359-4b06-91c4-a4a590c2fd6c" (UID: "70b30652-2359-4b06-91c4-a4a590c2fd6c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.416049 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70b30652-2359-4b06-91c4-a4a590c2fd6c-kube-api-access-rhk64" (OuterVolumeSpecName: "kube-api-access-rhk64") pod "70b30652-2359-4b06-91c4-a4a590c2fd6c" (UID: "70b30652-2359-4b06-91c4-a4a590c2fd6c"). InnerVolumeSpecName "kube-api-access-rhk64". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.431237 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99f3c575-721c-4e73-a4e3-e5497e1a3201-kube-api-access-zmww6" (OuterVolumeSpecName: "kube-api-access-zmww6") pod "99f3c575-721c-4e73-a4e3-e5497e1a3201" (UID: "99f3c575-721c-4e73-a4e3-e5497e1a3201"). InnerVolumeSpecName "kube-api-access-zmww6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.432185 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99f3c575-721c-4e73-a4e3-e5497e1a3201-scripts" (OuterVolumeSpecName: "scripts") pod "99f3c575-721c-4e73-a4e3-e5497e1a3201" (UID: "99f3c575-721c-4e73-a4e3-e5497e1a3201"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.444602 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99f3c575-721c-4e73-a4e3-e5497e1a3201-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "99f3c575-721c-4e73-a4e3-e5497e1a3201" (UID: "99f3c575-721c-4e73-a4e3-e5497e1a3201"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.452236 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99f3c575-721c-4e73-a4e3-e5497e1a3201-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "99f3c575-721c-4e73-a4e3-e5497e1a3201" (UID: "99f3c575-721c-4e73-a4e3-e5497e1a3201"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.470982 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99f3c575-721c-4e73-a4e3-e5497e1a3201-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "99f3c575-721c-4e73-a4e3-e5497e1a3201" (UID: "99f3c575-721c-4e73-a4e3-e5497e1a3201"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.507125 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f3c575-721c-4e73-a4e3-e5497e1a3201-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.507157 4874 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/99f3c575-721c-4e73-a4e3-e5497e1a3201-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.507166 4874 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/99f3c575-721c-4e73-a4e3-e5497e1a3201-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.507176 4874 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99f3c575-721c-4e73-a4e3-e5497e1a3201-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.507184 4874 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70b30652-2359-4b06-91c4-a4a590c2fd6c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.507193 4874 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/99f3c575-721c-4e73-a4e3-e5497e1a3201-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.507204 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhk64\" (UniqueName: \"kubernetes.io/projected/70b30652-2359-4b06-91c4-a4a590c2fd6c-kube-api-access-rhk64\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.507215 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmww6\" (UniqueName: \"kubernetes.io/projected/99f3c575-721c-4e73-a4e3-e5497e1a3201-kube-api-access-zmww6\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.507224 4874 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/99f3c575-721c-4e73-a4e3-e5497e1a3201-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.674239 4874 generic.go:334] "Generic (PLEG): container finished" podID="7eb994d5-6ecb-4a2d-bafc-86c9f107802c" containerID="a034db3c1ea552620fa0691a9a874a0d6c8f47608b6b427f485aa0e509c86b20" exitCode=0 Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.674339 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7eb994d5-6ecb-4a2d-bafc-86c9f107802c","Type":"ContainerDied","Data":"a034db3c1ea552620fa0691a9a874a0d6c8f47608b6b427f485aa0e509c86b20"} Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.676791 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-phcqn" event={"ID":"70b30652-2359-4b06-91c4-a4a590c2fd6c","Type":"ContainerDied","Data":"777694d459fafcc00ec8f81ad118b5503f90df1220e03c14878c2c836c1054dd"} Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.676819 4874 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="777694d459fafcc00ec8f81ad118b5503f90df1220e03c14878c2c836c1054dd" Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.676870 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-phcqn" Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.678395 4874 generic.go:334] "Generic (PLEG): container finished" podID="476813ee-f26a-4068-a5e9-87b5a20fece5" containerID="d0a20d9d2bae0c7e825b68fea651ef557c11736643886e7d9fc0aae9bd75ea87" exitCode=0 Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.678460 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"476813ee-f26a-4068-a5e9-87b5a20fece5","Type":"ContainerDied","Data":"d0a20d9d2bae0c7e825b68fea651ef557c11736643886e7d9fc0aae9bd75ea87"} Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.680007 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vj2t6" event={"ID":"99f3c575-721c-4e73-a4e3-e5497e1a3201","Type":"ContainerDied","Data":"fb5dfe0420d5b0cdcfe6476c132e69c7a6bbbf8fe429db1819298f0e19ea4841"} Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.680019 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-vj2t6" Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.680030 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb5dfe0420d5b0cdcfe6476c132e69c7a6bbbf8fe429db1819298f0e19ea4841" Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.686436 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-d820-account-create-update-fs9ms" event={"ID":"3dd992b7-793b-46be-a708-72097bb298cf","Type":"ContainerDied","Data":"735b25ecfdbdf77281eb99a6f517af7c590002f7d34a22e2dea205872ee50723"} Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.686470 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="735b25ecfdbdf77281eb99a6f517af7c590002f7d34a22e2dea205872ee50723" Feb 17 16:23:33 crc kubenswrapper[4874]: I0217 16:23:33.686518 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-d820-account-create-update-fs9ms" Feb 17 16:23:34 crc kubenswrapper[4874]: I0217 16:23:34.490717 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:23:34 crc kubenswrapper[4874]: I0217 16:23:34.491976 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="c5df12a4-f6fc-46fb-b65a-ccf21a5bf115" containerName="thanos-sidecar" containerID="cri-o://aff618b7665199355e76c6ef7af2ae0dc1943e87544fee3f46a96c1356c5b919" gracePeriod=600 Feb 17 16:23:34 crc kubenswrapper[4874]: I0217 16:23:34.491979 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="c5df12a4-f6fc-46fb-b65a-ccf21a5bf115" containerName="prometheus" containerID="cri-o://1cef8597cb86fe168d32c55daf5eaca9e8af13e478fafbe2f8238b4416a29571" gracePeriod=600 Feb 17 16:23:34 crc kubenswrapper[4874]: I0217 16:23:34.492069 4874 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="c5df12a4-f6fc-46fb-b65a-ccf21a5bf115" containerName="config-reloader" containerID="cri-o://c430359f571e0ad45b51112560ba29c1aad849d4b123cba6c2318ce17f34d5a0" gracePeriod=600 Feb 17 16:23:34 crc kubenswrapper[4874]: I0217 16:23:34.697140 4874 generic.go:334] "Generic (PLEG): container finished" podID="c5df12a4-f6fc-46fb-b65a-ccf21a5bf115" containerID="aff618b7665199355e76c6ef7af2ae0dc1943e87544fee3f46a96c1356c5b919" exitCode=0 Feb 17 16:23:34 crc kubenswrapper[4874]: I0217 16:23:34.697197 4874 generic.go:334] "Generic (PLEG): container finished" podID="c5df12a4-f6fc-46fb-b65a-ccf21a5bf115" containerID="c430359f571e0ad45b51112560ba29c1aad849d4b123cba6c2318ce17f34d5a0" exitCode=0 Feb 17 16:23:34 crc kubenswrapper[4874]: I0217 16:23:34.697207 4874 generic.go:334] "Generic (PLEG): container finished" podID="c5df12a4-f6fc-46fb-b65a-ccf21a5bf115" containerID="1cef8597cb86fe168d32c55daf5eaca9e8af13e478fafbe2f8238b4416a29571" exitCode=0 Feb 17 16:23:34 crc kubenswrapper[4874]: I0217 16:23:34.697246 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115","Type":"ContainerDied","Data":"aff618b7665199355e76c6ef7af2ae0dc1943e87544fee3f46a96c1356c5b919"} Feb 17 16:23:34 crc kubenswrapper[4874]: I0217 16:23:34.697316 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115","Type":"ContainerDied","Data":"c430359f571e0ad45b51112560ba29c1aad849d4b123cba6c2318ce17f34d5a0"} Feb 17 16:23:34 crc kubenswrapper[4874]: I0217 16:23:34.697331 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115","Type":"ContainerDied","Data":"1cef8597cb86fe168d32c55daf5eaca9e8af13e478fafbe2f8238b4416a29571"} Feb 
17 16:23:34 crc kubenswrapper[4874]: I0217 16:23:34.701736 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"476813ee-f26a-4068-a5e9-87b5a20fece5","Type":"ContainerStarted","Data":"db19969ed07d23d3e402cc5f7b337eabe216ce1076931c2f88d450fecfb27ff6"} Feb 17 16:23:34 crc kubenswrapper[4874]: I0217 16:23:34.701950 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:23:34 crc kubenswrapper[4874]: I0217 16:23:34.706725 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7eb994d5-6ecb-4a2d-bafc-86c9f107802c","Type":"ContainerStarted","Data":"4b23f8baba9f1aa2ef43c2262a378fd2738a7a42f7e7dfa96e62d4362102dde4"} Feb 17 16:23:34 crc kubenswrapper[4874]: I0217 16:23:34.707824 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 17 16:23:34 crc kubenswrapper[4874]: I0217 16:23:34.729433 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371964.12536 podStartE2EDuration="1m12.729414841s" podCreationTimestamp="2026-02-17 16:22:22 +0000 UTC" firstStartedPulling="2026-02-17 16:22:24.661289257 +0000 UTC m=+1154.955677818" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:23:34.721109945 +0000 UTC m=+1225.015498516" watchObservedRunningTime="2026-02-17 16:23:34.729414841 +0000 UTC m=+1225.023803392" Feb 17 16:23:34 crc kubenswrapper[4874]: I0217 16:23:34.770119 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371964.084675 podStartE2EDuration="1m12.770100638s" podCreationTimestamp="2026-02-17 16:22:22 +0000 UTC" firstStartedPulling="2026-02-17 16:22:24.416904836 +0000 UTC m=+1154.711293397" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-17 16:23:34.763684209 +0000 UTC m=+1225.058072770" watchObservedRunningTime="2026-02-17 16:23:34.770100638 +0000 UTC m=+1225.064489209" Feb 17 16:23:34 crc kubenswrapper[4874]: I0217 16:23:34.888822 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-pqzbd"] Feb 17 16:23:34 crc kubenswrapper[4874]: I0217 16:23:34.903318 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-pqzbd"] Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.034314 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Feb 17 16:23:35 crc kubenswrapper[4874]: E0217 16:23:35.034916 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70b30652-2359-4b06-91c4-a4a590c2fd6c" containerName="mariadb-database-create" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.034935 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="70b30652-2359-4b06-91c4-a4a590c2fd6c" containerName="mariadb-database-create" Feb 17 16:23:35 crc kubenswrapper[4874]: E0217 16:23:35.034956 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4929008-5bb8-4852-8c87-fa3203602206" containerName="mariadb-account-create-update" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.034963 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4929008-5bb8-4852-8c87-fa3203602206" containerName="mariadb-account-create-update" Feb 17 16:23:35 crc kubenswrapper[4874]: E0217 16:23:35.034977 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99f3c575-721c-4e73-a4e3-e5497e1a3201" containerName="swift-ring-rebalance" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.034984 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="99f3c575-721c-4e73-a4e3-e5497e1a3201" containerName="swift-ring-rebalance" Feb 17 16:23:35 crc kubenswrapper[4874]: E0217 16:23:35.034998 4874 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="63c7af35-e957-4bca-ba65-13b706314f83" containerName="dnsmasq-dns" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.035016 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="63c7af35-e957-4bca-ba65-13b706314f83" containerName="dnsmasq-dns" Feb 17 16:23:35 crc kubenswrapper[4874]: E0217 16:23:35.035023 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63c7af35-e957-4bca-ba65-13b706314f83" containerName="init" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.035029 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="63c7af35-e957-4bca-ba65-13b706314f83" containerName="init" Feb 17 16:23:35 crc kubenswrapper[4874]: E0217 16:23:35.035039 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dd992b7-793b-46be-a708-72097bb298cf" containerName="mariadb-account-create-update" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.035045 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dd992b7-793b-46be-a708-72097bb298cf" containerName="mariadb-account-create-update" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.035224 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="63c7af35-e957-4bca-ba65-13b706314f83" containerName="dnsmasq-dns" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.035232 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="70b30652-2359-4b06-91c4-a4a590c2fd6c" containerName="mariadb-database-create" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.035242 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4929008-5bb8-4852-8c87-fa3203602206" containerName="mariadb-account-create-update" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.035252 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dd992b7-793b-46be-a708-72097bb298cf" containerName="mariadb-account-create-update" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 
16:23:35.035267 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="99f3c575-721c-4e73-a4e3-e5497e1a3201" containerName="swift-ring-rebalance" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.035894 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.039443 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.068517 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.144163 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bclcg\" (UniqueName: \"kubernetes.io/projected/ac66947f-056d-4e83-bcdb-577f72ea0350-kube-api-access-bclcg\") pod \"mysqld-exporter-0\" (UID: \"ac66947f-056d-4e83-bcdb-577f72ea0350\") " pod="openstack/mysqld-exporter-0" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.144352 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac66947f-056d-4e83-bcdb-577f72ea0350-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"ac66947f-056d-4e83-bcdb-577f72ea0350\") " pod="openstack/mysqld-exporter-0" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.144500 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac66947f-056d-4e83-bcdb-577f72ea0350-config-data\") pod \"mysqld-exporter-0\" (UID: \"ac66947f-056d-4e83-bcdb-577f72ea0350\") " pod="openstack/mysqld-exporter-0" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.247884 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/ac66947f-056d-4e83-bcdb-577f72ea0350-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"ac66947f-056d-4e83-bcdb-577f72ea0350\") " pod="openstack/mysqld-exporter-0" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.247952 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac66947f-056d-4e83-bcdb-577f72ea0350-config-data\") pod \"mysqld-exporter-0\" (UID: \"ac66947f-056d-4e83-bcdb-577f72ea0350\") " pod="openstack/mysqld-exporter-0" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.248045 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bclcg\" (UniqueName: \"kubernetes.io/projected/ac66947f-056d-4e83-bcdb-577f72ea0350-kube-api-access-bclcg\") pod \"mysqld-exporter-0\" (UID: \"ac66947f-056d-4e83-bcdb-577f72ea0350\") " pod="openstack/mysqld-exporter-0" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.266393 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac66947f-056d-4e83-bcdb-577f72ea0350-config-data\") pod \"mysqld-exporter-0\" (UID: \"ac66947f-056d-4e83-bcdb-577f72ea0350\") " pod="openstack/mysqld-exporter-0" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.266478 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac66947f-056d-4e83-bcdb-577f72ea0350-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"ac66947f-056d-4e83-bcdb-577f72ea0350\") " pod="openstack/mysqld-exporter-0" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.274950 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bclcg\" (UniqueName: \"kubernetes.io/projected/ac66947f-056d-4e83-bcdb-577f72ea0350-kube-api-access-bclcg\") pod \"mysqld-exporter-0\" (UID: \"ac66947f-056d-4e83-bcdb-577f72ea0350\") " 
pod="openstack/mysqld-exporter-0" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.356583 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 17 16:23:35 crc kubenswrapper[4874]: E0217 16:23:35.539694 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded7dc41e_9863_4c74_8675_56fca22db08a.slice/crio-conmon-b3ac77bc86427c90e262709afcc20a0f4309054628175d07990407541cb4f97c.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.746254 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.746303 4874 generic.go:334] "Generic (PLEG): container finished" podID="ed7dc41e-9863-4c74-8675-56fca22db08a" containerID="b3ac77bc86427c90e262709afcc20a0f4309054628175d07990407541cb4f97c" exitCode=0 Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.746328 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"ed7dc41e-9863-4c74-8675-56fca22db08a","Type":"ContainerDied","Data":"b3ac77bc86427c90e262709afcc20a0f4309054628175d07990407541cb4f97c"} Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.751460 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.751625 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115","Type":"ContainerDied","Data":"39c6ca5de943e2cccd7136cfbea872d2d6363551e0ff88c1bea174cfeb8db85c"} Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.751655 4874 scope.go:117] "RemoveContainer" containerID="aff618b7665199355e76c6ef7af2ae0dc1943e87544fee3f46a96c1356c5b919" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.805158 4874 scope.go:117] "RemoveContainer" containerID="c430359f571e0ad45b51112560ba29c1aad849d4b123cba6c2318ce17f34d5a0" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.839279 4874 scope.go:117] "RemoveContainer" containerID="1cef8597cb86fe168d32c55daf5eaca9e8af13e478fafbe2f8238b4416a29571" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.872189 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-web-config\") pod \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") " Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.872234 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn8rm\" (UniqueName: \"kubernetes.io/projected/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-kube-api-access-gn8rm\") pod \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") " Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.872291 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-config-out\") pod \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") " Feb 17 16:23:35 crc 
kubenswrapper[4874]: I0217 16:23:35.872332 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-thanos-prometheus-http-client-file\") pod \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") " Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.872400 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-prometheus-metric-storage-rulefiles-1\") pod \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") " Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.872809 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-prometheus-metric-storage-rulefiles-2\") pod \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") " Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.872984 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7d513998-97d5-40a9-af0e-749b510e28ad\") pod \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") " Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.873038 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-prometheus-metric-storage-rulefiles-0\") pod \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") " Feb 17 16:23:35 crc kubenswrapper[4874]: 
I0217 16:23:35.873097 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-config\") pod \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") " Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.873142 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-tls-assets\") pod \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\" (UID: \"c5df12a4-f6fc-46fb-b65a-ccf21a5bf115\") " Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.873899 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "c5df12a4-f6fc-46fb-b65a-ccf21a5bf115" (UID: "c5df12a4-f6fc-46fb-b65a-ccf21a5bf115"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.874031 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "c5df12a4-f6fc-46fb-b65a-ccf21a5bf115" (UID: "c5df12a4-f6fc-46fb-b65a-ccf21a5bf115"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.874539 4874 scope.go:117] "RemoveContainer" containerID="1a87819e09eaea64427f7e197d0a167a1165fd8d7a26e1ea28d3e7aa5a7ce4f6" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.875643 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "c5df12a4-f6fc-46fb-b65a-ccf21a5bf115" (UID: "c5df12a4-f6fc-46fb-b65a-ccf21a5bf115"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.877404 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "c5df12a4-f6fc-46fb-b65a-ccf21a5bf115" (UID: "c5df12a4-f6fc-46fb-b65a-ccf21a5bf115"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.878709 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-config" (OuterVolumeSpecName: "config") pod "c5df12a4-f6fc-46fb-b65a-ccf21a5bf115" (UID: "c5df12a4-f6fc-46fb-b65a-ccf21a5bf115"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.878972 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-kube-api-access-gn8rm" (OuterVolumeSpecName: "kube-api-access-gn8rm") pod "c5df12a4-f6fc-46fb-b65a-ccf21a5bf115" (UID: "c5df12a4-f6fc-46fb-b65a-ccf21a5bf115"). InnerVolumeSpecName "kube-api-access-gn8rm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.881592 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-config-out" (OuterVolumeSpecName: "config-out") pod "c5df12a4-f6fc-46fb-b65a-ccf21a5bf115" (UID: "c5df12a4-f6fc-46fb-b65a-ccf21a5bf115"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.882873 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "c5df12a4-f6fc-46fb-b65a-ccf21a5bf115" (UID: "c5df12a4-f6fc-46fb-b65a-ccf21a5bf115"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.897920 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7d513998-97d5-40a9-af0e-749b510e28ad" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "c5df12a4-f6fc-46fb-b65a-ccf21a5bf115" (UID: "c5df12a4-f6fc-46fb-b65a-ccf21a5bf115"). InnerVolumeSpecName "pvc-7d513998-97d5-40a9-af0e-749b510e28ad". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.906001 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-web-config" (OuterVolumeSpecName: "web-config") pod "c5df12a4-f6fc-46fb-b65a-ccf21a5bf115" (UID: "c5df12a4-f6fc-46fb-b65a-ccf21a5bf115"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.975223 4874 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.975261 4874 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.975298 4874 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-7d513998-97d5-40a9-af0e-749b510e28ad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7d513998-97d5-40a9-af0e-749b510e28ad\") on node \"crc\" " Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.975313 4874 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.975325 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:35 
crc kubenswrapper[4874]: I0217 16:23:35.975335 4874 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.975348 4874 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-web-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.975359 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn8rm\" (UniqueName: \"kubernetes.io/projected/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-kube-api-access-gn8rm\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.975368 4874 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-config-out\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.975401 4874 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:35 crc kubenswrapper[4874]: I0217 16:23:35.987916 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 17 16:23:35 crc kubenswrapper[4874]: W0217 16:23:35.996751 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac66947f_056d_4e83_bcdb_577f72ea0350.slice/crio-e0ac6c6b8341e77569fe4b2ccc09c8021e2e3a05618e4cd78919574e89f470d2 WatchSource:0}: Error finding container e0ac6c6b8341e77569fe4b2ccc09c8021e2e3a05618e4cd78919574e89f470d2: Status 404 returned error can't find the container with id 
e0ac6c6b8341e77569fe4b2ccc09c8021e2e3a05618e4cd78919574e89f470d2 Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.003448 4874 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.003633 4874 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-7d513998-97d5-40a9-af0e-749b510e28ad" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7d513998-97d5-40a9-af0e-749b510e28ad") on node "crc" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.078624 4874 reconciler_common.go:293] "Volume detached for volume \"pvc-7d513998-97d5-40a9-af0e-749b510e28ad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7d513998-97d5-40a9-af0e-749b510e28ad\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.103935 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.112950 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.130922 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:23:36 crc kubenswrapper[4874]: E0217 16:23:36.131335 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5df12a4-f6fc-46fb-b65a-ccf21a5bf115" containerName="prometheus" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.131353 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5df12a4-f6fc-46fb-b65a-ccf21a5bf115" containerName="prometheus" Feb 17 16:23:36 crc kubenswrapper[4874]: E0217 16:23:36.131372 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5df12a4-f6fc-46fb-b65a-ccf21a5bf115" containerName="config-reloader" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 
16:23:36.131379 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5df12a4-f6fc-46fb-b65a-ccf21a5bf115" containerName="config-reloader" Feb 17 16:23:36 crc kubenswrapper[4874]: E0217 16:23:36.131401 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5df12a4-f6fc-46fb-b65a-ccf21a5bf115" containerName="thanos-sidecar" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.131407 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5df12a4-f6fc-46fb-b65a-ccf21a5bf115" containerName="thanos-sidecar" Feb 17 16:23:36 crc kubenswrapper[4874]: E0217 16:23:36.131423 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5df12a4-f6fc-46fb-b65a-ccf21a5bf115" containerName="init-config-reloader" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.131429 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5df12a4-f6fc-46fb-b65a-ccf21a5bf115" containerName="init-config-reloader" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.131597 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5df12a4-f6fc-46fb-b65a-ccf21a5bf115" containerName="config-reloader" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.131613 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5df12a4-f6fc-46fb-b65a-ccf21a5bf115" containerName="prometheus" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.131633 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5df12a4-f6fc-46fb-b65a-ccf21a5bf115" containerName="thanos-sidecar" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.133342 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.139174 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.144370 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.144629 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-98kh9" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.144767 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.144888 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.145633 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.145663 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.146208 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.149365 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.154830 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.181374 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e1e887d-4629-4a8a-812f-4f6f2d101249-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"8e1e887d-4629-4a8a-812f-4f6f2d101249\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.181443 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8e1e887d-4629-4a8a-812f-4f6f2d101249-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8e1e887d-4629-4a8a-812f-4f6f2d101249\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.181486 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8e1e887d-4629-4a8a-812f-4f6f2d101249-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8e1e887d-4629-4a8a-812f-4f6f2d101249\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.181588 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8e1e887d-4629-4a8a-812f-4f6f2d101249-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8e1e887d-4629-4a8a-812f-4f6f2d101249\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.182653 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2qql\" (UniqueName: \"kubernetes.io/projected/8e1e887d-4629-4a8a-812f-4f6f2d101249-kube-api-access-g2qql\") pod \"prometheus-metric-storage-0\" 
(UID: \"8e1e887d-4629-4a8a-812f-4f6f2d101249\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.182685 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7d513998-97d5-40a9-af0e-749b510e28ad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7d513998-97d5-40a9-af0e-749b510e28ad\") pod \"prometheus-metric-storage-0\" (UID: \"8e1e887d-4629-4a8a-812f-4f6f2d101249\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.183098 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8e1e887d-4629-4a8a-812f-4f6f2d101249-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8e1e887d-4629-4a8a-812f-4f6f2d101249\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.183126 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8e1e887d-4629-4a8a-812f-4f6f2d101249-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8e1e887d-4629-4a8a-812f-4f6f2d101249\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.183164 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8e1e887d-4629-4a8a-812f-4f6f2d101249-config\") pod \"prometheus-metric-storage-0\" (UID: \"8e1e887d-4629-4a8a-812f-4f6f2d101249\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.183202 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/8e1e887d-4629-4a8a-812f-4f6f2d101249-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8e1e887d-4629-4a8a-812f-4f6f2d101249\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.183249 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8e1e887d-4629-4a8a-812f-4f6f2d101249-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8e1e887d-4629-4a8a-812f-4f6f2d101249\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.183345 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8e1e887d-4629-4a8a-812f-4f6f2d101249-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8e1e887d-4629-4a8a-812f-4f6f2d101249\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.183385 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8e1e887d-4629-4a8a-812f-4f6f2d101249-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8e1e887d-4629-4a8a-812f-4f6f2d101249\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.285762 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8e1e887d-4629-4a8a-812f-4f6f2d101249-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: 
\"8e1e887d-4629-4a8a-812f-4f6f2d101249\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.285841 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2qql\" (UniqueName: \"kubernetes.io/projected/8e1e887d-4629-4a8a-812f-4f6f2d101249-kube-api-access-g2qql\") pod \"prometheus-metric-storage-0\" (UID: \"8e1e887d-4629-4a8a-812f-4f6f2d101249\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.285868 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7d513998-97d5-40a9-af0e-749b510e28ad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7d513998-97d5-40a9-af0e-749b510e28ad\") pod \"prometheus-metric-storage-0\" (UID: \"8e1e887d-4629-4a8a-812f-4f6f2d101249\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.285912 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8e1e887d-4629-4a8a-812f-4f6f2d101249-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8e1e887d-4629-4a8a-812f-4f6f2d101249\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.285932 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8e1e887d-4629-4a8a-812f-4f6f2d101249-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8e1e887d-4629-4a8a-812f-4f6f2d101249\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.285950 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8e1e887d-4629-4a8a-812f-4f6f2d101249-config\") pod 
\"prometheus-metric-storage-0\" (UID: \"8e1e887d-4629-4a8a-812f-4f6f2d101249\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.286066 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8e1e887d-4629-4a8a-812f-4f6f2d101249-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8e1e887d-4629-4a8a-812f-4f6f2d101249\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.286720 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8e1e887d-4629-4a8a-812f-4f6f2d101249-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8e1e887d-4629-4a8a-812f-4f6f2d101249\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.286779 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8e1e887d-4629-4a8a-812f-4f6f2d101249-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8e1e887d-4629-4a8a-812f-4f6f2d101249\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.286790 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8e1e887d-4629-4a8a-812f-4f6f2d101249-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8e1e887d-4629-4a8a-812f-4f6f2d101249\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.286831 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/8e1e887d-4629-4a8a-812f-4f6f2d101249-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8e1e887d-4629-4a8a-812f-4f6f2d101249\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.286861 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8e1e887d-4629-4a8a-812f-4f6f2d101249-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8e1e887d-4629-4a8a-812f-4f6f2d101249\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.286903 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e1e887d-4629-4a8a-812f-4f6f2d101249-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"8e1e887d-4629-4a8a-812f-4f6f2d101249\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.286929 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8e1e887d-4629-4a8a-812f-4f6f2d101249-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8e1e887d-4629-4a8a-812f-4f6f2d101249\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.286946 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8e1e887d-4629-4a8a-812f-4f6f2d101249-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8e1e887d-4629-4a8a-812f-4f6f2d101249\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.287618 4874 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8e1e887d-4629-4a8a-812f-4f6f2d101249-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8e1e887d-4629-4a8a-812f-4f6f2d101249\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.290610 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8e1e887d-4629-4a8a-812f-4f6f2d101249-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8e1e887d-4629-4a8a-812f-4f6f2d101249\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.291144 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8e1e887d-4629-4a8a-812f-4f6f2d101249-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8e1e887d-4629-4a8a-812f-4f6f2d101249\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.291750 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e1e887d-4629-4a8a-812f-4f6f2d101249-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"8e1e887d-4629-4a8a-812f-4f6f2d101249\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.293453 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8e1e887d-4629-4a8a-812f-4f6f2d101249-config\") pod \"prometheus-metric-storage-0\" (UID: \"8e1e887d-4629-4a8a-812f-4f6f2d101249\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc 
kubenswrapper[4874]: I0217 16:23:36.293958 4874 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.294623 4874 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7d513998-97d5-40a9-af0e-749b510e28ad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7d513998-97d5-40a9-af0e-749b510e28ad\") pod \"prometheus-metric-storage-0\" (UID: \"8e1e887d-4629-4a8a-812f-4f6f2d101249\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bac90798cae0603f2cddffed7e2fcd4826a4a45d6415d5b4e65c98946b029a54/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.299859 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8e1e887d-4629-4a8a-812f-4f6f2d101249-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8e1e887d-4629-4a8a-812f-4f6f2d101249\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.302460 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8e1e887d-4629-4a8a-812f-4f6f2d101249-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8e1e887d-4629-4a8a-812f-4f6f2d101249\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.302690 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8e1e887d-4629-4a8a-812f-4f6f2d101249-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8e1e887d-4629-4a8a-812f-4f6f2d101249\") " 
pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.303044 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8e1e887d-4629-4a8a-812f-4f6f2d101249-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8e1e887d-4629-4a8a-812f-4f6f2d101249\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.309557 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2qql\" (UniqueName: \"kubernetes.io/projected/8e1e887d-4629-4a8a-812f-4f6f2d101249-kube-api-access-g2qql\") pod \"prometheus-metric-storage-0\" (UID: \"8e1e887d-4629-4a8a-812f-4f6f2d101249\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.358127 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7d513998-97d5-40a9-af0e-749b510e28ad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7d513998-97d5-40a9-af0e-749b510e28ad\") pod \"prometheus-metric-storage-0\" (UID: \"8e1e887d-4629-4a8a-812f-4f6f2d101249\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.467503 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.467965 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4929008-5bb8-4852-8c87-fa3203602206" path="/var/lib/kubelet/pods/c4929008-5bb8-4852-8c87-fa3203602206/volumes" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.468612 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5df12a4-f6fc-46fb-b65a-ccf21a5bf115" path="/var/lib/kubelet/pods/c5df12a4-f6fc-46fb-b65a-ccf21a5bf115/volumes" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.790705 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"ed7dc41e-9863-4c74-8675-56fca22db08a","Type":"ContainerStarted","Data":"1972bf3236984f49e02049edbedc42d4da80f95fa6301b9653139c5fe969a12b"} Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.790973 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.793829 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"ac66947f-056d-4e83-bcdb-577f72ea0350","Type":"ContainerStarted","Data":"e0ac6c6b8341e77569fe4b2ccc09c8021e2e3a05618e4cd78919574e89f470d2"} Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.816532 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=-9223371962.038263 podStartE2EDuration="1m14.816513381s" podCreationTimestamp="2026-02-17 16:22:22 +0000 UTC" firstStartedPulling="2026-02-17 16:22:24.244414126 +0000 UTC m=+1154.538802687" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:23:36.816426288 +0000 UTC m=+1227.110814859" watchObservedRunningTime="2026-02-17 16:23:36.816513381 +0000 UTC m=+1227.110901942" Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 
16:23:36.846009 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-tpgc2" podUID="4132e8e3-7498-4df0-9d6d-2dd7c096218a" containerName="ovn-controller" probeResult="failure" output=< Feb 17 16:23:36 crc kubenswrapper[4874]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 17 16:23:36 crc kubenswrapper[4874]: > Feb 17 16:23:36 crc kubenswrapper[4874]: I0217 16:23:36.890472 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-pzc25" Feb 17 16:23:37 crc kubenswrapper[4874]: I0217 16:23:37.059194 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:23:37 crc kubenswrapper[4874]: I0217 16:23:37.125245 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-tpgc2-config-9bwxr"] Feb 17 16:23:37 crc kubenswrapper[4874]: I0217 16:23:37.126802 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-tpgc2-config-9bwxr" Feb 17 16:23:37 crc kubenswrapper[4874]: I0217 16:23:37.129287 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 17 16:23:37 crc kubenswrapper[4874]: I0217 16:23:37.145918 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-tpgc2-config-9bwxr"] Feb 17 16:23:37 crc kubenswrapper[4874]: I0217 16:23:37.180144 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 17 16:23:37 crc kubenswrapper[4874]: I0217 16:23:37.203667 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51-var-run\") pod \"ovn-controller-tpgc2-config-9bwxr\" (UID: \"146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51\") " pod="openstack/ovn-controller-tpgc2-config-9bwxr" Feb 17 16:23:37 crc kubenswrapper[4874]: I0217 16:23:37.203727 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrfh9\" (UniqueName: \"kubernetes.io/projected/146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51-kube-api-access-qrfh9\") pod \"ovn-controller-tpgc2-config-9bwxr\" (UID: \"146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51\") " pod="openstack/ovn-controller-tpgc2-config-9bwxr" Feb 17 16:23:37 crc kubenswrapper[4874]: I0217 16:23:37.203798 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51-var-run-ovn\") pod \"ovn-controller-tpgc2-config-9bwxr\" (UID: \"146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51\") " pod="openstack/ovn-controller-tpgc2-config-9bwxr" Feb 17 16:23:37 crc kubenswrapper[4874]: I0217 16:23:37.203824 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51-scripts\") pod \"ovn-controller-tpgc2-config-9bwxr\" (UID: \"146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51\") " pod="openstack/ovn-controller-tpgc2-config-9bwxr" Feb 17 16:23:37 crc kubenswrapper[4874]: I0217 16:23:37.203842 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51-additional-scripts\") pod \"ovn-controller-tpgc2-config-9bwxr\" (UID: \"146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51\") " pod="openstack/ovn-controller-tpgc2-config-9bwxr" Feb 17 16:23:37 crc kubenswrapper[4874]: I0217 16:23:37.203896 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51-var-log-ovn\") pod \"ovn-controller-tpgc2-config-9bwxr\" (UID: \"146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51\") " pod="openstack/ovn-controller-tpgc2-config-9bwxr" Feb 17 16:23:37 crc kubenswrapper[4874]: I0217 16:23:37.305955 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51-var-run\") pod \"ovn-controller-tpgc2-config-9bwxr\" (UID: \"146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51\") " pod="openstack/ovn-controller-tpgc2-config-9bwxr" Feb 17 16:23:37 crc kubenswrapper[4874]: I0217 16:23:37.306013 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrfh9\" (UniqueName: \"kubernetes.io/projected/146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51-kube-api-access-qrfh9\") pod \"ovn-controller-tpgc2-config-9bwxr\" (UID: \"146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51\") " pod="openstack/ovn-controller-tpgc2-config-9bwxr" Feb 17 16:23:37 crc kubenswrapper[4874]: I0217 16:23:37.306092 4874 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51-var-run-ovn\") pod \"ovn-controller-tpgc2-config-9bwxr\" (UID: \"146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51\") " pod="openstack/ovn-controller-tpgc2-config-9bwxr" Feb 17 16:23:37 crc kubenswrapper[4874]: I0217 16:23:37.306121 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51-scripts\") pod \"ovn-controller-tpgc2-config-9bwxr\" (UID: \"146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51\") " pod="openstack/ovn-controller-tpgc2-config-9bwxr" Feb 17 16:23:37 crc kubenswrapper[4874]: I0217 16:23:37.306138 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51-additional-scripts\") pod \"ovn-controller-tpgc2-config-9bwxr\" (UID: \"146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51\") " pod="openstack/ovn-controller-tpgc2-config-9bwxr" Feb 17 16:23:37 crc kubenswrapper[4874]: I0217 16:23:37.306190 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51-var-log-ovn\") pod \"ovn-controller-tpgc2-config-9bwxr\" (UID: \"146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51\") " pod="openstack/ovn-controller-tpgc2-config-9bwxr" Feb 17 16:23:37 crc kubenswrapper[4874]: I0217 16:23:37.306363 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51-var-log-ovn\") pod \"ovn-controller-tpgc2-config-9bwxr\" (UID: \"146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51\") " pod="openstack/ovn-controller-tpgc2-config-9bwxr" Feb 17 16:23:37 crc kubenswrapper[4874]: I0217 16:23:37.306408 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51-var-run-ovn\") pod \"ovn-controller-tpgc2-config-9bwxr\" (UID: \"146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51\") " pod="openstack/ovn-controller-tpgc2-config-9bwxr" Feb 17 16:23:37 crc kubenswrapper[4874]: I0217 16:23:37.307041 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51-additional-scripts\") pod \"ovn-controller-tpgc2-config-9bwxr\" (UID: \"146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51\") " pod="openstack/ovn-controller-tpgc2-config-9bwxr" Feb 17 16:23:37 crc kubenswrapper[4874]: I0217 16:23:37.307239 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51-var-run\") pod \"ovn-controller-tpgc2-config-9bwxr\" (UID: \"146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51\") " pod="openstack/ovn-controller-tpgc2-config-9bwxr" Feb 17 16:23:37 crc kubenswrapper[4874]: I0217 16:23:37.308303 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51-scripts\") pod \"ovn-controller-tpgc2-config-9bwxr\" (UID: \"146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51\") " pod="openstack/ovn-controller-tpgc2-config-9bwxr" Feb 17 16:23:37 crc kubenswrapper[4874]: I0217 16:23:37.335555 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrfh9\" (UniqueName: \"kubernetes.io/projected/146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51-kube-api-access-qrfh9\") pod \"ovn-controller-tpgc2-config-9bwxr\" (UID: \"146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51\") " pod="openstack/ovn-controller-tpgc2-config-9bwxr" Feb 17 16:23:37 crc kubenswrapper[4874]: I0217 16:23:37.457486 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-tpgc2-config-9bwxr" Feb 17 16:23:37 crc kubenswrapper[4874]: W0217 16:23:37.859879 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e1e887d_4629_4a8a_812f_4f6f2d101249.slice/crio-2ccb3a51e4b6fbed4854d384f1ff9df4e581dccb4596fcfcfa528d607b40f233 WatchSource:0}: Error finding container 2ccb3a51e4b6fbed4854d384f1ff9df4e581dccb4596fcfcfa528d607b40f233: Status 404 returned error can't find the container with id 2ccb3a51e4b6fbed4854d384f1ff9df4e581dccb4596fcfcfa528d607b40f233 Feb 17 16:23:38 crc kubenswrapper[4874]: I0217 16:23:38.496474 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-tpgc2-config-9bwxr"] Feb 17 16:23:38 crc kubenswrapper[4874]: I0217 16:23:38.735917 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="c5df12a4-f6fc-46fb-b65a-ccf21a5bf115" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.139:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 16:23:38 crc kubenswrapper[4874]: I0217 16:23:38.811656 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8e1e887d-4629-4a8a-812f-4f6f2d101249","Type":"ContainerStarted","Data":"2ccb3a51e4b6fbed4854d384f1ff9df4e581dccb4596fcfcfa528d607b40f233"} Feb 17 16:23:38 crc kubenswrapper[4874]: I0217 16:23:38.813679 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"ac66947f-056d-4e83-bcdb-577f72ea0350","Type":"ContainerStarted","Data":"8ce67f4a16e5fed5934bb6f3a35e742f0ba7dc6cba55c72aaed09323d5622f48"} Feb 17 16:23:38 crc kubenswrapper[4874]: I0217 16:23:38.834957 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=1.9181896859999998 
podStartE2EDuration="3.834939729s" podCreationTimestamp="2026-02-17 16:23:35 +0000 UTC" firstStartedPulling="2026-02-17 16:23:36.000006506 +0000 UTC m=+1226.294395077" lastFinishedPulling="2026-02-17 16:23:37.916756559 +0000 UTC m=+1228.211145120" observedRunningTime="2026-02-17 16:23:38.828219213 +0000 UTC m=+1229.122607794" watchObservedRunningTime="2026-02-17 16:23:38.834939729 +0000 UTC m=+1229.129328290" Feb 17 16:23:39 crc kubenswrapper[4874]: I0217 16:23:39.902243 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-tvbr5"] Feb 17 16:23:39 crc kubenswrapper[4874]: I0217 16:23:39.904929 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-tvbr5" Feb 17 16:23:39 crc kubenswrapper[4874]: I0217 16:23:39.907640 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 17 16:23:39 crc kubenswrapper[4874]: I0217 16:23:39.918648 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-tvbr5"] Feb 17 16:23:40 crc kubenswrapper[4874]: I0217 16:23:40.059476 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6b971b7-2d31-4e4e-a182-234689e298be-operator-scripts\") pod \"root-account-create-update-tvbr5\" (UID: \"e6b971b7-2d31-4e4e-a182-234689e298be\") " pod="openstack/root-account-create-update-tvbr5" Feb 17 16:23:40 crc kubenswrapper[4874]: I0217 16:23:40.059859 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb888\" (UniqueName: \"kubernetes.io/projected/e6b971b7-2d31-4e4e-a182-234689e298be-kube-api-access-bb888\") pod \"root-account-create-update-tvbr5\" (UID: \"e6b971b7-2d31-4e4e-a182-234689e298be\") " pod="openstack/root-account-create-update-tvbr5" Feb 17 16:23:40 crc 
kubenswrapper[4874]: I0217 16:23:40.161741 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb888\" (UniqueName: \"kubernetes.io/projected/e6b971b7-2d31-4e4e-a182-234689e298be-kube-api-access-bb888\") pod \"root-account-create-update-tvbr5\" (UID: \"e6b971b7-2d31-4e4e-a182-234689e298be\") " pod="openstack/root-account-create-update-tvbr5" Feb 17 16:23:40 crc kubenswrapper[4874]: I0217 16:23:40.161843 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6b971b7-2d31-4e4e-a182-234689e298be-operator-scripts\") pod \"root-account-create-update-tvbr5\" (UID: \"e6b971b7-2d31-4e4e-a182-234689e298be\") " pod="openstack/root-account-create-update-tvbr5" Feb 17 16:23:40 crc kubenswrapper[4874]: I0217 16:23:40.162497 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6b971b7-2d31-4e4e-a182-234689e298be-operator-scripts\") pod \"root-account-create-update-tvbr5\" (UID: \"e6b971b7-2d31-4e4e-a182-234689e298be\") " pod="openstack/root-account-create-update-tvbr5" Feb 17 16:23:40 crc kubenswrapper[4874]: I0217 16:23:40.188988 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb888\" (UniqueName: \"kubernetes.io/projected/e6b971b7-2d31-4e4e-a182-234689e298be-kube-api-access-bb888\") pod \"root-account-create-update-tvbr5\" (UID: \"e6b971b7-2d31-4e4e-a182-234689e298be\") " pod="openstack/root-account-create-update-tvbr5" Feb 17 16:23:40 crc kubenswrapper[4874]: I0217 16:23:40.225520 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-tvbr5" Feb 17 16:23:41 crc kubenswrapper[4874]: I0217 16:23:41.809435 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-tpgc2" podUID="4132e8e3-7498-4df0-9d6d-2dd7c096218a" containerName="ovn-controller" probeResult="failure" output=< Feb 17 16:23:41 crc kubenswrapper[4874]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 17 16:23:41 crc kubenswrapper[4874]: > Feb 17 16:23:42 crc kubenswrapper[4874]: I0217 16:23:42.513224 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7fda3013-2526-48c1-ba34-9e8d1bb33e9f-etc-swift\") pod \"swift-storage-0\" (UID: \"7fda3013-2526-48c1-ba34-9e8d1bb33e9f\") " pod="openstack/swift-storage-0" Feb 17 16:23:42 crc kubenswrapper[4874]: I0217 16:23:42.555405 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7fda3013-2526-48c1-ba34-9e8d1bb33e9f-etc-swift\") pod \"swift-storage-0\" (UID: \"7fda3013-2526-48c1-ba34-9e8d1bb33e9f\") " pod="openstack/swift-storage-0" Feb 17 16:23:42 crc kubenswrapper[4874]: I0217 16:23:42.738214 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 17 16:23:42 crc kubenswrapper[4874]: I0217 16:23:42.851463 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8e1e887d-4629-4a8a-812f-4f6f2d101249","Type":"ContainerStarted","Data":"dce460c6aa9b9e4ea182dad1913a7e512cb52a50dff659cfa6fbbd19c23c48d8"} Feb 17 16:23:43 crc kubenswrapper[4874]: I0217 16:23:43.782539 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 17 16:23:43 crc kubenswrapper[4874]: I0217 16:23:43.893301 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Feb 17 16:23:43 crc kubenswrapper[4874]: I0217 16:23:43.932484 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:23:45 crc kubenswrapper[4874]: I0217 16:23:45.931273 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-tpgc2-config-9bwxr" event={"ID":"146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51","Type":"ContainerStarted","Data":"cd311346162e932e5d1fd8b6874888406d16b507658095e04e9b208484c99183"} Feb 17 16:23:46 crc kubenswrapper[4874]: I0217 16:23:46.297194 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-tvbr5"] Feb 17 16:23:46 crc kubenswrapper[4874]: I0217 16:23:46.382949 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 17 16:23:46 crc kubenswrapper[4874]: W0217 16:23:46.388437 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fda3013_2526_48c1_ba34_9e8d1bb33e9f.slice/crio-8db3d8ebfcad2572ba490d43a9d9fd77934d2d3767808506cd12a30373c706f8 WatchSource:0}: Error finding container 8db3d8ebfcad2572ba490d43a9d9fd77934d2d3767808506cd12a30373c706f8: Status 404 returned error can't find the container 
with id 8db3d8ebfcad2572ba490d43a9d9fd77934d2d3767808506cd12a30373c706f8 Feb 17 16:23:46 crc kubenswrapper[4874]: I0217 16:23:46.803751 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-tpgc2" Feb 17 16:23:46 crc kubenswrapper[4874]: I0217 16:23:46.954497 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7fda3013-2526-48c1-ba34-9e8d1bb33e9f","Type":"ContainerStarted","Data":"8db3d8ebfcad2572ba490d43a9d9fd77934d2d3767808506cd12a30373c706f8"} Feb 17 16:23:46 crc kubenswrapper[4874]: I0217 16:23:46.960161 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-x2mrg" event={"ID":"f6c4fb02-268b-4640-9a46-1f107a1fcc28","Type":"ContainerStarted","Data":"7899d237ac79dd5a613b7055e0349da2bd9a162acfbe2d11c2c8c12edf2269cb"} Feb 17 16:23:46 crc kubenswrapper[4874]: I0217 16:23:46.964219 4874 generic.go:334] "Generic (PLEG): container finished" podID="e6b971b7-2d31-4e4e-a182-234689e298be" containerID="1917ca2196cf0e8476ed23b9bba6843ad2d8da44748f5cedec22db4473e5654f" exitCode=0 Feb 17 16:23:46 crc kubenswrapper[4874]: I0217 16:23:46.964421 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tvbr5" event={"ID":"e6b971b7-2d31-4e4e-a182-234689e298be","Type":"ContainerDied","Data":"1917ca2196cf0e8476ed23b9bba6843ad2d8da44748f5cedec22db4473e5654f"} Feb 17 16:23:46 crc kubenswrapper[4874]: I0217 16:23:46.964468 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tvbr5" event={"ID":"e6b971b7-2d31-4e4e-a182-234689e298be","Type":"ContainerStarted","Data":"d79aa265bfc978718314edfeeb02180308a912543bf2978906f72168dcd391f0"} Feb 17 16:23:46 crc kubenswrapper[4874]: I0217 16:23:46.967917 4874 generic.go:334] "Generic (PLEG): container finished" podID="146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51" containerID="15990092c92445fbcc169c1a91cef0c94d89178fd15d1a451c63e9fab92a6145" 
exitCode=0 Feb 17 16:23:46 crc kubenswrapper[4874]: I0217 16:23:46.967974 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-tpgc2-config-9bwxr" event={"ID":"146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51","Type":"ContainerDied","Data":"15990092c92445fbcc169c1a91cef0c94d89178fd15d1a451c63e9fab92a6145"} Feb 17 16:23:46 crc kubenswrapper[4874]: I0217 16:23:46.985666 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-x2mrg" podStartSLOduration=2.385253437 podStartE2EDuration="19.985646472s" podCreationTimestamp="2026-02-17 16:23:27 +0000 UTC" firstStartedPulling="2026-02-17 16:23:28.260019142 +0000 UTC m=+1218.554407703" lastFinishedPulling="2026-02-17 16:23:45.860412167 +0000 UTC m=+1236.154800738" observedRunningTime="2026-02-17 16:23:46.976338444 +0000 UTC m=+1237.270727035" watchObservedRunningTime="2026-02-17 16:23:46.985646472 +0000 UTC m=+1237.280035043" Feb 17 16:23:47 crc kubenswrapper[4874]: I0217 16:23:47.984664 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7fda3013-2526-48c1-ba34-9e8d1bb33e9f","Type":"ContainerStarted","Data":"d027ecfd39d08d45f0268ac426eb006330deb0150a3dfadeb998a41ab8f869d7"} Feb 17 16:23:47 crc kubenswrapper[4874]: I0217 16:23:47.984999 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7fda3013-2526-48c1-ba34-9e8d1bb33e9f","Type":"ContainerStarted","Data":"f52c4fb662f4d8ad1d918ef7610b63df91fe17880b3e85d17c785a326b9882a0"} Feb 17 16:23:48 crc kubenswrapper[4874]: I0217 16:23:48.449241 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-tvbr5" Feb 17 16:23:48 crc kubenswrapper[4874]: I0217 16:23:48.479245 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-tpgc2-config-9bwxr" Feb 17 16:23:48 crc kubenswrapper[4874]: I0217 16:23:48.547280 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51-additional-scripts\") pod \"146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51\" (UID: \"146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51\") " Feb 17 16:23:48 crc kubenswrapper[4874]: I0217 16:23:48.547511 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51-var-run\") pod \"146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51\" (UID: \"146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51\") " Feb 17 16:23:48 crc kubenswrapper[4874]: I0217 16:23:48.547624 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51-var-log-ovn\") pod \"146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51\" (UID: \"146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51\") " Feb 17 16:23:48 crc kubenswrapper[4874]: I0217 16:23:48.547737 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrfh9\" (UniqueName: \"kubernetes.io/projected/146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51-kube-api-access-qrfh9\") pod \"146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51\" (UID: \"146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51\") " Feb 17 16:23:48 crc kubenswrapper[4874]: I0217 16:23:48.547851 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51-var-run-ovn\") pod \"146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51\" (UID: \"146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51\") " Feb 17 16:23:48 crc kubenswrapper[4874]: I0217 16:23:48.548028 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-bb888\" (UniqueName: \"kubernetes.io/projected/e6b971b7-2d31-4e4e-a182-234689e298be-kube-api-access-bb888\") pod \"e6b971b7-2d31-4e4e-a182-234689e298be\" (UID: \"e6b971b7-2d31-4e4e-a182-234689e298be\") " Feb 17 16:23:48 crc kubenswrapper[4874]: I0217 16:23:48.548219 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51-scripts\") pod \"146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51\" (UID: \"146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51\") " Feb 17 16:23:48 crc kubenswrapper[4874]: I0217 16:23:48.548308 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6b971b7-2d31-4e4e-a182-234689e298be-operator-scripts\") pod \"e6b971b7-2d31-4e4e-a182-234689e298be\" (UID: \"e6b971b7-2d31-4e4e-a182-234689e298be\") " Feb 17 16:23:48 crc kubenswrapper[4874]: I0217 16:23:48.550057 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51" (UID: "146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:48 crc kubenswrapper[4874]: I0217 16:23:48.550269 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51" (UID: "146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:23:48 crc kubenswrapper[4874]: I0217 16:23:48.550335 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51-var-run" (OuterVolumeSpecName: "var-run") pod "146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51" (UID: "146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:23:48 crc kubenswrapper[4874]: I0217 16:23:48.550358 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51" (UID: "146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:23:48 crc kubenswrapper[4874]: I0217 16:23:48.550961 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6b971b7-2d31-4e4e-a182-234689e298be-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e6b971b7-2d31-4e4e-a182-234689e298be" (UID: "e6b971b7-2d31-4e4e-a182-234689e298be"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:48 crc kubenswrapper[4874]: I0217 16:23:48.551246 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51-scripts" (OuterVolumeSpecName: "scripts") pod "146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51" (UID: "146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:48 crc kubenswrapper[4874]: I0217 16:23:48.554116 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51-kube-api-access-qrfh9" (OuterVolumeSpecName: "kube-api-access-qrfh9") pod "146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51" (UID: "146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51"). InnerVolumeSpecName "kube-api-access-qrfh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:48 crc kubenswrapper[4874]: I0217 16:23:48.555678 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6b971b7-2d31-4e4e-a182-234689e298be-kube-api-access-bb888" (OuterVolumeSpecName: "kube-api-access-bb888") pod "e6b971b7-2d31-4e4e-a182-234689e298be" (UID: "e6b971b7-2d31-4e4e-a182-234689e298be"). InnerVolumeSpecName "kube-api-access-bb888". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:48 crc kubenswrapper[4874]: I0217 16:23:48.651283 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bb888\" (UniqueName: \"kubernetes.io/projected/e6b971b7-2d31-4e4e-a182-234689e298be-kube-api-access-bb888\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:48 crc kubenswrapper[4874]: I0217 16:23:48.651338 4874 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:48 crc kubenswrapper[4874]: I0217 16:23:48.651358 4874 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6b971b7-2d31-4e4e-a182-234689e298be-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:48 crc kubenswrapper[4874]: I0217 16:23:48.651376 4874 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:48 crc kubenswrapper[4874]: I0217 16:23:48.651393 4874 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51-var-run\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:48 crc kubenswrapper[4874]: I0217 16:23:48.651409 4874 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:48 crc kubenswrapper[4874]: I0217 16:23:48.651431 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrfh9\" (UniqueName: \"kubernetes.io/projected/146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51-kube-api-access-qrfh9\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:48 crc kubenswrapper[4874]: I0217 16:23:48.651449 4874 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:49 crc kubenswrapper[4874]: I0217 16:23:49.025450 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tvbr5" event={"ID":"e6b971b7-2d31-4e4e-a182-234689e298be","Type":"ContainerDied","Data":"d79aa265bfc978718314edfeeb02180308a912543bf2978906f72168dcd391f0"} Feb 17 16:23:49 crc kubenswrapper[4874]: I0217 16:23:49.025501 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d79aa265bfc978718314edfeeb02180308a912543bf2978906f72168dcd391f0" Feb 17 16:23:49 crc kubenswrapper[4874]: I0217 16:23:49.026993 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-tvbr5" Feb 17 16:23:49 crc kubenswrapper[4874]: I0217 16:23:49.028553 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-tpgc2-config-9bwxr" Feb 17 16:23:49 crc kubenswrapper[4874]: I0217 16:23:49.028561 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-tpgc2-config-9bwxr" event={"ID":"146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51","Type":"ContainerDied","Data":"cd311346162e932e5d1fd8b6874888406d16b507658095e04e9b208484c99183"} Feb 17 16:23:49 crc kubenswrapper[4874]: I0217 16:23:49.028854 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd311346162e932e5d1fd8b6874888406d16b507658095e04e9b208484c99183" Feb 17 16:23:49 crc kubenswrapper[4874]: I0217 16:23:49.030387 4874 generic.go:334] "Generic (PLEG): container finished" podID="8e1e887d-4629-4a8a-812f-4f6f2d101249" containerID="dce460c6aa9b9e4ea182dad1913a7e512cb52a50dff659cfa6fbbd19c23c48d8" exitCode=0 Feb 17 16:23:49 crc kubenswrapper[4874]: I0217 16:23:49.030463 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8e1e887d-4629-4a8a-812f-4f6f2d101249","Type":"ContainerDied","Data":"dce460c6aa9b9e4ea182dad1913a7e512cb52a50dff659cfa6fbbd19c23c48d8"} Feb 17 16:23:49 crc kubenswrapper[4874]: I0217 16:23:49.034991 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7fda3013-2526-48c1-ba34-9e8d1bb33e9f","Type":"ContainerStarted","Data":"e67c9fbd9d8f7c63401cdd0b1b1fc67fb20704c077052251949a62b605248785"} Feb 17 16:23:49 crc kubenswrapper[4874]: I0217 16:23:49.035049 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7fda3013-2526-48c1-ba34-9e8d1bb33e9f","Type":"ContainerStarted","Data":"2118f5e484067262c128796367a8679b8877e08f2f2c8dd563a58644c1f1b501"} Feb 17 16:23:49 crc 
kubenswrapper[4874]: I0217 16:23:49.586037 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-tpgc2-config-9bwxr"] Feb 17 16:23:49 crc kubenswrapper[4874]: I0217 16:23:49.595969 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-tpgc2-config-9bwxr"] Feb 17 16:23:50 crc kubenswrapper[4874]: I0217 16:23:50.049254 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8e1e887d-4629-4a8a-812f-4f6f2d101249","Type":"ContainerStarted","Data":"626d55fb207a739bca91b27f53e612b99871ee9de6b5bb8e3fa798976f5ecca0"} Feb 17 16:23:50 crc kubenswrapper[4874]: I0217 16:23:50.481821 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51" path="/var/lib/kubelet/pods/146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51/volumes" Feb 17 16:23:52 crc kubenswrapper[4874]: I0217 16:23:52.075484 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7fda3013-2526-48c1-ba34-9e8d1bb33e9f","Type":"ContainerStarted","Data":"f4ef10a701f54d0b051b9229460f2b30d9e3a29f7d5f5c3815806cb571cbd77a"} Feb 17 16:23:52 crc kubenswrapper[4874]: I0217 16:23:52.075960 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7fda3013-2526-48c1-ba34-9e8d1bb33e9f","Type":"ContainerStarted","Data":"b96feb475fab431e5503a1344c39355b8544da39cbf02c7068ae2524fb2e8119"} Feb 17 16:23:52 crc kubenswrapper[4874]: I0217 16:23:52.075976 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7fda3013-2526-48c1-ba34-9e8d1bb33e9f","Type":"ContainerStarted","Data":"03e8650ef2da521878867d5a402846e2c735622e41517d904f11c8f5a02dfa1c"} Feb 17 16:23:52 crc kubenswrapper[4874]: I0217 16:23:52.075989 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"7fda3013-2526-48c1-ba34-9e8d1bb33e9f","Type":"ContainerStarted","Data":"edb0322a26da99e536b8dd326f94face831f5d2e0bb3c9c5ab4e0e023756ce25"} Feb 17 16:23:53 crc kubenswrapper[4874]: I0217 16:23:53.560707 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Feb 17 16:23:53 crc kubenswrapper[4874]: I0217 16:23:53.908022 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-fh6cg"] Feb 17 16:23:53 crc kubenswrapper[4874]: E0217 16:23:53.915561 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6b971b7-2d31-4e4e-a182-234689e298be" containerName="mariadb-account-create-update" Feb 17 16:23:53 crc kubenswrapper[4874]: I0217 16:23:53.915582 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6b971b7-2d31-4e4e-a182-234689e298be" containerName="mariadb-account-create-update" Feb 17 16:23:53 crc kubenswrapper[4874]: E0217 16:23:53.915621 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51" containerName="ovn-config" Feb 17 16:23:53 crc kubenswrapper[4874]: I0217 16:23:53.915628 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51" containerName="ovn-config" Feb 17 16:23:53 crc kubenswrapper[4874]: I0217 16:23:53.915811 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6b971b7-2d31-4e4e-a182-234689e298be" containerName="mariadb-account-create-update" Feb 17 16:23:53 crc kubenswrapper[4874]: I0217 16:23:53.915829 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="146bdac8-10ab-4e2f-a7b6-e2fd4edc8c51" containerName="ovn-config" Feb 17 16:23:53 crc kubenswrapper[4874]: I0217 16:23:53.916552 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-fh6cg" Feb 17 16:23:53 crc kubenswrapper[4874]: I0217 16:23:53.938955 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-fh6cg"] Feb 17 16:23:53 crc kubenswrapper[4874]: I0217 16:23:53.971216 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e865ad98-6d8f-4a54-9717-10028d7c52d1-operator-scripts\") pod \"cinder-db-create-fh6cg\" (UID: \"e865ad98-6d8f-4a54-9717-10028d7c52d1\") " pod="openstack/cinder-db-create-fh6cg" Feb 17 16:23:53 crc kubenswrapper[4874]: I0217 16:23:53.971300 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhplh\" (UniqueName: \"kubernetes.io/projected/e865ad98-6d8f-4a54-9717-10028d7c52d1-kube-api-access-zhplh\") pod \"cinder-db-create-fh6cg\" (UID: \"e865ad98-6d8f-4a54-9717-10028d7c52d1\") " pod="openstack/cinder-db-create-fh6cg" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.072659 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhplh\" (UniqueName: \"kubernetes.io/projected/e865ad98-6d8f-4a54-9717-10028d7c52d1-kube-api-access-zhplh\") pod \"cinder-db-create-fh6cg\" (UID: \"e865ad98-6d8f-4a54-9717-10028d7c52d1\") " pod="openstack/cinder-db-create-fh6cg" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.072847 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e865ad98-6d8f-4a54-9717-10028d7c52d1-operator-scripts\") pod \"cinder-db-create-fh6cg\" (UID: \"e865ad98-6d8f-4a54-9717-10028d7c52d1\") " pod="openstack/cinder-db-create-fh6cg" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.073680 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/e865ad98-6d8f-4a54-9717-10028d7c52d1-operator-scripts\") pod \"cinder-db-create-fh6cg\" (UID: \"e865ad98-6d8f-4a54-9717-10028d7c52d1\") " pod="openstack/cinder-db-create-fh6cg" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.097280 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhplh\" (UniqueName: \"kubernetes.io/projected/e865ad98-6d8f-4a54-9717-10028d7c52d1-kube-api-access-zhplh\") pod \"cinder-db-create-fh6cg\" (UID: \"e865ad98-6d8f-4a54-9717-10028d7c52d1\") " pod="openstack/cinder-db-create-fh6cg" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.098991 4874 generic.go:334] "Generic (PLEG): container finished" podID="f6c4fb02-268b-4640-9a46-1f107a1fcc28" containerID="7899d237ac79dd5a613b7055e0349da2bd9a162acfbe2d11c2c8c12edf2269cb" exitCode=0 Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.099055 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-x2mrg" event={"ID":"f6c4fb02-268b-4640-9a46-1f107a1fcc28","Type":"ContainerDied","Data":"7899d237ac79dd5a613b7055e0349da2bd9a162acfbe2d11c2c8c12edf2269cb"} Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.102802 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8e1e887d-4629-4a8a-812f-4f6f2d101249","Type":"ContainerStarted","Data":"7d85477f4140de07bac0043b93c738060d6b342dab263316618c55c171666f6d"} Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.102838 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8e1e887d-4629-4a8a-812f-4f6f2d101249","Type":"ContainerStarted","Data":"437afb58987960d98c72c468513d036268a645038cb23e2b76ef22de4851134c"} Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.107682 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"7fda3013-2526-48c1-ba34-9e8d1bb33e9f","Type":"ContainerStarted","Data":"1c754bdd6beceeae8c125799a356f842a1a7b1f8af262121fc42708906e130ab"} Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.107720 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7fda3013-2526-48c1-ba34-9e8d1bb33e9f","Type":"ContainerStarted","Data":"0d795a2f815e0f94ababed16f73006006ae695d3c73f30983db533fc7a07c203"} Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.203063 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-r6kzp"] Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.204367 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-r6kzp" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.212049 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-e1c7-account-create-update-cfrvb"] Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.213377 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-e1c7-account-create-update-cfrvb" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.217590 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.239587 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-e1c7-account-create-update-cfrvb"] Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.242870 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=18.242853569 podStartE2EDuration="18.242853569s" podCreationTimestamp="2026-02-17 16:23:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:23:54.217469999 +0000 UTC m=+1244.511858570" watchObservedRunningTime="2026-02-17 16:23:54.242853569 +0000 UTC m=+1244.537242130" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.243308 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fh6cg" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.294136 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-r6kzp"] Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.335112 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-14b5-account-create-update-jtbph"] Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.336657 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-14b5-account-create-update-jtbph" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.342395 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.367142 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-14b5-account-create-update-jtbph"] Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.379373 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2swvn\" (UniqueName: \"kubernetes.io/projected/3a9b479f-3960-4878-a2a9-48ac751b4149-kube-api-access-2swvn\") pod \"heat-e1c7-account-create-update-cfrvb\" (UID: \"3a9b479f-3960-4878-a2a9-48ac751b4149\") " pod="openstack/heat-e1c7-account-create-update-cfrvb" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.379539 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a9b479f-3960-4878-a2a9-48ac751b4149-operator-scripts\") pod \"heat-e1c7-account-create-update-cfrvb\" (UID: \"3a9b479f-3960-4878-a2a9-48ac751b4149\") " pod="openstack/heat-e1c7-account-create-update-cfrvb" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.379595 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb3d3d3a-23a3-420e-9651-edf451bc3606-operator-scripts\") pod \"heat-db-create-r6kzp\" (UID: \"fb3d3d3a-23a3-420e-9651-edf451bc3606\") " pod="openstack/heat-db-create-r6kzp" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.379790 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9bl9\" (UniqueName: \"kubernetes.io/projected/fb3d3d3a-23a3-420e-9651-edf451bc3606-kube-api-access-q9bl9\") pod 
\"heat-db-create-r6kzp\" (UID: \"fb3d3d3a-23a3-420e-9651-edf451bc3606\") " pod="openstack/heat-db-create-r6kzp" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.482414 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b905f7a7-368c-492c-b4ad-63bcc5cd9e0f-operator-scripts\") pod \"cinder-14b5-account-create-update-jtbph\" (UID: \"b905f7a7-368c-492c-b4ad-63bcc5cd9e0f\") " pod="openstack/cinder-14b5-account-create-update-jtbph" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.482469 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njb69\" (UniqueName: \"kubernetes.io/projected/b905f7a7-368c-492c-b4ad-63bcc5cd9e0f-kube-api-access-njb69\") pod \"cinder-14b5-account-create-update-jtbph\" (UID: \"b905f7a7-368c-492c-b4ad-63bcc5cd9e0f\") " pod="openstack/cinder-14b5-account-create-update-jtbph" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.482517 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9bl9\" (UniqueName: \"kubernetes.io/projected/fb3d3d3a-23a3-420e-9651-edf451bc3606-kube-api-access-q9bl9\") pod \"heat-db-create-r6kzp\" (UID: \"fb3d3d3a-23a3-420e-9651-edf451bc3606\") " pod="openstack/heat-db-create-r6kzp" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.482576 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2swvn\" (UniqueName: \"kubernetes.io/projected/3a9b479f-3960-4878-a2a9-48ac751b4149-kube-api-access-2swvn\") pod \"heat-e1c7-account-create-update-cfrvb\" (UID: \"3a9b479f-3960-4878-a2a9-48ac751b4149\") " pod="openstack/heat-e1c7-account-create-update-cfrvb" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.482649 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/3a9b479f-3960-4878-a2a9-48ac751b4149-operator-scripts\") pod \"heat-e1c7-account-create-update-cfrvb\" (UID: \"3a9b479f-3960-4878-a2a9-48ac751b4149\") " pod="openstack/heat-e1c7-account-create-update-cfrvb" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.482671 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb3d3d3a-23a3-420e-9651-edf451bc3606-operator-scripts\") pod \"heat-db-create-r6kzp\" (UID: \"fb3d3d3a-23a3-420e-9651-edf451bc3606\") " pod="openstack/heat-db-create-r6kzp" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.486705 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a9b479f-3960-4878-a2a9-48ac751b4149-operator-scripts\") pod \"heat-e1c7-account-create-update-cfrvb\" (UID: \"3a9b479f-3960-4878-a2a9-48ac751b4149\") " pod="openstack/heat-e1c7-account-create-update-cfrvb" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.494247 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-56cvq"] Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.495968 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb3d3d3a-23a3-420e-9651-edf451bc3606-operator-scripts\") pod \"heat-db-create-r6kzp\" (UID: \"fb3d3d3a-23a3-420e-9651-edf451bc3606\") " pod="openstack/heat-db-create-r6kzp" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.496029 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-56cvq" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.520269 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2swvn\" (UniqueName: \"kubernetes.io/projected/3a9b479f-3960-4878-a2a9-48ac751b4149-kube-api-access-2swvn\") pod \"heat-e1c7-account-create-update-cfrvb\" (UID: \"3a9b479f-3960-4878-a2a9-48ac751b4149\") " pod="openstack/heat-e1c7-account-create-update-cfrvb" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.524254 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9bl9\" (UniqueName: \"kubernetes.io/projected/fb3d3d3a-23a3-420e-9651-edf451bc3606-kube-api-access-q9bl9\") pod \"heat-db-create-r6kzp\" (UID: \"fb3d3d3a-23a3-420e-9651-edf451bc3606\") " pod="openstack/heat-db-create-r6kzp" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.530147 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-56cvq"] Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.533570 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-e1c7-account-create-update-cfrvb" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.584374 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3aea93a-b865-4e18-bb2e-b2dc7d6f821a-operator-scripts\") pod \"neutron-db-create-56cvq\" (UID: \"c3aea93a-b865-4e18-bb2e-b2dc7d6f821a\") " pod="openstack/neutron-db-create-56cvq" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.584448 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xhbg\" (UniqueName: \"kubernetes.io/projected/c3aea93a-b865-4e18-bb2e-b2dc7d6f821a-kube-api-access-8xhbg\") pod \"neutron-db-create-56cvq\" (UID: \"c3aea93a-b865-4e18-bb2e-b2dc7d6f821a\") " pod="openstack/neutron-db-create-56cvq" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.584553 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b905f7a7-368c-492c-b4ad-63bcc5cd9e0f-operator-scripts\") pod \"cinder-14b5-account-create-update-jtbph\" (UID: \"b905f7a7-368c-492c-b4ad-63bcc5cd9e0f\") " pod="openstack/cinder-14b5-account-create-update-jtbph" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.584725 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njb69\" (UniqueName: \"kubernetes.io/projected/b905f7a7-368c-492c-b4ad-63bcc5cd9e0f-kube-api-access-njb69\") pod \"cinder-14b5-account-create-update-jtbph\" (UID: \"b905f7a7-368c-492c-b4ad-63bcc5cd9e0f\") " pod="openstack/cinder-14b5-account-create-update-jtbph" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.591022 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b905f7a7-368c-492c-b4ad-63bcc5cd9e0f-operator-scripts\") pod 
\"cinder-14b5-account-create-update-jtbph\" (UID: \"b905f7a7-368c-492c-b4ad-63bcc5cd9e0f\") " pod="openstack/cinder-14b5-account-create-update-jtbph" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.612731 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njb69\" (UniqueName: \"kubernetes.io/projected/b905f7a7-368c-492c-b4ad-63bcc5cd9e0f-kube-api-access-njb69\") pod \"cinder-14b5-account-create-update-jtbph\" (UID: \"b905f7a7-368c-492c-b4ad-63bcc5cd9e0f\") " pod="openstack/cinder-14b5-account-create-update-jtbph" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.618731 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-7btxx"] Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.621278 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-7btxx" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.633853 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.634113 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7tb22" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.634265 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.634413 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.658041 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-bj96s"] Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.685289 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-bj96s" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.711293 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3aea93a-b865-4e18-bb2e-b2dc7d6f821a-operator-scripts\") pod \"neutron-db-create-56cvq\" (UID: \"c3aea93a-b865-4e18-bb2e-b2dc7d6f821a\") " pod="openstack/neutron-db-create-56cvq" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.711419 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xhbg\" (UniqueName: \"kubernetes.io/projected/c3aea93a-b865-4e18-bb2e-b2dc7d6f821a-kube-api-access-8xhbg\") pod \"neutron-db-create-56cvq\" (UID: \"c3aea93a-b865-4e18-bb2e-b2dc7d6f821a\") " pod="openstack/neutron-db-create-56cvq" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.712516 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-14b5-account-create-update-jtbph" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.712792 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3aea93a-b865-4e18-bb2e-b2dc7d6f821a-operator-scripts\") pod \"neutron-db-create-56cvq\" (UID: \"c3aea93a-b865-4e18-bb2e-b2dc7d6f821a\") " pod="openstack/neutron-db-create-56cvq" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.734403 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-d557-account-create-update-jvmth"] Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.740324 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-d557-account-create-update-jvmth" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.743385 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.764792 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xhbg\" (UniqueName: \"kubernetes.io/projected/c3aea93a-b865-4e18-bb2e-b2dc7d6f821a-kube-api-access-8xhbg\") pod \"neutron-db-create-56cvq\" (UID: \"c3aea93a-b865-4e18-bb2e-b2dc7d6f821a\") " pod="openstack/neutron-db-create-56cvq" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.764857 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-bj96s"] Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.815218 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-7btxx"] Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.816518 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41f01982-4445-4662-998f-bc618d020727-combined-ca-bundle\") pod \"keystone-db-sync-7btxx\" (UID: \"41f01982-4445-4662-998f-bc618d020727\") " pod="openstack/keystone-db-sync-7btxx" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.816589 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41f01982-4445-4662-998f-bc618d020727-config-data\") pod \"keystone-db-sync-7btxx\" (UID: \"41f01982-4445-4662-998f-bc618d020727\") " pod="openstack/keystone-db-sync-7btxx" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.816647 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvjrt\" (UniqueName: 
\"kubernetes.io/projected/41f01982-4445-4662-998f-bc618d020727-kube-api-access-tvjrt\") pod \"keystone-db-sync-7btxx\" (UID: \"41f01982-4445-4662-998f-bc618d020727\") " pod="openstack/keystone-db-sync-7btxx" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.816664 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8cxl\" (UniqueName: \"kubernetes.io/projected/34c21838-f8c0-4d47-8ccf-a92ff6452532-kube-api-access-w8cxl\") pod \"barbican-db-create-bj96s\" (UID: \"34c21838-f8c0-4d47-8ccf-a92ff6452532\") " pod="openstack/barbican-db-create-bj96s" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.816802 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34c21838-f8c0-4d47-8ccf-a92ff6452532-operator-scripts\") pod \"barbican-db-create-bj96s\" (UID: \"34c21838-f8c0-4d47-8ccf-a92ff6452532\") " pod="openstack/barbican-db-create-bj96s" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.823776 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-r6kzp" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.831207 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-d557-account-create-update-jvmth"] Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.848901 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7b55-account-create-update-cs68x"] Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.860177 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-56cvq" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.866943 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7b55-account-create-update-cs68x"] Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.867027 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7b55-account-create-update-cs68x" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.877194 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.924894 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88jvf\" (UniqueName: \"kubernetes.io/projected/29f331e0-01bd-4693-a5fd-46739a5ddec4-kube-api-access-88jvf\") pod \"barbican-d557-account-create-update-jvmth\" (UID: \"29f331e0-01bd-4693-a5fd-46739a5ddec4\") " pod="openstack/barbican-d557-account-create-update-jvmth" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.925197 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvjrt\" (UniqueName: \"kubernetes.io/projected/41f01982-4445-4662-998f-bc618d020727-kube-api-access-tvjrt\") pod \"keystone-db-sync-7btxx\" (UID: \"41f01982-4445-4662-998f-bc618d020727\") " pod="openstack/keystone-db-sync-7btxx" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.925235 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8cxl\" (UniqueName: \"kubernetes.io/projected/34c21838-f8c0-4d47-8ccf-a92ff6452532-kube-api-access-w8cxl\") pod \"barbican-db-create-bj96s\" (UID: \"34c21838-f8c0-4d47-8ccf-a92ff6452532\") " pod="openstack/barbican-db-create-bj96s" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.925444 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/34c21838-f8c0-4d47-8ccf-a92ff6452532-operator-scripts\") pod \"barbican-db-create-bj96s\" (UID: \"34c21838-f8c0-4d47-8ccf-a92ff6452532\") " pod="openstack/barbican-db-create-bj96s" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.925499 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29f331e0-01bd-4693-a5fd-46739a5ddec4-operator-scripts\") pod \"barbican-d557-account-create-update-jvmth\" (UID: \"29f331e0-01bd-4693-a5fd-46739a5ddec4\") " pod="openstack/barbican-d557-account-create-update-jvmth" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.925756 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41f01982-4445-4662-998f-bc618d020727-combined-ca-bundle\") pod \"keystone-db-sync-7btxx\" (UID: \"41f01982-4445-4662-998f-bc618d020727\") " pod="openstack/keystone-db-sync-7btxx" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.925887 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41f01982-4445-4662-998f-bc618d020727-config-data\") pod \"keystone-db-sync-7btxx\" (UID: \"41f01982-4445-4662-998f-bc618d020727\") " pod="openstack/keystone-db-sync-7btxx" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.943345 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34c21838-f8c0-4d47-8ccf-a92ff6452532-operator-scripts\") pod \"barbican-db-create-bj96s\" (UID: \"34c21838-f8c0-4d47-8ccf-a92ff6452532\") " pod="openstack/barbican-db-create-bj96s" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.943974 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/41f01982-4445-4662-998f-bc618d020727-config-data\") pod \"keystone-db-sync-7btxx\" (UID: \"41f01982-4445-4662-998f-bc618d020727\") " pod="openstack/keystone-db-sync-7btxx" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.954335 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41f01982-4445-4662-998f-bc618d020727-combined-ca-bundle\") pod \"keystone-db-sync-7btxx\" (UID: \"41f01982-4445-4662-998f-bc618d020727\") " pod="openstack/keystone-db-sync-7btxx" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.963398 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8cxl\" (UniqueName: \"kubernetes.io/projected/34c21838-f8c0-4d47-8ccf-a92ff6452532-kube-api-access-w8cxl\") pod \"barbican-db-create-bj96s\" (UID: \"34c21838-f8c0-4d47-8ccf-a92ff6452532\") " pod="openstack/barbican-db-create-bj96s" Feb 17 16:23:54 crc kubenswrapper[4874]: I0217 16:23:54.992408 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvjrt\" (UniqueName: \"kubernetes.io/projected/41f01982-4445-4662-998f-bc618d020727-kube-api-access-tvjrt\") pod \"keystone-db-sync-7btxx\" (UID: \"41f01982-4445-4662-998f-bc618d020727\") " pod="openstack/keystone-db-sync-7btxx" Feb 17 16:23:55 crc kubenswrapper[4874]: I0217 16:23:55.032585 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93678eb9-19c1-490b-aa7a-d07e21f6ab56-operator-scripts\") pod \"neutron-7b55-account-create-update-cs68x\" (UID: \"93678eb9-19c1-490b-aa7a-d07e21f6ab56\") " pod="openstack/neutron-7b55-account-create-update-cs68x" Feb 17 16:23:55 crc kubenswrapper[4874]: I0217 16:23:55.032840 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/29f331e0-01bd-4693-a5fd-46739a5ddec4-operator-scripts\") pod \"barbican-d557-account-create-update-jvmth\" (UID: \"29f331e0-01bd-4693-a5fd-46739a5ddec4\") " pod="openstack/barbican-d557-account-create-update-jvmth" Feb 17 16:23:55 crc kubenswrapper[4874]: I0217 16:23:55.032933 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jml4c\" (UniqueName: \"kubernetes.io/projected/93678eb9-19c1-490b-aa7a-d07e21f6ab56-kube-api-access-jml4c\") pod \"neutron-7b55-account-create-update-cs68x\" (UID: \"93678eb9-19c1-490b-aa7a-d07e21f6ab56\") " pod="openstack/neutron-7b55-account-create-update-cs68x" Feb 17 16:23:55 crc kubenswrapper[4874]: I0217 16:23:55.033010 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88jvf\" (UniqueName: \"kubernetes.io/projected/29f331e0-01bd-4693-a5fd-46739a5ddec4-kube-api-access-88jvf\") pod \"barbican-d557-account-create-update-jvmth\" (UID: \"29f331e0-01bd-4693-a5fd-46739a5ddec4\") " pod="openstack/barbican-d557-account-create-update-jvmth" Feb 17 16:23:55 crc kubenswrapper[4874]: I0217 16:23:55.033910 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29f331e0-01bd-4693-a5fd-46739a5ddec4-operator-scripts\") pod \"barbican-d557-account-create-update-jvmth\" (UID: \"29f331e0-01bd-4693-a5fd-46739a5ddec4\") " pod="openstack/barbican-d557-account-create-update-jvmth" Feb 17 16:23:55 crc kubenswrapper[4874]: I0217 16:23:55.063100 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88jvf\" (UniqueName: \"kubernetes.io/projected/29f331e0-01bd-4693-a5fd-46739a5ddec4-kube-api-access-88jvf\") pod \"barbican-d557-account-create-update-jvmth\" (UID: \"29f331e0-01bd-4693-a5fd-46739a5ddec4\") " pod="openstack/barbican-d557-account-create-update-jvmth" Feb 17 16:23:55 crc kubenswrapper[4874]: I0217 
16:23:55.115395 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-fh6cg"] Feb 17 16:23:55 crc kubenswrapper[4874]: I0217 16:23:55.134777 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jml4c\" (UniqueName: \"kubernetes.io/projected/93678eb9-19c1-490b-aa7a-d07e21f6ab56-kube-api-access-jml4c\") pod \"neutron-7b55-account-create-update-cs68x\" (UID: \"93678eb9-19c1-490b-aa7a-d07e21f6ab56\") " pod="openstack/neutron-7b55-account-create-update-cs68x" Feb 17 16:23:55 crc kubenswrapper[4874]: I0217 16:23:55.135363 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93678eb9-19c1-490b-aa7a-d07e21f6ab56-operator-scripts\") pod \"neutron-7b55-account-create-update-cs68x\" (UID: \"93678eb9-19c1-490b-aa7a-d07e21f6ab56\") " pod="openstack/neutron-7b55-account-create-update-cs68x" Feb 17 16:23:55 crc kubenswrapper[4874]: I0217 16:23:55.136572 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93678eb9-19c1-490b-aa7a-d07e21f6ab56-operator-scripts\") pod \"neutron-7b55-account-create-update-cs68x\" (UID: \"93678eb9-19c1-490b-aa7a-d07e21f6ab56\") " pod="openstack/neutron-7b55-account-create-update-cs68x" Feb 17 16:23:55 crc kubenswrapper[4874]: I0217 16:23:55.143610 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7fda3013-2526-48c1-ba34-9e8d1bb33e9f","Type":"ContainerStarted","Data":"7a724c78253315cedbbec3ecbee885ea82b7bda66252619f6f39b63755ef68fd"} Feb 17 16:23:55 crc kubenswrapper[4874]: W0217 16:23:55.157977 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode865ad98_6d8f_4a54_9717_10028d7c52d1.slice/crio-0e1988e927394772cbda98a3aab5b9f2151c415f4e74d00ea22fd6782ff7d4cf WatchSource:0}: Error 
finding container 0e1988e927394772cbda98a3aab5b9f2151c415f4e74d00ea22fd6782ff7d4cf: Status 404 returned error can't find the container with id 0e1988e927394772cbda98a3aab5b9f2151c415f4e74d00ea22fd6782ff7d4cf Feb 17 16:23:55 crc kubenswrapper[4874]: I0217 16:23:55.164122 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jml4c\" (UniqueName: \"kubernetes.io/projected/93678eb9-19c1-490b-aa7a-d07e21f6ab56-kube-api-access-jml4c\") pod \"neutron-7b55-account-create-update-cs68x\" (UID: \"93678eb9-19c1-490b-aa7a-d07e21f6ab56\") " pod="openstack/neutron-7b55-account-create-update-cs68x" Feb 17 16:23:55 crc kubenswrapper[4874]: I0217 16:23:55.231116 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-7btxx" Feb 17 16:23:55 crc kubenswrapper[4874]: I0217 16:23:55.265598 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-bj96s" Feb 17 16:23:55 crc kubenswrapper[4874]: I0217 16:23:55.304574 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d557-account-create-update-jvmth" Feb 17 16:23:55 crc kubenswrapper[4874]: I0217 16:23:55.316885 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-e1c7-account-create-update-cfrvb"] Feb 17 16:23:55 crc kubenswrapper[4874]: I0217 16:23:55.318590 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7b55-account-create-update-cs68x" Feb 17 16:23:55 crc kubenswrapper[4874]: I0217 16:23:55.477268 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-14b5-account-create-update-jtbph"] Feb 17 16:23:55 crc kubenswrapper[4874]: W0217 16:23:55.490763 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb905f7a7_368c_492c_b4ad_63bcc5cd9e0f.slice/crio-074c6050f0c1280369d3c5a7f327ed9483041f24cb8d149a905741071346a806 WatchSource:0}: Error finding container 074c6050f0c1280369d3c5a7f327ed9483041f24cb8d149a905741071346a806: Status 404 returned error can't find the container with id 074c6050f0c1280369d3c5a7f327ed9483041f24cb8d149a905741071346a806 Feb 17 16:23:55 crc kubenswrapper[4874]: I0217 16:23:55.641137 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-56cvq"] Feb 17 16:23:55 crc kubenswrapper[4874]: I0217 16:23:55.648314 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-r6kzp"] Feb 17 16:23:55 crc kubenswrapper[4874]: I0217 16:23:55.943146 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-7btxx"] Feb 17 16:23:55 crc kubenswrapper[4874]: I0217 16:23:55.962112 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-x2mrg" Feb 17 16:23:55 crc kubenswrapper[4874]: I0217 16:23:55.964687 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-bj96s"] Feb 17 16:23:56 crc kubenswrapper[4874]: E0217 16:23:56.003530 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode865ad98_6d8f_4a54_9717_10028d7c52d1.slice/crio-conmon-689c6f4fd9ab93adcabc60f6a2b1efa52bb20cdf1ff62d1be4b68ef6f7d1475c.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.059969 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f6c4fb02-268b-4640-9a46-1f107a1fcc28-db-sync-config-data\") pod \"f6c4fb02-268b-4640-9a46-1f107a1fcc28\" (UID: \"f6c4fb02-268b-4640-9a46-1f107a1fcc28\") " Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.060070 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjs4b\" (UniqueName: \"kubernetes.io/projected/f6c4fb02-268b-4640-9a46-1f107a1fcc28-kube-api-access-hjs4b\") pod \"f6c4fb02-268b-4640-9a46-1f107a1fcc28\" (UID: \"f6c4fb02-268b-4640-9a46-1f107a1fcc28\") " Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.060211 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6c4fb02-268b-4640-9a46-1f107a1fcc28-combined-ca-bundle\") pod \"f6c4fb02-268b-4640-9a46-1f107a1fcc28\" (UID: \"f6c4fb02-268b-4640-9a46-1f107a1fcc28\") " Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.060253 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6c4fb02-268b-4640-9a46-1f107a1fcc28-config-data\") pod 
\"f6c4fb02-268b-4640-9a46-1f107a1fcc28\" (UID: \"f6c4fb02-268b-4640-9a46-1f107a1fcc28\") " Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.068484 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6c4fb02-268b-4640-9a46-1f107a1fcc28-kube-api-access-hjs4b" (OuterVolumeSpecName: "kube-api-access-hjs4b") pod "f6c4fb02-268b-4640-9a46-1f107a1fcc28" (UID: "f6c4fb02-268b-4640-9a46-1f107a1fcc28"). InnerVolumeSpecName "kube-api-access-hjs4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.070371 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6c4fb02-268b-4640-9a46-1f107a1fcc28-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f6c4fb02-268b-4640-9a46-1f107a1fcc28" (UID: "f6c4fb02-268b-4640-9a46-1f107a1fcc28"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.095255 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6c4fb02-268b-4640-9a46-1f107a1fcc28-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f6c4fb02-268b-4640-9a46-1f107a1fcc28" (UID: "f6c4fb02-268b-4640-9a46-1f107a1fcc28"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.129551 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6c4fb02-268b-4640-9a46-1f107a1fcc28-config-data" (OuterVolumeSpecName: "config-data") pod "f6c4fb02-268b-4640-9a46-1f107a1fcc28" (UID: "f6c4fb02-268b-4640-9a46-1f107a1fcc28"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.183276 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-x2mrg" event={"ID":"f6c4fb02-268b-4640-9a46-1f107a1fcc28","Type":"ContainerDied","Data":"fcd8935ce414ea5b268ec79fd72e286fad9a602f5c7aaf1ec2030feaf69e266e"} Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.185112 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcd8935ce414ea5b268ec79fd72e286fad9a602f5c7aaf1ec2030feaf69e266e" Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.184207 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-x2mrg" Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.185501 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6c4fb02-268b-4640-9a46-1f107a1fcc28-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.187700 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6c4fb02-268b-4640-9a46-1f107a1fcc28-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.189259 4874 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f6c4fb02-268b-4640-9a46-1f107a1fcc28-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.189283 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjs4b\" (UniqueName: \"kubernetes.io/projected/f6c4fb02-268b-4640-9a46-1f107a1fcc28-kube-api-access-hjs4b\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.188425 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-7b55-account-create-update-cs68x"] Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.189321 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bj96s" event={"ID":"34c21838-f8c0-4d47-8ccf-a92ff6452532","Type":"ContainerStarted","Data":"5665e92d0975554f58301e0e9dc63c715ba30936b01700a0105e9fa38cfe9871"} Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.197761 4874 generic.go:334] "Generic (PLEG): container finished" podID="e865ad98-6d8f-4a54-9717-10028d7c52d1" containerID="689c6f4fd9ab93adcabc60f6a2b1efa52bb20cdf1ff62d1be4b68ef6f7d1475c" exitCode=0 Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.197860 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fh6cg" event={"ID":"e865ad98-6d8f-4a54-9717-10028d7c52d1","Type":"ContainerDied","Data":"689c6f4fd9ab93adcabc60f6a2b1efa52bb20cdf1ff62d1be4b68ef6f7d1475c"} Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.197903 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fh6cg" event={"ID":"e865ad98-6d8f-4a54-9717-10028d7c52d1","Type":"ContainerStarted","Data":"0e1988e927394772cbda98a3aab5b9f2151c415f4e74d00ea22fd6782ff7d4cf"} Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.211891 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-r6kzp" event={"ID":"fb3d3d3a-23a3-420e-9651-edf451bc3606","Type":"ContainerStarted","Data":"691c025d1656ce48567e0847b1656a4d20a447d9cec5982f62204414ebe636e0"} Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.212301 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-r6kzp" event={"ID":"fb3d3d3a-23a3-420e-9651-edf451bc3606","Type":"ContainerStarted","Data":"090dd0194bdf3cc60e3a3e40f4884c8838b7de6b259e4b60227fc9e2b8931f79"} Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.218028 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-d557-account-create-update-jvmth"] Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.245678 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-14b5-account-create-update-jtbph" event={"ID":"b905f7a7-368c-492c-b4ad-63bcc5cd9e0f","Type":"ContainerStarted","Data":"4c2040c6b244deec2658e95f8f85e90e0344382d6b1eae43640b8938f1c5eab8"} Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.245730 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-14b5-account-create-update-jtbph" event={"ID":"b905f7a7-368c-492c-b4ad-63bcc5cd9e0f","Type":"ContainerStarted","Data":"074c6050f0c1280369d3c5a7f327ed9483041f24cb8d149a905741071346a806"} Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.250740 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-bj96s" podStartSLOduration=2.250721901 podStartE2EDuration="2.250721901s" podCreationTimestamp="2026-02-17 16:23:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:23:56.211445642 +0000 UTC m=+1246.505834233" watchObservedRunningTime="2026-02-17 16:23:56.250721901 +0000 UTC m=+1246.545110462" Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.292583 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7fda3013-2526-48c1-ba34-9e8d1bb33e9f","Type":"ContainerStarted","Data":"ab4ebdfedb74c4bacdf9ee89c8d178d481974f11886b8b53f32aed4517d586a0"} Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.292626 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7fda3013-2526-48c1-ba34-9e8d1bb33e9f","Type":"ContainerStarted","Data":"c533fb8a906f57935d90bc6141350542f516b55633466185e6e1224aacf52b72"} Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.295980 4874 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/heat-e1c7-account-create-update-cfrvb" event={"ID":"3a9b479f-3960-4878-a2a9-48ac751b4149","Type":"ContainerStarted","Data":"f1511a4ba781ec257297c2fbeea1dd97e16fe146b82d2faff3032f5d95b52404"} Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.296025 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-e1c7-account-create-update-cfrvb" event={"ID":"3a9b479f-3960-4878-a2a9-48ac751b4149","Type":"ContainerStarted","Data":"182bdf77dd90aaf6b2e2cc3a64cd8d839f1c24dc2570606de8840057dfac34a1"} Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.299312 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-r6kzp" podStartSLOduration=2.299291608 podStartE2EDuration="2.299291608s" podCreationTimestamp="2026-02-17 16:23:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:23:56.239139108 +0000 UTC m=+1246.533527679" watchObservedRunningTime="2026-02-17 16:23:56.299291608 +0000 UTC m=+1246.593680179" Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.302767 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-7btxx" event={"ID":"41f01982-4445-4662-998f-bc618d020727","Type":"ContainerStarted","Data":"5885e17180a576baf20733c2eb197aa8115e6848faff1ab695e614b88ac2e8af"} Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.317816 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-56cvq" event={"ID":"c3aea93a-b865-4e18-bb2e-b2dc7d6f821a","Type":"ContainerStarted","Data":"17b43308311481772c20c61e83a2736d87ea46cfacdaa18ebbf58e0b5a23218e"} Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.317873 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-56cvq" 
event={"ID":"c3aea93a-b865-4e18-bb2e-b2dc7d6f821a","Type":"ContainerStarted","Data":"ab0373bb4c03139a5fc5abce2e5f7201a5a3cd44395cbfeb7dcb8e866497c3c1"} Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.334534 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-14b5-account-create-update-jtbph" podStartSLOduration=2.334518198 podStartE2EDuration="2.334518198s" podCreationTimestamp="2026-02-17 16:23:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:23:56.277791112 +0000 UTC m=+1246.572179673" watchObservedRunningTime="2026-02-17 16:23:56.334518198 +0000 UTC m=+1246.628906759" Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.389334 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-56cvq" podStartSLOduration=2.389313066 podStartE2EDuration="2.389313066s" podCreationTimestamp="2026-02-17 16:23:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:23:56.333621706 +0000 UTC m=+1246.628010267" watchObservedRunningTime="2026-02-17 16:23:56.389313066 +0000 UTC m=+1246.683701627" Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.481526 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.515740 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-gqtsc"] Feb 17 16:23:56 crc kubenswrapper[4874]: E0217 16:23:56.516492 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6c4fb02-268b-4640-9a46-1f107a1fcc28" containerName="glance-db-sync" Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.516508 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6c4fb02-268b-4640-9a46-1f107a1fcc28" 
containerName="glance-db-sync" Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.516714 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6c4fb02-268b-4640-9a46-1f107a1fcc28" containerName="glance-db-sync" Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.517783 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-gqtsc" Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.536523 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-gqtsc"] Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.702529 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07225064-be24-4f87-b130-bfdf2d08c472-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-gqtsc\" (UID: \"07225064-be24-4f87-b130-bfdf2d08c472\") " pod="openstack/dnsmasq-dns-74dc88fc-gqtsc" Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.703676 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07225064-be24-4f87-b130-bfdf2d08c472-config\") pod \"dnsmasq-dns-74dc88fc-gqtsc\" (UID: \"07225064-be24-4f87-b130-bfdf2d08c472\") " pod="openstack/dnsmasq-dns-74dc88fc-gqtsc" Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.703886 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6nlj\" (UniqueName: \"kubernetes.io/projected/07225064-be24-4f87-b130-bfdf2d08c472-kube-api-access-t6nlj\") pod \"dnsmasq-dns-74dc88fc-gqtsc\" (UID: \"07225064-be24-4f87-b130-bfdf2d08c472\") " pod="openstack/dnsmasq-dns-74dc88fc-gqtsc" Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.703968 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/07225064-be24-4f87-b130-bfdf2d08c472-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-gqtsc\" (UID: \"07225064-be24-4f87-b130-bfdf2d08c472\") " pod="openstack/dnsmasq-dns-74dc88fc-gqtsc" Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.704054 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07225064-be24-4f87-b130-bfdf2d08c472-dns-svc\") pod \"dnsmasq-dns-74dc88fc-gqtsc\" (UID: \"07225064-be24-4f87-b130-bfdf2d08c472\") " pod="openstack/dnsmasq-dns-74dc88fc-gqtsc" Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.806772 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07225064-be24-4f87-b130-bfdf2d08c472-config\") pod \"dnsmasq-dns-74dc88fc-gqtsc\" (UID: \"07225064-be24-4f87-b130-bfdf2d08c472\") " pod="openstack/dnsmasq-dns-74dc88fc-gqtsc" Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.809600 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6nlj\" (UniqueName: \"kubernetes.io/projected/07225064-be24-4f87-b130-bfdf2d08c472-kube-api-access-t6nlj\") pod \"dnsmasq-dns-74dc88fc-gqtsc\" (UID: \"07225064-be24-4f87-b130-bfdf2d08c472\") " pod="openstack/dnsmasq-dns-74dc88fc-gqtsc" Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.809756 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07225064-be24-4f87-b130-bfdf2d08c472-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-gqtsc\" (UID: \"07225064-be24-4f87-b130-bfdf2d08c472\") " pod="openstack/dnsmasq-dns-74dc88fc-gqtsc" Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.809871 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07225064-be24-4f87-b130-bfdf2d08c472-dns-svc\") pod 
\"dnsmasq-dns-74dc88fc-gqtsc\" (UID: \"07225064-be24-4f87-b130-bfdf2d08c472\") " pod="openstack/dnsmasq-dns-74dc88fc-gqtsc" Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.810062 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07225064-be24-4f87-b130-bfdf2d08c472-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-gqtsc\" (UID: \"07225064-be24-4f87-b130-bfdf2d08c472\") " pod="openstack/dnsmasq-dns-74dc88fc-gqtsc" Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.811257 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07225064-be24-4f87-b130-bfdf2d08c472-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-gqtsc\" (UID: \"07225064-be24-4f87-b130-bfdf2d08c472\") " pod="openstack/dnsmasq-dns-74dc88fc-gqtsc" Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.811352 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07225064-be24-4f87-b130-bfdf2d08c472-dns-svc\") pod \"dnsmasq-dns-74dc88fc-gqtsc\" (UID: \"07225064-be24-4f87-b130-bfdf2d08c472\") " pod="openstack/dnsmasq-dns-74dc88fc-gqtsc" Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.811511 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07225064-be24-4f87-b130-bfdf2d08c472-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-gqtsc\" (UID: \"07225064-be24-4f87-b130-bfdf2d08c472\") " pod="openstack/dnsmasq-dns-74dc88fc-gqtsc" Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.811776 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07225064-be24-4f87-b130-bfdf2d08c472-config\") pod \"dnsmasq-dns-74dc88fc-gqtsc\" (UID: \"07225064-be24-4f87-b130-bfdf2d08c472\") " pod="openstack/dnsmasq-dns-74dc88fc-gqtsc" Feb 17 
16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.835341 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6nlj\" (UniqueName: \"kubernetes.io/projected/07225064-be24-4f87-b130-bfdf2d08c472-kube-api-access-t6nlj\") pod \"dnsmasq-dns-74dc88fc-gqtsc\" (UID: \"07225064-be24-4f87-b130-bfdf2d08c472\") " pod="openstack/dnsmasq-dns-74dc88fc-gqtsc" Feb 17 16:23:56 crc kubenswrapper[4874]: I0217 16:23:56.984502 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-gqtsc" Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.376822 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7fda3013-2526-48c1-ba34-9e8d1bb33e9f","Type":"ContainerStarted","Data":"93aec2ebcd7a6eed25111a7bd94446e5dbe902d7069caada293d767b55761ca7"} Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.377130 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7fda3013-2526-48c1-ba34-9e8d1bb33e9f","Type":"ContainerStarted","Data":"cf6cb3afc78dc2e41d1738ff309da3671a8222f2cbd8122d0a2dc43f9c91d49b"} Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.387787 4874 generic.go:334] "Generic (PLEG): container finished" podID="93678eb9-19c1-490b-aa7a-d07e21f6ab56" containerID="61541c158ef7d75ca0933a1011194896c8d0eed21970ba7c1b398fff00006066" exitCode=0 Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.387863 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b55-account-create-update-cs68x" event={"ID":"93678eb9-19c1-490b-aa7a-d07e21f6ab56","Type":"ContainerDied","Data":"61541c158ef7d75ca0933a1011194896c8d0eed21970ba7c1b398fff00006066"} Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.387914 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b55-account-create-update-cs68x" 
event={"ID":"93678eb9-19c1-490b-aa7a-d07e21f6ab56","Type":"ContainerStarted","Data":"e841eb483f84b926760e8bdd447552eff5de2be0d342ee9f6324028bad3cd778"} Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.393507 4874 generic.go:334] "Generic (PLEG): container finished" podID="c3aea93a-b865-4e18-bb2e-b2dc7d6f821a" containerID="17b43308311481772c20c61e83a2736d87ea46cfacdaa18ebbf58e0b5a23218e" exitCode=0 Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.393588 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-56cvq" event={"ID":"c3aea93a-b865-4e18-bb2e-b2dc7d6f821a","Type":"ContainerDied","Data":"17b43308311481772c20c61e83a2736d87ea46cfacdaa18ebbf58e0b5a23218e"} Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.397457 4874 generic.go:334] "Generic (PLEG): container finished" podID="34c21838-f8c0-4d47-8ccf-a92ff6452532" containerID="2282a4a9f89d1222ee90414d0b671bff62bbab255316d343665fdf3fb2a6a534" exitCode=0 Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.397504 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bj96s" event={"ID":"34c21838-f8c0-4d47-8ccf-a92ff6452532","Type":"ContainerDied","Data":"2282a4a9f89d1222ee90414d0b671bff62bbab255316d343665fdf3fb2a6a534"} Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.404753 4874 generic.go:334] "Generic (PLEG): container finished" podID="29f331e0-01bd-4693-a5fd-46739a5ddec4" containerID="b0a0156a365410cd63c6f6ea16b8379646bc3f033b4071892f494b483aa91561" exitCode=0 Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.404819 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d557-account-create-update-jvmth" event={"ID":"29f331e0-01bd-4693-a5fd-46739a5ddec4","Type":"ContainerDied","Data":"b0a0156a365410cd63c6f6ea16b8379646bc3f033b4071892f494b483aa91561"} Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.404846 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-d557-account-create-update-jvmth" event={"ID":"29f331e0-01bd-4693-a5fd-46739a5ddec4","Type":"ContainerStarted","Data":"4a3847493b830b9c17dd2b537c4a882e8af7e1ac55ef70c055aee365b2f34511"} Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.406527 4874 generic.go:334] "Generic (PLEG): container finished" podID="3a9b479f-3960-4878-a2a9-48ac751b4149" containerID="f1511a4ba781ec257297c2fbeea1dd97e16fe146b82d2faff3032f5d95b52404" exitCode=0 Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.406565 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-e1c7-account-create-update-cfrvb" event={"ID":"3a9b479f-3960-4878-a2a9-48ac751b4149","Type":"ContainerDied","Data":"f1511a4ba781ec257297c2fbeea1dd97e16fe146b82d2faff3032f5d95b52404"} Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.412971 4874 generic.go:334] "Generic (PLEG): container finished" podID="fb3d3d3a-23a3-420e-9651-edf451bc3606" containerID="691c025d1656ce48567e0847b1656a4d20a447d9cec5982f62204414ebe636e0" exitCode=0 Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.413015 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-r6kzp" event={"ID":"fb3d3d3a-23a3-420e-9651-edf451bc3606","Type":"ContainerDied","Data":"691c025d1656ce48567e0847b1656a4d20a447d9cec5982f62204414ebe636e0"} Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.414587 4874 generic.go:334] "Generic (PLEG): container finished" podID="b905f7a7-368c-492c-b4ad-63bcc5cd9e0f" containerID="4c2040c6b244deec2658e95f8f85e90e0344382d6b1eae43640b8938f1c5eab8" exitCode=0 Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.414739 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-14b5-account-create-update-jtbph" event={"ID":"b905f7a7-368c-492c-b4ad-63bcc5cd9e0f","Type":"ContainerDied","Data":"4c2040c6b244deec2658e95f8f85e90e0344382d6b1eae43640b8938f1c5eab8"} Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.437874 4874 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=41.685161312 podStartE2EDuration="48.437852587s" podCreationTimestamp="2026-02-17 16:23:09 +0000 UTC" firstStartedPulling="2026-02-17 16:23:46.391611692 +0000 UTC m=+1236.686000253" lastFinishedPulling="2026-02-17 16:23:53.144302967 +0000 UTC m=+1243.438691528" observedRunningTime="2026-02-17 16:23:57.423395724 +0000 UTC m=+1247.717784305" watchObservedRunningTime="2026-02-17 16:23:57.437852587 +0000 UTC m=+1247.732241148" Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.587246 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-gqtsc"] Feb 17 16:23:57 crc kubenswrapper[4874]: W0217 16:23:57.592365 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07225064_be24_4f87_b130_bfdf2d08c472.slice/crio-5e3a33b190558532150a6cc31a3f80f401185fe2b1984d9134d24c8a9acad262 WatchSource:0}: Error finding container 5e3a33b190558532150a6cc31a3f80f401185fe2b1984d9134d24c8a9acad262: Status 404 returned error can't find the container with id 5e3a33b190558532150a6cc31a3f80f401185fe2b1984d9134d24c8a9acad262 Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.705374 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-gqtsc"] Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.728170 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.728223 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.728266 4874 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.728962 4874 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e095e9b56aac8ea173cabbfa2b9b7d5f89bdf527eea23b78b0ba4ca194b5eb6c"} pod="openshift-machine-config-operator/machine-config-daemon-cccdg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.729014 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" containerID="cri-o://e095e9b56aac8ea173cabbfa2b9b7d5f89bdf527eea23b78b0ba4ca194b5eb6c" gracePeriod=600 Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.738240 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-z9lkg"] Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.740096 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-z9lkg" Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.747484 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.782883 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-z9lkg"] Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.842437 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8acd227-75af-40c7-ab98-b1e5b4bcab38-config\") pod \"dnsmasq-dns-5f59b8f679-z9lkg\" (UID: \"a8acd227-75af-40c7-ab98-b1e5b4bcab38\") " pod="openstack/dnsmasq-dns-5f59b8f679-z9lkg" Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.842482 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpd24\" (UniqueName: \"kubernetes.io/projected/a8acd227-75af-40c7-ab98-b1e5b4bcab38-kube-api-access-zpd24\") pod \"dnsmasq-dns-5f59b8f679-z9lkg\" (UID: \"a8acd227-75af-40c7-ab98-b1e5b4bcab38\") " pod="openstack/dnsmasq-dns-5f59b8f679-z9lkg" Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.842516 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a8acd227-75af-40c7-ab98-b1e5b4bcab38-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-z9lkg\" (UID: \"a8acd227-75af-40c7-ab98-b1e5b4bcab38\") " pod="openstack/dnsmasq-dns-5f59b8f679-z9lkg" Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.842542 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8acd227-75af-40c7-ab98-b1e5b4bcab38-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-z9lkg\" (UID: \"a8acd227-75af-40c7-ab98-b1e5b4bcab38\") " 
pod="openstack/dnsmasq-dns-5f59b8f679-z9lkg" Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.842568 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a8acd227-75af-40c7-ab98-b1e5b4bcab38-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-z9lkg\" (UID: \"a8acd227-75af-40c7-ab98-b1e5b4bcab38\") " pod="openstack/dnsmasq-dns-5f59b8f679-z9lkg" Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.842600 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a8acd227-75af-40c7-ab98-b1e5b4bcab38-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-z9lkg\" (UID: \"a8acd227-75af-40c7-ab98-b1e5b4bcab38\") " pod="openstack/dnsmasq-dns-5f59b8f679-z9lkg" Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.948984 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8acd227-75af-40c7-ab98-b1e5b4bcab38-config\") pod \"dnsmasq-dns-5f59b8f679-z9lkg\" (UID: \"a8acd227-75af-40c7-ab98-b1e5b4bcab38\") " pod="openstack/dnsmasq-dns-5f59b8f679-z9lkg" Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.949049 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpd24\" (UniqueName: \"kubernetes.io/projected/a8acd227-75af-40c7-ab98-b1e5b4bcab38-kube-api-access-zpd24\") pod \"dnsmasq-dns-5f59b8f679-z9lkg\" (UID: \"a8acd227-75af-40c7-ab98-b1e5b4bcab38\") " pod="openstack/dnsmasq-dns-5f59b8f679-z9lkg" Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.949108 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a8acd227-75af-40c7-ab98-b1e5b4bcab38-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-z9lkg\" (UID: \"a8acd227-75af-40c7-ab98-b1e5b4bcab38\") " 
pod="openstack/dnsmasq-dns-5f59b8f679-z9lkg" Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.949147 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8acd227-75af-40c7-ab98-b1e5b4bcab38-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-z9lkg\" (UID: \"a8acd227-75af-40c7-ab98-b1e5b4bcab38\") " pod="openstack/dnsmasq-dns-5f59b8f679-z9lkg" Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.949194 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a8acd227-75af-40c7-ab98-b1e5b4bcab38-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-z9lkg\" (UID: \"a8acd227-75af-40c7-ab98-b1e5b4bcab38\") " pod="openstack/dnsmasq-dns-5f59b8f679-z9lkg" Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.949239 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a8acd227-75af-40c7-ab98-b1e5b4bcab38-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-z9lkg\" (UID: \"a8acd227-75af-40c7-ab98-b1e5b4bcab38\") " pod="openstack/dnsmasq-dns-5f59b8f679-z9lkg" Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.951254 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8acd227-75af-40c7-ab98-b1e5b4bcab38-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-z9lkg\" (UID: \"a8acd227-75af-40c7-ab98-b1e5b4bcab38\") " pod="openstack/dnsmasq-dns-5f59b8f679-z9lkg" Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.951371 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a8acd227-75af-40c7-ab98-b1e5b4bcab38-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-z9lkg\" (UID: \"a8acd227-75af-40c7-ab98-b1e5b4bcab38\") " pod="openstack/dnsmasq-dns-5f59b8f679-z9lkg" Feb 17 16:23:57 crc kubenswrapper[4874]: 
I0217 16:23:57.952147 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8acd227-75af-40c7-ab98-b1e5b4bcab38-config\") pod \"dnsmasq-dns-5f59b8f679-z9lkg\" (UID: \"a8acd227-75af-40c7-ab98-b1e5b4bcab38\") " pod="openstack/dnsmasq-dns-5f59b8f679-z9lkg" Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.954584 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a8acd227-75af-40c7-ab98-b1e5b4bcab38-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-z9lkg\" (UID: \"a8acd227-75af-40c7-ab98-b1e5b4bcab38\") " pod="openstack/dnsmasq-dns-5f59b8f679-z9lkg" Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.965366 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-e1c7-account-create-update-cfrvb" Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.965653 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a8acd227-75af-40c7-ab98-b1e5b4bcab38-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-z9lkg\" (UID: \"a8acd227-75af-40c7-ab98-b1e5b4bcab38\") " pod="openstack/dnsmasq-dns-5f59b8f679-z9lkg" Feb 17 16:23:57 crc kubenswrapper[4874]: I0217 16:23:57.971938 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpd24\" (UniqueName: \"kubernetes.io/projected/a8acd227-75af-40c7-ab98-b1e5b4bcab38-kube-api-access-zpd24\") pod \"dnsmasq-dns-5f59b8f679-z9lkg\" (UID: \"a8acd227-75af-40c7-ab98-b1e5b4bcab38\") " pod="openstack/dnsmasq-dns-5f59b8f679-z9lkg" Feb 17 16:23:58 crc kubenswrapper[4874]: I0217 16:23:58.046000 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-fh6cg" Feb 17 16:23:58 crc kubenswrapper[4874]: I0217 16:23:58.056910 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a9b479f-3960-4878-a2a9-48ac751b4149-operator-scripts\") pod \"3a9b479f-3960-4878-a2a9-48ac751b4149\" (UID: \"3a9b479f-3960-4878-a2a9-48ac751b4149\") " Feb 17 16:23:58 crc kubenswrapper[4874]: I0217 16:23:58.057063 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2swvn\" (UniqueName: \"kubernetes.io/projected/3a9b479f-3960-4878-a2a9-48ac751b4149-kube-api-access-2swvn\") pod \"3a9b479f-3960-4878-a2a9-48ac751b4149\" (UID: \"3a9b479f-3960-4878-a2a9-48ac751b4149\") " Feb 17 16:23:58 crc kubenswrapper[4874]: I0217 16:23:58.060743 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a9b479f-3960-4878-a2a9-48ac751b4149-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3a9b479f-3960-4878-a2a9-48ac751b4149" (UID: "3a9b479f-3960-4878-a2a9-48ac751b4149"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:58 crc kubenswrapper[4874]: I0217 16:23:58.064718 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a9b479f-3960-4878-a2a9-48ac751b4149-kube-api-access-2swvn" (OuterVolumeSpecName: "kube-api-access-2swvn") pod "3a9b479f-3960-4878-a2a9-48ac751b4149" (UID: "3a9b479f-3960-4878-a2a9-48ac751b4149"). InnerVolumeSpecName "kube-api-access-2swvn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:58 crc kubenswrapper[4874]: I0217 16:23:58.158885 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e865ad98-6d8f-4a54-9717-10028d7c52d1-operator-scripts\") pod \"e865ad98-6d8f-4a54-9717-10028d7c52d1\" (UID: \"e865ad98-6d8f-4a54-9717-10028d7c52d1\") " Feb 17 16:23:58 crc kubenswrapper[4874]: I0217 16:23:58.158995 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhplh\" (UniqueName: \"kubernetes.io/projected/e865ad98-6d8f-4a54-9717-10028d7c52d1-kube-api-access-zhplh\") pod \"e865ad98-6d8f-4a54-9717-10028d7c52d1\" (UID: \"e865ad98-6d8f-4a54-9717-10028d7c52d1\") " Feb 17 16:23:58 crc kubenswrapper[4874]: I0217 16:23:58.159521 4874 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a9b479f-3960-4878-a2a9-48ac751b4149-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:58 crc kubenswrapper[4874]: I0217 16:23:58.159540 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2swvn\" (UniqueName: \"kubernetes.io/projected/3a9b479f-3960-4878-a2a9-48ac751b4149-kube-api-access-2swvn\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:58 crc kubenswrapper[4874]: I0217 16:23:58.159772 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e865ad98-6d8f-4a54-9717-10028d7c52d1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e865ad98-6d8f-4a54-9717-10028d7c52d1" (UID: "e865ad98-6d8f-4a54-9717-10028d7c52d1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:58 crc kubenswrapper[4874]: I0217 16:23:58.163339 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e865ad98-6d8f-4a54-9717-10028d7c52d1-kube-api-access-zhplh" (OuterVolumeSpecName: "kube-api-access-zhplh") pod "e865ad98-6d8f-4a54-9717-10028d7c52d1" (UID: "e865ad98-6d8f-4a54-9717-10028d7c52d1"). InnerVolumeSpecName "kube-api-access-zhplh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:58 crc kubenswrapper[4874]: I0217 16:23:58.261297 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-z9lkg" Feb 17 16:23:58 crc kubenswrapper[4874]: I0217 16:23:58.262221 4874 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e865ad98-6d8f-4a54-9717-10028d7c52d1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:58 crc kubenswrapper[4874]: I0217 16:23:58.262278 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhplh\" (UniqueName: \"kubernetes.io/projected/e865ad98-6d8f-4a54-9717-10028d7c52d1-kube-api-access-zhplh\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:58 crc kubenswrapper[4874]: I0217 16:23:58.428368 4874 generic.go:334] "Generic (PLEG): container finished" podID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerID="e095e9b56aac8ea173cabbfa2b9b7d5f89bdf527eea23b78b0ba4ca194b5eb6c" exitCode=0 Feb 17 16:23:58 crc kubenswrapper[4874]: I0217 16:23:58.428525 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerDied","Data":"e095e9b56aac8ea173cabbfa2b9b7d5f89bdf527eea23b78b0ba4ca194b5eb6c"} Feb 17 16:23:58 crc kubenswrapper[4874]: I0217 16:23:58.428712 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerStarted","Data":"1b677238bae66091e799ac761dff49995ef4eec3d7982bfb5c634aa828596a1e"} Feb 17 16:23:58 crc kubenswrapper[4874]: I0217 16:23:58.428755 4874 scope.go:117] "RemoveContainer" containerID="14d9fa4df39b7f49ac05cd101e3f5c3bf6c474d638afdc060d235e7fd1103377" Feb 17 16:23:58 crc kubenswrapper[4874]: I0217 16:23:58.433249 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fh6cg" Feb 17 16:23:58 crc kubenswrapper[4874]: I0217 16:23:58.434531 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fh6cg" event={"ID":"e865ad98-6d8f-4a54-9717-10028d7c52d1","Type":"ContainerDied","Data":"0e1988e927394772cbda98a3aab5b9f2151c415f4e74d00ea22fd6782ff7d4cf"} Feb 17 16:23:58 crc kubenswrapper[4874]: I0217 16:23:58.434564 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e1988e927394772cbda98a3aab5b9f2151c415f4e74d00ea22fd6782ff7d4cf" Feb 17 16:23:58 crc kubenswrapper[4874]: I0217 16:23:58.437709 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-e1c7-account-create-update-cfrvb" event={"ID":"3a9b479f-3960-4878-a2a9-48ac751b4149","Type":"ContainerDied","Data":"182bdf77dd90aaf6b2e2cc3a64cd8d839f1c24dc2570606de8840057dfac34a1"} Feb 17 16:23:58 crc kubenswrapper[4874]: I0217 16:23:58.437729 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="182bdf77dd90aaf6b2e2cc3a64cd8d839f1c24dc2570606de8840057dfac34a1" Feb 17 16:23:58 crc kubenswrapper[4874]: I0217 16:23:58.437785 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-e1c7-account-create-update-cfrvb" Feb 17 16:23:58 crc kubenswrapper[4874]: I0217 16:23:58.454561 4874 generic.go:334] "Generic (PLEG): container finished" podID="07225064-be24-4f87-b130-bfdf2d08c472" containerID="877888e954dc1666805b3683b2fac2b3712ef3f0452e4dcb6264dcf9a3a241ca" exitCode=0 Feb 17 16:23:58 crc kubenswrapper[4874]: I0217 16:23:58.454885 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-gqtsc" event={"ID":"07225064-be24-4f87-b130-bfdf2d08c472","Type":"ContainerDied","Data":"877888e954dc1666805b3683b2fac2b3712ef3f0452e4dcb6264dcf9a3a241ca"} Feb 17 16:23:58 crc kubenswrapper[4874]: I0217 16:23:58.454939 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-gqtsc" event={"ID":"07225064-be24-4f87-b130-bfdf2d08c472","Type":"ContainerStarted","Data":"5e3a33b190558532150a6cc31a3f80f401185fe2b1984d9134d24c8a9acad262"} Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.119104 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-bj96s" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.149051 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-gqtsc" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.184933 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8cxl\" (UniqueName: \"kubernetes.io/projected/34c21838-f8c0-4d47-8ccf-a92ff6452532-kube-api-access-w8cxl\") pod \"34c21838-f8c0-4d47-8ccf-a92ff6452532\" (UID: \"34c21838-f8c0-4d47-8ccf-a92ff6452532\") " Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.185272 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34c21838-f8c0-4d47-8ccf-a92ff6452532-operator-scripts\") pod \"34c21838-f8c0-4d47-8ccf-a92ff6452532\" (UID: \"34c21838-f8c0-4d47-8ccf-a92ff6452532\") " Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.187159 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34c21838-f8c0-4d47-8ccf-a92ff6452532-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "34c21838-f8c0-4d47-8ccf-a92ff6452532" (UID: "34c21838-f8c0-4d47-8ccf-a92ff6452532"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.192972 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34c21838-f8c0-4d47-8ccf-a92ff6452532-kube-api-access-w8cxl" (OuterVolumeSpecName: "kube-api-access-w8cxl") pod "34c21838-f8c0-4d47-8ccf-a92ff6452532" (UID: "34c21838-f8c0-4d47-8ccf-a92ff6452532"). InnerVolumeSpecName "kube-api-access-w8cxl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.290786 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6nlj\" (UniqueName: \"kubernetes.io/projected/07225064-be24-4f87-b130-bfdf2d08c472-kube-api-access-t6nlj\") pod \"07225064-be24-4f87-b130-bfdf2d08c472\" (UID: \"07225064-be24-4f87-b130-bfdf2d08c472\") " Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.290841 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07225064-be24-4f87-b130-bfdf2d08c472-dns-svc\") pod \"07225064-be24-4f87-b130-bfdf2d08c472\" (UID: \"07225064-be24-4f87-b130-bfdf2d08c472\") " Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.290882 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07225064-be24-4f87-b130-bfdf2d08c472-config\") pod \"07225064-be24-4f87-b130-bfdf2d08c472\" (UID: \"07225064-be24-4f87-b130-bfdf2d08c472\") " Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.290968 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07225064-be24-4f87-b130-bfdf2d08c472-ovsdbserver-nb\") pod \"07225064-be24-4f87-b130-bfdf2d08c472\" (UID: \"07225064-be24-4f87-b130-bfdf2d08c472\") " Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.291031 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07225064-be24-4f87-b130-bfdf2d08c472-ovsdbserver-sb\") pod \"07225064-be24-4f87-b130-bfdf2d08c472\" (UID: \"07225064-be24-4f87-b130-bfdf2d08c472\") " Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.291516 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8cxl\" (UniqueName: 
\"kubernetes.io/projected/34c21838-f8c0-4d47-8ccf-a92ff6452532-kube-api-access-w8cxl\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.291528 4874 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34c21838-f8c0-4d47-8ccf-a92ff6452532-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.318536 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07225064-be24-4f87-b130-bfdf2d08c472-kube-api-access-t6nlj" (OuterVolumeSpecName: "kube-api-access-t6nlj") pod "07225064-be24-4f87-b130-bfdf2d08c472" (UID: "07225064-be24-4f87-b130-bfdf2d08c472"). InnerVolumeSpecName "kube-api-access-t6nlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.363920 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-r6kzp" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.379588 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-56cvq" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.396219 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6nlj\" (UniqueName: \"kubernetes.io/projected/07225064-be24-4f87-b130-bfdf2d08c472-kube-api-access-t6nlj\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.404478 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07225064-be24-4f87-b130-bfdf2d08c472-config" (OuterVolumeSpecName: "config") pod "07225064-be24-4f87-b130-bfdf2d08c472" (UID: "07225064-be24-4f87-b130-bfdf2d08c472"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.412825 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07225064-be24-4f87-b130-bfdf2d08c472-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "07225064-be24-4f87-b130-bfdf2d08c472" (UID: "07225064-be24-4f87-b130-bfdf2d08c472"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.424581 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07225064-be24-4f87-b130-bfdf2d08c472-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "07225064-be24-4f87-b130-bfdf2d08c472" (UID: "07225064-be24-4f87-b130-bfdf2d08c472"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.431341 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07225064-be24-4f87-b130-bfdf2d08c472-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "07225064-be24-4f87-b130-bfdf2d08c472" (UID: "07225064-be24-4f87-b130-bfdf2d08c472"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.481863 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7b55-account-create-update-cs68x" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.491433 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-56cvq" event={"ID":"c3aea93a-b865-4e18-bb2e-b2dc7d6f821a","Type":"ContainerDied","Data":"ab0373bb4c03139a5fc5abce2e5f7201a5a3cd44395cbfeb7dcb8e866497c3c1"} Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.491464 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab0373bb4c03139a5fc5abce2e5f7201a5a3cd44395cbfeb7dcb8e866497c3c1" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.491525 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-56cvq" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.494383 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-14b5-account-create-update-jtbph" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.496513 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bj96s" event={"ID":"34c21838-f8c0-4d47-8ccf-a92ff6452532","Type":"ContainerDied","Data":"5665e92d0975554f58301e0e9dc63c715ba30936b01700a0105e9fa38cfe9871"} Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.496545 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5665e92d0975554f58301e0e9dc63c715ba30936b01700a0105e9fa38cfe9871" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.496594 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-bj96s" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.500258 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xhbg\" (UniqueName: \"kubernetes.io/projected/c3aea93a-b865-4e18-bb2e-b2dc7d6f821a-kube-api-access-8xhbg\") pod \"c3aea93a-b865-4e18-bb2e-b2dc7d6f821a\" (UID: \"c3aea93a-b865-4e18-bb2e-b2dc7d6f821a\") " Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.500527 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3aea93a-b865-4e18-bb2e-b2dc7d6f821a-operator-scripts\") pod \"c3aea93a-b865-4e18-bb2e-b2dc7d6f821a\" (UID: \"c3aea93a-b865-4e18-bb2e-b2dc7d6f821a\") " Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.500565 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb3d3d3a-23a3-420e-9651-edf451bc3606-operator-scripts\") pod \"fb3d3d3a-23a3-420e-9651-edf451bc3606\" (UID: \"fb3d3d3a-23a3-420e-9651-edf451bc3606\") " Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.500627 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9bl9\" (UniqueName: \"kubernetes.io/projected/fb3d3d3a-23a3-420e-9651-edf451bc3606-kube-api-access-q9bl9\") pod \"fb3d3d3a-23a3-420e-9651-edf451bc3606\" (UID: \"fb3d3d3a-23a3-420e-9651-edf451bc3606\") " Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.501097 4874 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07225064-be24-4f87-b130-bfdf2d08c472-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.501111 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07225064-be24-4f87-b130-bfdf2d08c472-config\") on node 
\"crc\" DevicePath \"\"" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.501121 4874 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07225064-be24-4f87-b130-bfdf2d08c472-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.501132 4874 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07225064-be24-4f87-b130-bfdf2d08c472-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.501383 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3aea93a-b865-4e18-bb2e-b2dc7d6f821a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c3aea93a-b865-4e18-bb2e-b2dc7d6f821a" (UID: "c3aea93a-b865-4e18-bb2e-b2dc7d6f821a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.501580 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb3d3d3a-23a3-420e-9651-edf451bc3606-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fb3d3d3a-23a3-420e-9651-edf451bc3606" (UID: "fb3d3d3a-23a3-420e-9651-edf451bc3606"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.506096 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3aea93a-b865-4e18-bb2e-b2dc7d6f821a-kube-api-access-8xhbg" (OuterVolumeSpecName: "kube-api-access-8xhbg") pod "c3aea93a-b865-4e18-bb2e-b2dc7d6f821a" (UID: "c3aea93a-b865-4e18-bb2e-b2dc7d6f821a"). InnerVolumeSpecName "kube-api-access-8xhbg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.507950 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb3d3d3a-23a3-420e-9651-edf451bc3606-kube-api-access-q9bl9" (OuterVolumeSpecName: "kube-api-access-q9bl9") pod "fb3d3d3a-23a3-420e-9651-edf451bc3606" (UID: "fb3d3d3a-23a3-420e-9651-edf451bc3606"). InnerVolumeSpecName "kube-api-access-q9bl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.509609 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-r6kzp" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.509619 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-r6kzp" event={"ID":"fb3d3d3a-23a3-420e-9651-edf451bc3606","Type":"ContainerDied","Data":"090dd0194bdf3cc60e3a3e40f4884c8838b7de6b259e4b60227fc9e2b8931f79"} Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.509809 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="090dd0194bdf3cc60e3a3e40f4884c8838b7de6b259e4b60227fc9e2b8931f79" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.510739 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d557-account-create-update-jvmth" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.515779 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-gqtsc" event={"ID":"07225064-be24-4f87-b130-bfdf2d08c472","Type":"ContainerDied","Data":"5e3a33b190558532150a6cc31a3f80f401185fe2b1984d9134d24c8a9acad262"} Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.515819 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-gqtsc" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.515831 4874 scope.go:117] "RemoveContainer" containerID="877888e954dc1666805b3683b2fac2b3712ef3f0452e4dcb6264dcf9a3a241ca" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.524046 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-14b5-account-create-update-jtbph" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.525408 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-14b5-account-create-update-jtbph" event={"ID":"b905f7a7-368c-492c-b4ad-63bcc5cd9e0f","Type":"ContainerDied","Data":"074c6050f0c1280369d3c5a7f327ed9483041f24cb8d149a905741071346a806"} Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.531085 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="074c6050f0c1280369d3c5a7f327ed9483041f24cb8d149a905741071346a806" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.531721 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b55-account-create-update-cs68x" event={"ID":"93678eb9-19c1-490b-aa7a-d07e21f6ab56","Type":"ContainerDied","Data":"e841eb483f84b926760e8bdd447552eff5de2be0d342ee9f6324028bad3cd778"} Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.531755 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e841eb483f84b926760e8bdd447552eff5de2be0d342ee9f6324028bad3cd778" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.531805 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7b55-account-create-update-cs68x" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.603196 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93678eb9-19c1-490b-aa7a-d07e21f6ab56-operator-scripts\") pod \"93678eb9-19c1-490b-aa7a-d07e21f6ab56\" (UID: \"93678eb9-19c1-490b-aa7a-d07e21f6ab56\") " Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.603407 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88jvf\" (UniqueName: \"kubernetes.io/projected/29f331e0-01bd-4693-a5fd-46739a5ddec4-kube-api-access-88jvf\") pod \"29f331e0-01bd-4693-a5fd-46739a5ddec4\" (UID: \"29f331e0-01bd-4693-a5fd-46739a5ddec4\") " Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.603549 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b905f7a7-368c-492c-b4ad-63bcc5cd9e0f-operator-scripts\") pod \"b905f7a7-368c-492c-b4ad-63bcc5cd9e0f\" (UID: \"b905f7a7-368c-492c-b4ad-63bcc5cd9e0f\") " Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.603673 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jml4c\" (UniqueName: \"kubernetes.io/projected/93678eb9-19c1-490b-aa7a-d07e21f6ab56-kube-api-access-jml4c\") pod \"93678eb9-19c1-490b-aa7a-d07e21f6ab56\" (UID: \"93678eb9-19c1-490b-aa7a-d07e21f6ab56\") " Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.603742 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93678eb9-19c1-490b-aa7a-d07e21f6ab56-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "93678eb9-19c1-490b-aa7a-d07e21f6ab56" (UID: "93678eb9-19c1-490b-aa7a-d07e21f6ab56"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.603833 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njb69\" (UniqueName: \"kubernetes.io/projected/b905f7a7-368c-492c-b4ad-63bcc5cd9e0f-kube-api-access-njb69\") pod \"b905f7a7-368c-492c-b4ad-63bcc5cd9e0f\" (UID: \"b905f7a7-368c-492c-b4ad-63bcc5cd9e0f\") " Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.603958 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29f331e0-01bd-4693-a5fd-46739a5ddec4-operator-scripts\") pod \"29f331e0-01bd-4693-a5fd-46739a5ddec4\" (UID: \"29f331e0-01bd-4693-a5fd-46739a5ddec4\") " Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.604193 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b905f7a7-368c-492c-b4ad-63bcc5cd9e0f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b905f7a7-368c-492c-b4ad-63bcc5cd9e0f" (UID: "b905f7a7-368c-492c-b4ad-63bcc5cd9e0f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.606620 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29f331e0-01bd-4693-a5fd-46739a5ddec4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "29f331e0-01bd-4693-a5fd-46739a5ddec4" (UID: "29f331e0-01bd-4693-a5fd-46739a5ddec4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.609470 4874 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29f331e0-01bd-4693-a5fd-46739a5ddec4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.609501 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xhbg\" (UniqueName: \"kubernetes.io/projected/c3aea93a-b865-4e18-bb2e-b2dc7d6f821a-kube-api-access-8xhbg\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.609513 4874 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93678eb9-19c1-490b-aa7a-d07e21f6ab56-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.609524 4874 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b905f7a7-368c-492c-b4ad-63bcc5cd9e0f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.609533 4874 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3aea93a-b865-4e18-bb2e-b2dc7d6f821a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.609542 4874 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb3d3d3a-23a3-420e-9651-edf451bc3606-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.609551 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9bl9\" (UniqueName: \"kubernetes.io/projected/fb3d3d3a-23a3-420e-9651-edf451bc3606-kube-api-access-q9bl9\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:59 crc 
kubenswrapper[4874]: I0217 16:23:59.612148 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29f331e0-01bd-4693-a5fd-46739a5ddec4-kube-api-access-88jvf" (OuterVolumeSpecName: "kube-api-access-88jvf") pod "29f331e0-01bd-4693-a5fd-46739a5ddec4" (UID: "29f331e0-01bd-4693-a5fd-46739a5ddec4"). InnerVolumeSpecName "kube-api-access-88jvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.612588 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93678eb9-19c1-490b-aa7a-d07e21f6ab56-kube-api-access-jml4c" (OuterVolumeSpecName: "kube-api-access-jml4c") pod "93678eb9-19c1-490b-aa7a-d07e21f6ab56" (UID: "93678eb9-19c1-490b-aa7a-d07e21f6ab56"). InnerVolumeSpecName "kube-api-access-jml4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.616324 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b905f7a7-368c-492c-b4ad-63bcc5cd9e0f-kube-api-access-njb69" (OuterVolumeSpecName: "kube-api-access-njb69") pod "b905f7a7-368c-492c-b4ad-63bcc5cd9e0f" (UID: "b905f7a7-368c-492c-b4ad-63bcc5cd9e0f"). InnerVolumeSpecName "kube-api-access-njb69". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.681118 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-z9lkg"] Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.698066 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-gqtsc"] Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.713946 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-gqtsc"] Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.714016 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jml4c\" (UniqueName: \"kubernetes.io/projected/93678eb9-19c1-490b-aa7a-d07e21f6ab56-kube-api-access-jml4c\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.716591 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njb69\" (UniqueName: \"kubernetes.io/projected/b905f7a7-368c-492c-b4ad-63bcc5cd9e0f-kube-api-access-njb69\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:59 crc kubenswrapper[4874]: I0217 16:23:59.716607 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88jvf\" (UniqueName: \"kubernetes.io/projected/29f331e0-01bd-4693-a5fd-46739a5ddec4-kube-api-access-88jvf\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:00 crc kubenswrapper[4874]: I0217 16:24:00.472994 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07225064-be24-4f87-b130-bfdf2d08c472" path="/var/lib/kubelet/pods/07225064-be24-4f87-b130-bfdf2d08c472/volumes" Feb 17 16:24:00 crc kubenswrapper[4874]: I0217 16:24:00.554597 4874 generic.go:334] "Generic (PLEG): container finished" podID="a8acd227-75af-40c7-ab98-b1e5b4bcab38" containerID="b60ff46a8dc99b2af9dbb03f7c00addbc13ef5decd8fc0dbe2b9c0cd2bec5cd4" exitCode=0 Feb 17 16:24:00 crc kubenswrapper[4874]: I0217 16:24:00.554609 4874 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-z9lkg" event={"ID":"a8acd227-75af-40c7-ab98-b1e5b4bcab38","Type":"ContainerDied","Data":"b60ff46a8dc99b2af9dbb03f7c00addbc13ef5decd8fc0dbe2b9c0cd2bec5cd4"} Feb 17 16:24:00 crc kubenswrapper[4874]: I0217 16:24:00.554946 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-z9lkg" event={"ID":"a8acd227-75af-40c7-ab98-b1e5b4bcab38","Type":"ContainerStarted","Data":"8965461e20ae4d688bf10a4a62b9d08ade35e9f329f6bb68229fad27b0661643"} Feb 17 16:24:00 crc kubenswrapper[4874]: I0217 16:24:00.560432 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d557-account-create-update-jvmth" Feb 17 16:24:00 crc kubenswrapper[4874]: I0217 16:24:00.560891 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d557-account-create-update-jvmth" event={"ID":"29f331e0-01bd-4693-a5fd-46739a5ddec4","Type":"ContainerDied","Data":"4a3847493b830b9c17dd2b537c4a882e8af7e1ac55ef70c055aee365b2f34511"} Feb 17 16:24:00 crc kubenswrapper[4874]: I0217 16:24:00.560954 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a3847493b830b9c17dd2b537c4a882e8af7e1ac55ef70c055aee365b2f34511" Feb 17 16:24:03 crc kubenswrapper[4874]: I0217 16:24:03.601358 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-7btxx" event={"ID":"41f01982-4445-4662-998f-bc618d020727","Type":"ContainerStarted","Data":"898f1f2c7242a9af21f78cfd0468ecf2ce6fe1b41559458f2fc9ef20b03e288e"} Feb 17 16:24:03 crc kubenswrapper[4874]: I0217 16:24:03.605162 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-z9lkg" Feb 17 16:24:03 crc kubenswrapper[4874]: I0217 16:24:03.605187 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-z9lkg" 
event={"ID":"a8acd227-75af-40c7-ab98-b1e5b4bcab38","Type":"ContainerStarted","Data":"948fd4a55c7ab06ef10bbf950ec52f4c6f033dd7de796b988ad2bfab487415ca"} Feb 17 16:24:03 crc kubenswrapper[4874]: I0217 16:24:03.644353 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-7btxx" podStartSLOduration=3.008003318 podStartE2EDuration="9.64433123s" podCreationTimestamp="2026-02-17 16:23:54 +0000 UTC" firstStartedPulling="2026-02-17 16:23:55.960192526 +0000 UTC m=+1246.254581087" lastFinishedPulling="2026-02-17 16:24:02.596520438 +0000 UTC m=+1252.890908999" observedRunningTime="2026-02-17 16:24:03.633578538 +0000 UTC m=+1253.927967089" watchObservedRunningTime="2026-02-17 16:24:03.64433123 +0000 UTC m=+1253.938719791" Feb 17 16:24:03 crc kubenswrapper[4874]: I0217 16:24:03.666033 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f59b8f679-z9lkg" podStartSLOduration=6.6660074 podStartE2EDuration="6.6660074s" podCreationTimestamp="2026-02-17 16:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:24:03.662425702 +0000 UTC m=+1253.956814263" watchObservedRunningTime="2026-02-17 16:24:03.6660074 +0000 UTC m=+1253.960395971" Feb 17 16:24:06 crc kubenswrapper[4874]: I0217 16:24:06.471756 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 17 16:24:06 crc kubenswrapper[4874]: I0217 16:24:06.475464 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 17 16:24:06 crc kubenswrapper[4874]: I0217 16:24:06.641699 4874 generic.go:334] "Generic (PLEG): container finished" podID="41f01982-4445-4662-998f-bc618d020727" containerID="898f1f2c7242a9af21f78cfd0468ecf2ce6fe1b41559458f2fc9ef20b03e288e" exitCode=0 Feb 17 16:24:06 crc kubenswrapper[4874]: 
I0217 16:24:06.641800 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-7btxx" event={"ID":"41f01982-4445-4662-998f-bc618d020727","Type":"ContainerDied","Data":"898f1f2c7242a9af21f78cfd0468ecf2ce6fe1b41559458f2fc9ef20b03e288e"} Feb 17 16:24:06 crc kubenswrapper[4874]: I0217 16:24:06.649247 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.062970 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-7btxx" Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.116093 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41f01982-4445-4662-998f-bc618d020727-config-data\") pod \"41f01982-4445-4662-998f-bc618d020727\" (UID: \"41f01982-4445-4662-998f-bc618d020727\") " Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.116204 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvjrt\" (UniqueName: \"kubernetes.io/projected/41f01982-4445-4662-998f-bc618d020727-kube-api-access-tvjrt\") pod \"41f01982-4445-4662-998f-bc618d020727\" (UID: \"41f01982-4445-4662-998f-bc618d020727\") " Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.116329 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41f01982-4445-4662-998f-bc618d020727-combined-ca-bundle\") pod \"41f01982-4445-4662-998f-bc618d020727\" (UID: \"41f01982-4445-4662-998f-bc618d020727\") " Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.122401 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41f01982-4445-4662-998f-bc618d020727-kube-api-access-tvjrt" (OuterVolumeSpecName: "kube-api-access-tvjrt") pod 
"41f01982-4445-4662-998f-bc618d020727" (UID: "41f01982-4445-4662-998f-bc618d020727"). InnerVolumeSpecName "kube-api-access-tvjrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.149337 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41f01982-4445-4662-998f-bc618d020727-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41f01982-4445-4662-998f-bc618d020727" (UID: "41f01982-4445-4662-998f-bc618d020727"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.175336 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41f01982-4445-4662-998f-bc618d020727-config-data" (OuterVolumeSpecName: "config-data") pod "41f01982-4445-4662-998f-bc618d020727" (UID: "41f01982-4445-4662-998f-bc618d020727"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.217573 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41f01982-4445-4662-998f-bc618d020727-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.217604 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41f01982-4445-4662-998f-bc618d020727-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.217636 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvjrt\" (UniqueName: \"kubernetes.io/projected/41f01982-4445-4662-998f-bc618d020727-kube-api-access-tvjrt\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.263099 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f59b8f679-z9lkg" Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.319745 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-spnx4"] Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.320217 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-spnx4" podUID="3c54a6b1-bb00-46fc-91bf-d0c312daceb6" containerName="dnsmasq-dns" containerID="cri-o://c07f0c8d037cc6f7f2614b10e937e19f4fd88860801c82fa7eeefb0f40841360" gracePeriod=10 Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.666525 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-7btxx" event={"ID":"41f01982-4445-4662-998f-bc618d020727","Type":"ContainerDied","Data":"5885e17180a576baf20733c2eb197aa8115e6848faff1ab695e614b88ac2e8af"} Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.666574 4874 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="5885e17180a576baf20733c2eb197aa8115e6848faff1ab695e614b88ac2e8af" Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.666643 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-7btxx" Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.669753 4874 generic.go:334] "Generic (PLEG): container finished" podID="3c54a6b1-bb00-46fc-91bf-d0c312daceb6" containerID="c07f0c8d037cc6f7f2614b10e937e19f4fd88860801c82fa7eeefb0f40841360" exitCode=0 Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.669793 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-spnx4" event={"ID":"3c54a6b1-bb00-46fc-91bf-d0c312daceb6","Type":"ContainerDied","Data":"c07f0c8d037cc6f7f2614b10e937e19f4fd88860801c82fa7eeefb0f40841360"} Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.746723 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-spnx4" Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.926210 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-7s4g2"] Feb 17 16:24:08 crc kubenswrapper[4874]: E0217 16:24:08.927128 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41f01982-4445-4662-998f-bc618d020727" containerName="keystone-db-sync" Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.927153 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="41f01982-4445-4662-998f-bc618d020727" containerName="keystone-db-sync" Feb 17 16:24:08 crc kubenswrapper[4874]: E0217 16:24:08.927182 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c54a6b1-bb00-46fc-91bf-d0c312daceb6" containerName="init" Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.927191 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c54a6b1-bb00-46fc-91bf-d0c312daceb6" containerName="init" Feb 17 16:24:08 crc 
kubenswrapper[4874]: E0217 16:24:08.927208 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a9b479f-3960-4878-a2a9-48ac751b4149" containerName="mariadb-account-create-update" Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.927217 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a9b479f-3960-4878-a2a9-48ac751b4149" containerName="mariadb-account-create-update" Feb 17 16:24:08 crc kubenswrapper[4874]: E0217 16:24:08.927240 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e865ad98-6d8f-4a54-9717-10028d7c52d1" containerName="mariadb-database-create" Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.927258 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="e865ad98-6d8f-4a54-9717-10028d7c52d1" containerName="mariadb-database-create" Feb 17 16:24:08 crc kubenswrapper[4874]: E0217 16:24:08.927274 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb3d3d3a-23a3-420e-9651-edf451bc3606" containerName="mariadb-database-create" Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.927283 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb3d3d3a-23a3-420e-9651-edf451bc3606" containerName="mariadb-database-create" Feb 17 16:24:08 crc kubenswrapper[4874]: E0217 16:24:08.927300 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07225064-be24-4f87-b130-bfdf2d08c472" containerName="init" Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.927307 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="07225064-be24-4f87-b130-bfdf2d08c472" containerName="init" Feb 17 16:24:08 crc kubenswrapper[4874]: E0217 16:24:08.927319 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93678eb9-19c1-490b-aa7a-d07e21f6ab56" containerName="mariadb-account-create-update" Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.927329 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="93678eb9-19c1-490b-aa7a-d07e21f6ab56" 
containerName="mariadb-account-create-update" Feb 17 16:24:08 crc kubenswrapper[4874]: E0217 16:24:08.927344 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c54a6b1-bb00-46fc-91bf-d0c312daceb6" containerName="dnsmasq-dns" Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.927351 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c54a6b1-bb00-46fc-91bf-d0c312daceb6" containerName="dnsmasq-dns" Feb 17 16:24:08 crc kubenswrapper[4874]: E0217 16:24:08.927366 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34c21838-f8c0-4d47-8ccf-a92ff6452532" containerName="mariadb-database-create" Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.927399 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="34c21838-f8c0-4d47-8ccf-a92ff6452532" containerName="mariadb-database-create" Feb 17 16:24:08 crc kubenswrapper[4874]: E0217 16:24:08.927418 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3aea93a-b865-4e18-bb2e-b2dc7d6f821a" containerName="mariadb-database-create" Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.927425 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3aea93a-b865-4e18-bb2e-b2dc7d6f821a" containerName="mariadb-database-create" Feb 17 16:24:08 crc kubenswrapper[4874]: E0217 16:24:08.927439 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b905f7a7-368c-492c-b4ad-63bcc5cd9e0f" containerName="mariadb-account-create-update" Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.927448 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="b905f7a7-368c-492c-b4ad-63bcc5cd9e0f" containerName="mariadb-account-create-update" Feb 17 16:24:08 crc kubenswrapper[4874]: E0217 16:24:08.927471 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29f331e0-01bd-4693-a5fd-46739a5ddec4" containerName="mariadb-account-create-update" Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.927479 4874 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="29f331e0-01bd-4693-a5fd-46739a5ddec4" containerName="mariadb-account-create-update" Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.927714 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="34c21838-f8c0-4d47-8ccf-a92ff6452532" containerName="mariadb-database-create" Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.927736 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="e865ad98-6d8f-4a54-9717-10028d7c52d1" containerName="mariadb-database-create" Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.927747 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="93678eb9-19c1-490b-aa7a-d07e21f6ab56" containerName="mariadb-account-create-update" Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.927765 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c54a6b1-bb00-46fc-91bf-d0c312daceb6" containerName="dnsmasq-dns" Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.927774 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="29f331e0-01bd-4693-a5fd-46739a5ddec4" containerName="mariadb-account-create-update" Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.927787 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="07225064-be24-4f87-b130-bfdf2d08c472" containerName="init" Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.927797 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb3d3d3a-23a3-420e-9651-edf451bc3606" containerName="mariadb-database-create" Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.927809 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a9b479f-3960-4878-a2a9-48ac751b4149" containerName="mariadb-account-create-update" Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.927828 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3aea93a-b865-4e18-bb2e-b2dc7d6f821a" containerName="mariadb-database-create" 
Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.927838 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="b905f7a7-368c-492c-b4ad-63bcc5cd9e0f" containerName="mariadb-account-create-update" Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.927852 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="41f01982-4445-4662-998f-bc618d020727" containerName="keystone-db-sync" Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.931417 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-7s4g2" Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.956659 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c54a6b1-bb00-46fc-91bf-d0c312daceb6-ovsdbserver-sb\") pod \"3c54a6b1-bb00-46fc-91bf-d0c312daceb6\" (UID: \"3c54a6b1-bb00-46fc-91bf-d0c312daceb6\") " Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.956737 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c54a6b1-bb00-46fc-91bf-d0c312daceb6-ovsdbserver-nb\") pod \"3c54a6b1-bb00-46fc-91bf-d0c312daceb6\" (UID: \"3c54a6b1-bb00-46fc-91bf-d0c312daceb6\") " Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.956773 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c54a6b1-bb00-46fc-91bf-d0c312daceb6-config\") pod \"3c54a6b1-bb00-46fc-91bf-d0c312daceb6\" (UID: \"3c54a6b1-bb00-46fc-91bf-d0c312daceb6\") " Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.956916 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwmgq\" (UniqueName: \"kubernetes.io/projected/3c54a6b1-bb00-46fc-91bf-d0c312daceb6-kube-api-access-kwmgq\") pod \"3c54a6b1-bb00-46fc-91bf-d0c312daceb6\" (UID: 
\"3c54a6b1-bb00-46fc-91bf-d0c312daceb6\") " Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.956959 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c54a6b1-bb00-46fc-91bf-d0c312daceb6-dns-svc\") pod \"3c54a6b1-bb00-46fc-91bf-d0c312daceb6\" (UID: \"3c54a6b1-bb00-46fc-91bf-d0c312daceb6\") " Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.971007 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-7s4g2"] Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.982170 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c54a6b1-bb00-46fc-91bf-d0c312daceb6-kube-api-access-kwmgq" (OuterVolumeSpecName: "kube-api-access-kwmgq") pod "3c54a6b1-bb00-46fc-91bf-d0c312daceb6" (UID: "3c54a6b1-bb00-46fc-91bf-d0c312daceb6"). InnerVolumeSpecName "kube-api-access-kwmgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.988423 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-5tbz9"] Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.990549 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-5tbz9" Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.994752 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 17 16:24:08 crc kubenswrapper[4874]: I0217 16:24:08.994953 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.002312 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7tb22" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.002805 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.002910 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.015039 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5tbz9"] Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.058825 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/877610c6-8111-4aff-b0ad-d699834accca-config\") pod \"dnsmasq-dns-bbf5cc879-7s4g2\" (UID: \"877610c6-8111-4aff-b0ad-d699834accca\") " pod="openstack/dnsmasq-dns-bbf5cc879-7s4g2" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.058883 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/877610c6-8111-4aff-b0ad-d699834accca-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-7s4g2\" (UID: \"877610c6-8111-4aff-b0ad-d699834accca\") " pod="openstack/dnsmasq-dns-bbf5cc879-7s4g2" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.058944 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/877610c6-8111-4aff-b0ad-d699834accca-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-7s4g2\" (UID: \"877610c6-8111-4aff-b0ad-d699834accca\") " pod="openstack/dnsmasq-dns-bbf5cc879-7s4g2" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.058995 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/877610c6-8111-4aff-b0ad-d699834accca-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-7s4g2\" (UID: \"877610c6-8111-4aff-b0ad-d699834accca\") " pod="openstack/dnsmasq-dns-bbf5cc879-7s4g2" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.059032 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plkxk\" (UniqueName: \"kubernetes.io/projected/877610c6-8111-4aff-b0ad-d699834accca-kube-api-access-plkxk\") pod \"dnsmasq-dns-bbf5cc879-7s4g2\" (UID: \"877610c6-8111-4aff-b0ad-d699834accca\") " pod="openstack/dnsmasq-dns-bbf5cc879-7s4g2" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.059055 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/877610c6-8111-4aff-b0ad-d699834accca-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-7s4g2\" (UID: \"877610c6-8111-4aff-b0ad-d699834accca\") " pod="openstack/dnsmasq-dns-bbf5cc879-7s4g2" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.059690 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwmgq\" (UniqueName: \"kubernetes.io/projected/3c54a6b1-bb00-46fc-91bf-d0c312daceb6-kube-api-access-kwmgq\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.083169 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-k5j4f"] Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.084914 4874 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/heat-db-sync-k5j4f" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.090447 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c54a6b1-bb00-46fc-91bf-d0c312daceb6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3c54a6b1-bb00-46fc-91bf-d0c312daceb6" (UID: "3c54a6b1-bb00-46fc-91bf-d0c312daceb6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.094025 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.094251 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-qsm84" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.103438 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-k5j4f"] Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.129411 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c54a6b1-bb00-46fc-91bf-d0c312daceb6-config" (OuterVolumeSpecName: "config") pod "3c54a6b1-bb00-46fc-91bf-d0c312daceb6" (UID: "3c54a6b1-bb00-46fc-91bf-d0c312daceb6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.163303 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/877610c6-8111-4aff-b0ad-d699834accca-config\") pod \"dnsmasq-dns-bbf5cc879-7s4g2\" (UID: \"877610c6-8111-4aff-b0ad-d699834accca\") " pod="openstack/dnsmasq-dns-bbf5cc879-7s4g2" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.163351 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r64jv\" (UniqueName: \"kubernetes.io/projected/dbb238f4-41a6-4299-8d81-887a0957e5d2-kube-api-access-r64jv\") pod \"keystone-bootstrap-5tbz9\" (UID: \"dbb238f4-41a6-4299-8d81-887a0957e5d2\") " pod="openstack/keystone-bootstrap-5tbz9" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.163378 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbb238f4-41a6-4299-8d81-887a0957e5d2-config-data\") pod \"keystone-bootstrap-5tbz9\" (UID: \"dbb238f4-41a6-4299-8d81-887a0957e5d2\") " pod="openstack/keystone-bootstrap-5tbz9" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.163401 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/877610c6-8111-4aff-b0ad-d699834accca-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-7s4g2\" (UID: \"877610c6-8111-4aff-b0ad-d699834accca\") " pod="openstack/dnsmasq-dns-bbf5cc879-7s4g2" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.163432 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dbb238f4-41a6-4299-8d81-887a0957e5d2-fernet-keys\") pod \"keystone-bootstrap-5tbz9\" (UID: \"dbb238f4-41a6-4299-8d81-887a0957e5d2\") " 
pod="openstack/keystone-bootstrap-5tbz9" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.163476 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dbb238f4-41a6-4299-8d81-887a0957e5d2-credential-keys\") pod \"keystone-bootstrap-5tbz9\" (UID: \"dbb238f4-41a6-4299-8d81-887a0957e5d2\") " pod="openstack/keystone-bootstrap-5tbz9" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.163494 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/877610c6-8111-4aff-b0ad-d699834accca-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-7s4g2\" (UID: \"877610c6-8111-4aff-b0ad-d699834accca\") " pod="openstack/dnsmasq-dns-bbf5cc879-7s4g2" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.163514 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbb238f4-41a6-4299-8d81-887a0957e5d2-scripts\") pod \"keystone-bootstrap-5tbz9\" (UID: \"dbb238f4-41a6-4299-8d81-887a0957e5d2\") " pod="openstack/keystone-bootstrap-5tbz9" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.163542 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/877610c6-8111-4aff-b0ad-d699834accca-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-7s4g2\" (UID: \"877610c6-8111-4aff-b0ad-d699834accca\") " pod="openstack/dnsmasq-dns-bbf5cc879-7s4g2" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.163573 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plkxk\" (UniqueName: \"kubernetes.io/projected/877610c6-8111-4aff-b0ad-d699834accca-kube-api-access-plkxk\") pod \"dnsmasq-dns-bbf5cc879-7s4g2\" (UID: \"877610c6-8111-4aff-b0ad-d699834accca\") " 
pod="openstack/dnsmasq-dns-bbf5cc879-7s4g2" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.163594 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/877610c6-8111-4aff-b0ad-d699834accca-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-7s4g2\" (UID: \"877610c6-8111-4aff-b0ad-d699834accca\") " pod="openstack/dnsmasq-dns-bbf5cc879-7s4g2" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.163640 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbb238f4-41a6-4299-8d81-887a0957e5d2-combined-ca-bundle\") pod \"keystone-bootstrap-5tbz9\" (UID: \"dbb238f4-41a6-4299-8d81-887a0957e5d2\") " pod="openstack/keystone-bootstrap-5tbz9" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.163688 4874 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c54a6b1-bb00-46fc-91bf-d0c312daceb6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.163701 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c54a6b1-bb00-46fc-91bf-d0c312daceb6-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.164510 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/877610c6-8111-4aff-b0ad-d699834accca-config\") pod \"dnsmasq-dns-bbf5cc879-7s4g2\" (UID: \"877610c6-8111-4aff-b0ad-d699834accca\") " pod="openstack/dnsmasq-dns-bbf5cc879-7s4g2" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.165052 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/877610c6-8111-4aff-b0ad-d699834accca-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-7s4g2\" (UID: 
\"877610c6-8111-4aff-b0ad-d699834accca\") " pod="openstack/dnsmasq-dns-bbf5cc879-7s4g2" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.165331 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/877610c6-8111-4aff-b0ad-d699834accca-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-7s4g2\" (UID: \"877610c6-8111-4aff-b0ad-d699834accca\") " pod="openstack/dnsmasq-dns-bbf5cc879-7s4g2" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.167064 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/877610c6-8111-4aff-b0ad-d699834accca-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-7s4g2\" (UID: \"877610c6-8111-4aff-b0ad-d699834accca\") " pod="openstack/dnsmasq-dns-bbf5cc879-7s4g2" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.171328 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c54a6b1-bb00-46fc-91bf-d0c312daceb6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3c54a6b1-bb00-46fc-91bf-d0c312daceb6" (UID: "3c54a6b1-bb00-46fc-91bf-d0c312daceb6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.185262 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/877610c6-8111-4aff-b0ad-d699834accca-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-7s4g2\" (UID: \"877610c6-8111-4aff-b0ad-d699834accca\") " pod="openstack/dnsmasq-dns-bbf5cc879-7s4g2" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.191348 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c54a6b1-bb00-46fc-91bf-d0c312daceb6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3c54a6b1-bb00-46fc-91bf-d0c312daceb6" (UID: "3c54a6b1-bb00-46fc-91bf-d0c312daceb6"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.199386 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plkxk\" (UniqueName: \"kubernetes.io/projected/877610c6-8111-4aff-b0ad-d699834accca-kube-api-access-plkxk\") pod \"dnsmasq-dns-bbf5cc879-7s4g2\" (UID: \"877610c6-8111-4aff-b0ad-d699834accca\") " pod="openstack/dnsmasq-dns-bbf5cc879-7s4g2" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.260445 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-jrg8w"] Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.262264 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-jrg8w" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.284450 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-mz588" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.285032 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.285425 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9l9c\" (UniqueName: \"kubernetes.io/projected/96118c9a-6b15-48a8-b6d9-a2146dc0182c-kube-api-access-f9l9c\") pod \"heat-db-sync-k5j4f\" (UID: \"96118c9a-6b15-48a8-b6d9-a2146dc0182c\") " pod="openstack/heat-db-sync-k5j4f" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.285484 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r64jv\" (UniqueName: \"kubernetes.io/projected/dbb238f4-41a6-4299-8d81-887a0957e5d2-kube-api-access-r64jv\") pod \"keystone-bootstrap-5tbz9\" (UID: \"dbb238f4-41a6-4299-8d81-887a0957e5d2\") " pod="openstack/keystone-bootstrap-5tbz9" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 
16:24:09.285522 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbb238f4-41a6-4299-8d81-887a0957e5d2-config-data\") pod \"keystone-bootstrap-5tbz9\" (UID: \"dbb238f4-41a6-4299-8d81-887a0957e5d2\") " pod="openstack/keystone-bootstrap-5tbz9" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.285557 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dbb238f4-41a6-4299-8d81-887a0957e5d2-fernet-keys\") pod \"keystone-bootstrap-5tbz9\" (UID: \"dbb238f4-41a6-4299-8d81-887a0957e5d2\") " pod="openstack/keystone-bootstrap-5tbz9" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.285600 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dbb238f4-41a6-4299-8d81-887a0957e5d2-credential-keys\") pod \"keystone-bootstrap-5tbz9\" (UID: \"dbb238f4-41a6-4299-8d81-887a0957e5d2\") " pod="openstack/keystone-bootstrap-5tbz9" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.285624 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbb238f4-41a6-4299-8d81-887a0957e5d2-scripts\") pod \"keystone-bootstrap-5tbz9\" (UID: \"dbb238f4-41a6-4299-8d81-887a0957e5d2\") " pod="openstack/keystone-bootstrap-5tbz9" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.285694 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96118c9a-6b15-48a8-b6d9-a2146dc0182c-combined-ca-bundle\") pod \"heat-db-sync-k5j4f\" (UID: \"96118c9a-6b15-48a8-b6d9-a2146dc0182c\") " pod="openstack/heat-db-sync-k5j4f" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.285711 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/96118c9a-6b15-48a8-b6d9-a2146dc0182c-config-data\") pod \"heat-db-sync-k5j4f\" (UID: \"96118c9a-6b15-48a8-b6d9-a2146dc0182c\") " pod="openstack/heat-db-sync-k5j4f" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.285736 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbb238f4-41a6-4299-8d81-887a0957e5d2-combined-ca-bundle\") pod \"keystone-bootstrap-5tbz9\" (UID: \"dbb238f4-41a6-4299-8d81-887a0957e5d2\") " pod="openstack/keystone-bootstrap-5tbz9" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.285795 4874 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c54a6b1-bb00-46fc-91bf-d0c312daceb6-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.285805 4874 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c54a6b1-bb00-46fc-91bf-d0c312daceb6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.285436 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.289314 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-jrg8w"] Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.316611 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dbb238f4-41a6-4299-8d81-887a0957e5d2-credential-keys\") pod \"keystone-bootstrap-5tbz9\" (UID: \"dbb238f4-41a6-4299-8d81-887a0957e5d2\") " pod="openstack/keystone-bootstrap-5tbz9" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.316828 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/dbb238f4-41a6-4299-8d81-887a0957e5d2-scripts\") pod \"keystone-bootstrap-5tbz9\" (UID: \"dbb238f4-41a6-4299-8d81-887a0957e5d2\") " pod="openstack/keystone-bootstrap-5tbz9" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.318275 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbb238f4-41a6-4299-8d81-887a0957e5d2-combined-ca-bundle\") pod \"keystone-bootstrap-5tbz9\" (UID: \"dbb238f4-41a6-4299-8d81-887a0957e5d2\") " pod="openstack/keystone-bootstrap-5tbz9" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.318841 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbb238f4-41a6-4299-8d81-887a0957e5d2-config-data\") pod \"keystone-bootstrap-5tbz9\" (UID: \"dbb238f4-41a6-4299-8d81-887a0957e5d2\") " pod="openstack/keystone-bootstrap-5tbz9" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.333794 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dbb238f4-41a6-4299-8d81-887a0957e5d2-fernet-keys\") pod \"keystone-bootstrap-5tbz9\" (UID: \"dbb238f4-41a6-4299-8d81-887a0957e5d2\") " pod="openstack/keystone-bootstrap-5tbz9" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.334618 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r64jv\" (UniqueName: \"kubernetes.io/projected/dbb238f4-41a6-4299-8d81-887a0957e5d2-kube-api-access-r64jv\") pod \"keystone-bootstrap-5tbz9\" (UID: \"dbb238f4-41a6-4299-8d81-887a0957e5d2\") " pod="openstack/keystone-bootstrap-5tbz9" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.389012 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/10d748cd-cbae-4113-bfed-39c4511a879f-etc-machine-id\") pod \"cinder-db-sync-jrg8w\" (UID: 
\"10d748cd-cbae-4113-bfed-39c4511a879f\") " pod="openstack/cinder-db-sync-jrg8w" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.389295 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10d748cd-cbae-4113-bfed-39c4511a879f-combined-ca-bundle\") pod \"cinder-db-sync-jrg8w\" (UID: \"10d748cd-cbae-4113-bfed-39c4511a879f\") " pod="openstack/cinder-db-sync-jrg8w" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.389405 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10d748cd-cbae-4113-bfed-39c4511a879f-scripts\") pod \"cinder-db-sync-jrg8w\" (UID: \"10d748cd-cbae-4113-bfed-39c4511a879f\") " pod="openstack/cinder-db-sync-jrg8w" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.389513 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96118c9a-6b15-48a8-b6d9-a2146dc0182c-combined-ca-bundle\") pod \"heat-db-sync-k5j4f\" (UID: \"96118c9a-6b15-48a8-b6d9-a2146dc0182c\") " pod="openstack/heat-db-sync-k5j4f" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.389587 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96118c9a-6b15-48a8-b6d9-a2146dc0182c-config-data\") pod \"heat-db-sync-k5j4f\" (UID: \"96118c9a-6b15-48a8-b6d9-a2146dc0182c\") " pod="openstack/heat-db-sync-k5j4f" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.389771 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9l9c\" (UniqueName: \"kubernetes.io/projected/96118c9a-6b15-48a8-b6d9-a2146dc0182c-kube-api-access-f9l9c\") pod \"heat-db-sync-k5j4f\" (UID: \"96118c9a-6b15-48a8-b6d9-a2146dc0182c\") " pod="openstack/heat-db-sync-k5j4f" Feb 17 16:24:09 crc 
kubenswrapper[4874]: I0217 16:24:09.392399 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf2d9\" (UniqueName: \"kubernetes.io/projected/10d748cd-cbae-4113-bfed-39c4511a879f-kube-api-access-nf2d9\") pod \"cinder-db-sync-jrg8w\" (UID: \"10d748cd-cbae-4113-bfed-39c4511a879f\") " pod="openstack/cinder-db-sync-jrg8w" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.392666 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/10d748cd-cbae-4113-bfed-39c4511a879f-db-sync-config-data\") pod \"cinder-db-sync-jrg8w\" (UID: \"10d748cd-cbae-4113-bfed-39c4511a879f\") " pod="openstack/cinder-db-sync-jrg8w" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.392791 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10d748cd-cbae-4113-bfed-39c4511a879f-config-data\") pod \"cinder-db-sync-jrg8w\" (UID: \"10d748cd-cbae-4113-bfed-39c4511a879f\") " pod="openstack/cinder-db-sync-jrg8w" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.410046 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-dhtc8"] Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.412187 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-dhtc8" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.412918 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96118c9a-6b15-48a8-b6d9-a2146dc0182c-combined-ca-bundle\") pod \"heat-db-sync-k5j4f\" (UID: \"96118c9a-6b15-48a8-b6d9-a2146dc0182c\") " pod="openstack/heat-db-sync-k5j4f" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.416772 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96118c9a-6b15-48a8-b6d9-a2146dc0182c-config-data\") pod \"heat-db-sync-k5j4f\" (UID: \"96118c9a-6b15-48a8-b6d9-a2146dc0182c\") " pod="openstack/heat-db-sync-k5j4f" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.419748 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-nnqww" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.420038 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.434408 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-7s4g2"] Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.435218 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-7s4g2" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.445141 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-dhtc8"] Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.457138 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9l9c\" (UniqueName: \"kubernetes.io/projected/96118c9a-6b15-48a8-b6d9-a2146dc0182c-kube-api-access-f9l9c\") pod \"heat-db-sync-k5j4f\" (UID: \"96118c9a-6b15-48a8-b6d9-a2146dc0182c\") " pod="openstack/heat-db-sync-k5j4f" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.457375 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-pfkph"] Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.466978 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-lw7kx"] Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.467995 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-pfkph" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.478984 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-lw7kx" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.490306 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-pfkph"] Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.496296 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.496551 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.496656 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.497062 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-lln7c" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.497188 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.497351 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-cmjcc" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.500635 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10d748cd-cbae-4113-bfed-39c4511a879f-config-data\") pod \"cinder-db-sync-jrg8w\" (UID: \"10d748cd-cbae-4113-bfed-39c4511a879f\") " pod="openstack/cinder-db-sync-jrg8w" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.500924 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4a96348-a1c6-4470-ad3a-d87cc20c8d3c-combined-ca-bundle\") pod \"barbican-db-sync-dhtc8\" (UID: \"a4a96348-a1c6-4470-ad3a-d87cc20c8d3c\") " pod="openstack/barbican-db-sync-dhtc8" Feb 
17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.501256 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/10d748cd-cbae-4113-bfed-39c4511a879f-etc-machine-id\") pod \"cinder-db-sync-jrg8w\" (UID: \"10d748cd-cbae-4113-bfed-39c4511a879f\") " pod="openstack/cinder-db-sync-jrg8w" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.501343 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10d748cd-cbae-4113-bfed-39c4511a879f-combined-ca-bundle\") pod \"cinder-db-sync-jrg8w\" (UID: \"10d748cd-cbae-4113-bfed-39c4511a879f\") " pod="openstack/cinder-db-sync-jrg8w" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.501587 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10d748cd-cbae-4113-bfed-39c4511a879f-scripts\") pod \"cinder-db-sync-jrg8w\" (UID: \"10d748cd-cbae-4113-bfed-39c4511a879f\") " pod="openstack/cinder-db-sync-jrg8w" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.501788 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a4a96348-a1c6-4470-ad3a-d87cc20c8d3c-db-sync-config-data\") pod \"barbican-db-sync-dhtc8\" (UID: \"a4a96348-a1c6-4470-ad3a-d87cc20c8d3c\") " pod="openstack/barbican-db-sync-dhtc8" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.502949 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf2d9\" (UniqueName: \"kubernetes.io/projected/10d748cd-cbae-4113-bfed-39c4511a879f-kube-api-access-nf2d9\") pod \"cinder-db-sync-jrg8w\" (UID: \"10d748cd-cbae-4113-bfed-39c4511a879f\") " pod="openstack/cinder-db-sync-jrg8w" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.503132 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzzsz\" (UniqueName: \"kubernetes.io/projected/a4a96348-a1c6-4470-ad3a-d87cc20c8d3c-kube-api-access-nzzsz\") pod \"barbican-db-sync-dhtc8\" (UID: \"a4a96348-a1c6-4470-ad3a-d87cc20c8d3c\") " pod="openstack/barbican-db-sync-dhtc8" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.503250 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/10d748cd-cbae-4113-bfed-39c4511a879f-db-sync-config-data\") pod \"cinder-db-sync-jrg8w\" (UID: \"10d748cd-cbae-4113-bfed-39c4511a879f\") " pod="openstack/cinder-db-sync-jrg8w" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.501820 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-lw7kx"] Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.501885 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/10d748cd-cbae-4113-bfed-39c4511a879f-etc-machine-id\") pod \"cinder-db-sync-jrg8w\" (UID: \"10d748cd-cbae-4113-bfed-39c4511a879f\") " pod="openstack/cinder-db-sync-jrg8w" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.524676 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10d748cd-cbae-4113-bfed-39c4511a879f-scripts\") pod \"cinder-db-sync-jrg8w\" (UID: \"10d748cd-cbae-4113-bfed-39c4511a879f\") " pod="openstack/cinder-db-sync-jrg8w" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.525021 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10d748cd-cbae-4113-bfed-39c4511a879f-combined-ca-bundle\") pod \"cinder-db-sync-jrg8w\" (UID: \"10d748cd-cbae-4113-bfed-39c4511a879f\") " pod="openstack/cinder-db-sync-jrg8w" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 
16:24:09.528685 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/10d748cd-cbae-4113-bfed-39c4511a879f-db-sync-config-data\") pod \"cinder-db-sync-jrg8w\" (UID: \"10d748cd-cbae-4113-bfed-39c4511a879f\") " pod="openstack/cinder-db-sync-jrg8w" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.532291 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5tbz9" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.533282 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10d748cd-cbae-4113-bfed-39c4511a879f-config-data\") pod \"cinder-db-sync-jrg8w\" (UID: \"10d748cd-cbae-4113-bfed-39c4511a879f\") " pod="openstack/cinder-db-sync-jrg8w" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.548631 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-k5j4f" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.557681 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf2d9\" (UniqueName: \"kubernetes.io/projected/10d748cd-cbae-4113-bfed-39c4511a879f-kube-api-access-nf2d9\") pod \"cinder-db-sync-jrg8w\" (UID: \"10d748cd-cbae-4113-bfed-39c4511a879f\") " pod="openstack/cinder-db-sync-jrg8w" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.557786 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-wdr7t"] Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.559717 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-wdr7t" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.585019 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-wdr7t"] Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.606320 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcf0f49a-5960-41a8-b699-8fb05241ee31-config-data\") pod \"placement-db-sync-lw7kx\" (UID: \"dcf0f49a-5960-41a8-b699-8fb05241ee31\") " pod="openstack/placement-db-sync-lw7kx" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.606369 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcf0f49a-5960-41a8-b699-8fb05241ee31-combined-ca-bundle\") pod \"placement-db-sync-lw7kx\" (UID: \"dcf0f49a-5960-41a8-b699-8fb05241ee31\") " pod="openstack/placement-db-sync-lw7kx" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.606401 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcf0f49a-5960-41a8-b699-8fb05241ee31-logs\") pod \"placement-db-sync-lw7kx\" (UID: \"dcf0f49a-5960-41a8-b699-8fb05241ee31\") " pod="openstack/placement-db-sync-lw7kx" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.606443 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzzsz\" (UniqueName: \"kubernetes.io/projected/a4a96348-a1c6-4470-ad3a-d87cc20c8d3c-kube-api-access-nzzsz\") pod \"barbican-db-sync-dhtc8\" (UID: \"a4a96348-a1c6-4470-ad3a-d87cc20c8d3c\") " pod="openstack/barbican-db-sync-dhtc8" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.606463 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjzbv\" (UniqueName: 
\"kubernetes.io/projected/46bb9425-0e75-4b58-b0f7-f7ad6998255b-kube-api-access-rjzbv\") pod \"neutron-db-sync-pfkph\" (UID: \"46bb9425-0e75-4b58-b0f7-f7ad6998255b\") " pod="openstack/neutron-db-sync-pfkph" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.606516 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4a96348-a1c6-4470-ad3a-d87cc20c8d3c-combined-ca-bundle\") pod \"barbican-db-sync-dhtc8\" (UID: \"a4a96348-a1c6-4470-ad3a-d87cc20c8d3c\") " pod="openstack/barbican-db-sync-dhtc8" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.606561 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46bb9425-0e75-4b58-b0f7-f7ad6998255b-combined-ca-bundle\") pod \"neutron-db-sync-pfkph\" (UID: \"46bb9425-0e75-4b58-b0f7-f7ad6998255b\") " pod="openstack/neutron-db-sync-pfkph" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.606583 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/46bb9425-0e75-4b58-b0f7-f7ad6998255b-config\") pod \"neutron-db-sync-pfkph\" (UID: \"46bb9425-0e75-4b58-b0f7-f7ad6998255b\") " pod="openstack/neutron-db-sync-pfkph" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.606597 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt647\" (UniqueName: \"kubernetes.io/projected/dcf0f49a-5960-41a8-b699-8fb05241ee31-kube-api-access-nt647\") pod \"placement-db-sync-lw7kx\" (UID: \"dcf0f49a-5960-41a8-b699-8fb05241ee31\") " pod="openstack/placement-db-sync-lw7kx" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.606620 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/dcf0f49a-5960-41a8-b699-8fb05241ee31-scripts\") pod \"placement-db-sync-lw7kx\" (UID: \"dcf0f49a-5960-41a8-b699-8fb05241ee31\") " pod="openstack/placement-db-sync-lw7kx" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.606719 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a4a96348-a1c6-4470-ad3a-d87cc20c8d3c-db-sync-config-data\") pod \"barbican-db-sync-dhtc8\" (UID: \"a4a96348-a1c6-4470-ad3a-d87cc20c8d3c\") " pod="openstack/barbican-db-sync-dhtc8" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.624977 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-jrg8w" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.628191 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a4a96348-a1c6-4470-ad3a-d87cc20c8d3c-db-sync-config-data\") pod \"barbican-db-sync-dhtc8\" (UID: \"a4a96348-a1c6-4470-ad3a-d87cc20c8d3c\") " pod="openstack/barbican-db-sync-dhtc8" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.633814 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4a96348-a1c6-4470-ad3a-d87cc20c8d3c-combined-ca-bundle\") pod \"barbican-db-sync-dhtc8\" (UID: \"a4a96348-a1c6-4470-ad3a-d87cc20c8d3c\") " pod="openstack/barbican-db-sync-dhtc8" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.666753 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzzsz\" (UniqueName: \"kubernetes.io/projected/a4a96348-a1c6-4470-ad3a-d87cc20c8d3c-kube-api-access-nzzsz\") pod \"barbican-db-sync-dhtc8\" (UID: \"a4a96348-a1c6-4470-ad3a-d87cc20c8d3c\") " pod="openstack/barbican-db-sync-dhtc8" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.716605 4874 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46bb9425-0e75-4b58-b0f7-f7ad6998255b-combined-ca-bundle\") pod \"neutron-db-sync-pfkph\" (UID: \"46bb9425-0e75-4b58-b0f7-f7ad6998255b\") " pod="openstack/neutron-db-sync-pfkph" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.717388 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/46bb9425-0e75-4b58-b0f7-f7ad6998255b-config\") pod \"neutron-db-sync-pfkph\" (UID: \"46bb9425-0e75-4b58-b0f7-f7ad6998255b\") " pod="openstack/neutron-db-sync-pfkph" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.719041 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt647\" (UniqueName: \"kubernetes.io/projected/dcf0f49a-5960-41a8-b699-8fb05241ee31-kube-api-access-nt647\") pod \"placement-db-sync-lw7kx\" (UID: \"dcf0f49a-5960-41a8-b699-8fb05241ee31\") " pod="openstack/placement-db-sync-lw7kx" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.719198 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcf0f49a-5960-41a8-b699-8fb05241ee31-scripts\") pod \"placement-db-sync-lw7kx\" (UID: \"dcf0f49a-5960-41a8-b699-8fb05241ee31\") " pod="openstack/placement-db-sync-lw7kx" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.720326 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/865b8bd3-b179-4e75-a32e-0df273eac5e4-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-wdr7t\" (UID: \"865b8bd3-b179-4e75-a32e-0df273eac5e4\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wdr7t" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.720474 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/865b8bd3-b179-4e75-a32e-0df273eac5e4-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-wdr7t\" (UID: \"865b8bd3-b179-4e75-a32e-0df273eac5e4\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wdr7t" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.720572 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcf0f49a-5960-41a8-b699-8fb05241ee31-config-data\") pod \"placement-db-sync-lw7kx\" (UID: \"dcf0f49a-5960-41a8-b699-8fb05241ee31\") " pod="openstack/placement-db-sync-lw7kx" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.720658 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcf0f49a-5960-41a8-b699-8fb05241ee31-combined-ca-bundle\") pod \"placement-db-sync-lw7kx\" (UID: \"dcf0f49a-5960-41a8-b699-8fb05241ee31\") " pod="openstack/placement-db-sync-lw7kx" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.720770 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcf0f49a-5960-41a8-b699-8fb05241ee31-logs\") pod \"placement-db-sync-lw7kx\" (UID: \"dcf0f49a-5960-41a8-b699-8fb05241ee31\") " pod="openstack/placement-db-sync-lw7kx" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.721095 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/865b8bd3-b179-4e75-a32e-0df273eac5e4-config\") pod \"dnsmasq-dns-56df8fb6b7-wdr7t\" (UID: \"865b8bd3-b179-4e75-a32e-0df273eac5e4\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wdr7t" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.722003 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94p6b\" (UniqueName: 
\"kubernetes.io/projected/865b8bd3-b179-4e75-a32e-0df273eac5e4-kube-api-access-94p6b\") pod \"dnsmasq-dns-56df8fb6b7-wdr7t\" (UID: \"865b8bd3-b179-4e75-a32e-0df273eac5e4\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wdr7t" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.724395 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/865b8bd3-b179-4e75-a32e-0df273eac5e4-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-wdr7t\" (UID: \"865b8bd3-b179-4e75-a32e-0df273eac5e4\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wdr7t" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.724575 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/865b8bd3-b179-4e75-a32e-0df273eac5e4-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-wdr7t\" (UID: \"865b8bd3-b179-4e75-a32e-0df273eac5e4\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wdr7t" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.724895 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjzbv\" (UniqueName: \"kubernetes.io/projected/46bb9425-0e75-4b58-b0f7-f7ad6998255b-kube-api-access-rjzbv\") pod \"neutron-db-sync-pfkph\" (UID: \"46bb9425-0e75-4b58-b0f7-f7ad6998255b\") " pod="openstack/neutron-db-sync-pfkph" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.726765 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcf0f49a-5960-41a8-b699-8fb05241ee31-scripts\") pod \"placement-db-sync-lw7kx\" (UID: \"dcf0f49a-5960-41a8-b699-8fb05241ee31\") " pod="openstack/placement-db-sync-lw7kx" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.728777 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/46bb9425-0e75-4b58-b0f7-f7ad6998255b-config\") pod \"neutron-db-sync-pfkph\" (UID: \"46bb9425-0e75-4b58-b0f7-f7ad6998255b\") " pod="openstack/neutron-db-sync-pfkph" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.729982 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcf0f49a-5960-41a8-b699-8fb05241ee31-combined-ca-bundle\") pod \"placement-db-sync-lw7kx\" (UID: \"dcf0f49a-5960-41a8-b699-8fb05241ee31\") " pod="openstack/placement-db-sync-lw7kx" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.730767 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcf0f49a-5960-41a8-b699-8fb05241ee31-config-data\") pod \"placement-db-sync-lw7kx\" (UID: \"dcf0f49a-5960-41a8-b699-8fb05241ee31\") " pod="openstack/placement-db-sync-lw7kx" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.738765 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46bb9425-0e75-4b58-b0f7-f7ad6998255b-combined-ca-bundle\") pod \"neutron-db-sync-pfkph\" (UID: \"46bb9425-0e75-4b58-b0f7-f7ad6998255b\") " pod="openstack/neutron-db-sync-pfkph" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.739097 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcf0f49a-5960-41a8-b699-8fb05241ee31-logs\") pod \"placement-db-sync-lw7kx\" (UID: \"dcf0f49a-5960-41a8-b699-8fb05241ee31\") " pod="openstack/placement-db-sync-lw7kx" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.750751 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt647\" (UniqueName: \"kubernetes.io/projected/dcf0f49a-5960-41a8-b699-8fb05241ee31-kube-api-access-nt647\") pod \"placement-db-sync-lw7kx\" (UID: \"dcf0f49a-5960-41a8-b699-8fb05241ee31\") " 
pod="openstack/placement-db-sync-lw7kx" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.752022 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-spnx4" event={"ID":"3c54a6b1-bb00-46fc-91bf-d0c312daceb6","Type":"ContainerDied","Data":"f0736dfd253dda25c71401e065a1e00eabca45b2ccb2a5fe46699a62b3f6b256"} Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.752101 4874 scope.go:117] "RemoveContainer" containerID="c07f0c8d037cc6f7f2614b10e937e19f4fd88860801c82fa7eeefb0f40841360" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.752460 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-spnx4" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.758438 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjzbv\" (UniqueName: \"kubernetes.io/projected/46bb9425-0e75-4b58-b0f7-f7ad6998255b-kube-api-access-rjzbv\") pod \"neutron-db-sync-pfkph\" (UID: \"46bb9425-0e75-4b58-b0f7-f7ad6998255b\") " pod="openstack/neutron-db-sync-pfkph" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.761629 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-dhtc8" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.773622 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.779718 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.784292 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.784431 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.826892 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/865b8bd3-b179-4e75-a32e-0df273eac5e4-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-wdr7t\" (UID: \"865b8bd3-b179-4e75-a32e-0df273eac5e4\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wdr7t" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.826977 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/865b8bd3-b179-4e75-a32e-0df273eac5e4-config\") pod \"dnsmasq-dns-56df8fb6b7-wdr7t\" (UID: \"865b8bd3-b179-4e75-a32e-0df273eac5e4\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wdr7t" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.826995 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94p6b\" (UniqueName: \"kubernetes.io/projected/865b8bd3-b179-4e75-a32e-0df273eac5e4-kube-api-access-94p6b\") pod \"dnsmasq-dns-56df8fb6b7-wdr7t\" (UID: \"865b8bd3-b179-4e75-a32e-0df273eac5e4\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wdr7t" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.827018 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/865b8bd3-b179-4e75-a32e-0df273eac5e4-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-wdr7t\" (UID: \"865b8bd3-b179-4e75-a32e-0df273eac5e4\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wdr7t" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 
16:24:09.828064 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/865b8bd3-b179-4e75-a32e-0df273eac5e4-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-wdr7t\" (UID: \"865b8bd3-b179-4e75-a32e-0df273eac5e4\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wdr7t" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.828203 4874 scope.go:117] "RemoveContainer" containerID="bbdebe81d80dc65707b1e8398fb957fd6acb87e535a12101f6283d6d4013bd1c" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.828346 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/865b8bd3-b179-4e75-a32e-0df273eac5e4-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-wdr7t\" (UID: \"865b8bd3-b179-4e75-a32e-0df273eac5e4\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wdr7t" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.828429 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/865b8bd3-b179-4e75-a32e-0df273eac5e4-config\") pod \"dnsmasq-dns-56df8fb6b7-wdr7t\" (UID: \"865b8bd3-b179-4e75-a32e-0df273eac5e4\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wdr7t" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.828756 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/865b8bd3-b179-4e75-a32e-0df273eac5e4-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-wdr7t\" (UID: \"865b8bd3-b179-4e75-a32e-0df273eac5e4\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wdr7t" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.830860 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/865b8bd3-b179-4e75-a32e-0df273eac5e4-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-wdr7t\" (UID: \"865b8bd3-b179-4e75-a32e-0df273eac5e4\") " 
pod="openstack/dnsmasq-dns-56df8fb6b7-wdr7t" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.832730 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.838194 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/865b8bd3-b179-4e75-a32e-0df273eac5e4-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-wdr7t\" (UID: \"865b8bd3-b179-4e75-a32e-0df273eac5e4\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wdr7t" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.838368 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/865b8bd3-b179-4e75-a32e-0df273eac5e4-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-wdr7t\" (UID: \"865b8bd3-b179-4e75-a32e-0df273eac5e4\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wdr7t" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.850095 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-pfkph" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.854901 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94p6b\" (UniqueName: \"kubernetes.io/projected/865b8bd3-b179-4e75-a32e-0df273eac5e4-kube-api-access-94p6b\") pod \"dnsmasq-dns-56df8fb6b7-wdr7t\" (UID: \"865b8bd3-b179-4e75-a32e-0df273eac5e4\") " pod="openstack/dnsmasq-dns-56df8fb6b7-wdr7t" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.867966 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-spnx4"] Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.887446 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-spnx4"] Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.903181 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-lw7kx" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.929970 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgg4m\" (UniqueName: \"kubernetes.io/projected/0df3ad69-92a9-4a61-9178-619f75dc6f98-kube-api-access-lgg4m\") pod \"ceilometer-0\" (UID: \"0df3ad69-92a9-4a61-9178-619f75dc6f98\") " pod="openstack/ceilometer-0" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.930035 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0df3ad69-92a9-4a61-9178-619f75dc6f98-run-httpd\") pod \"ceilometer-0\" (UID: \"0df3ad69-92a9-4a61-9178-619f75dc6f98\") " pod="openstack/ceilometer-0" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.930173 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0df3ad69-92a9-4a61-9178-619f75dc6f98-log-httpd\") pod \"ceilometer-0\" (UID: \"0df3ad69-92a9-4a61-9178-619f75dc6f98\") " pod="openstack/ceilometer-0" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.930243 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0df3ad69-92a9-4a61-9178-619f75dc6f98-scripts\") pod \"ceilometer-0\" (UID: \"0df3ad69-92a9-4a61-9178-619f75dc6f98\") " pod="openstack/ceilometer-0" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.930284 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0df3ad69-92a9-4a61-9178-619f75dc6f98-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0df3ad69-92a9-4a61-9178-619f75dc6f98\") " pod="openstack/ceilometer-0" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.930302 4874 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0df3ad69-92a9-4a61-9178-619f75dc6f98-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0df3ad69-92a9-4a61-9178-619f75dc6f98\") " pod="openstack/ceilometer-0" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.930350 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0df3ad69-92a9-4a61-9178-619f75dc6f98-config-data\") pod \"ceilometer-0\" (UID: \"0df3ad69-92a9-4a61-9178-619f75dc6f98\") " pod="openstack/ceilometer-0" Feb 17 16:24:09 crc kubenswrapper[4874]: I0217 16:24:09.957013 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-wdr7t" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.033323 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0df3ad69-92a9-4a61-9178-619f75dc6f98-run-httpd\") pod \"ceilometer-0\" (UID: \"0df3ad69-92a9-4a61-9178-619f75dc6f98\") " pod="openstack/ceilometer-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.033405 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0df3ad69-92a9-4a61-9178-619f75dc6f98-log-httpd\") pod \"ceilometer-0\" (UID: \"0df3ad69-92a9-4a61-9178-619f75dc6f98\") " pod="openstack/ceilometer-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.033469 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0df3ad69-92a9-4a61-9178-619f75dc6f98-scripts\") pod \"ceilometer-0\" (UID: \"0df3ad69-92a9-4a61-9178-619f75dc6f98\") " pod="openstack/ceilometer-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.033509 4874 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0df3ad69-92a9-4a61-9178-619f75dc6f98-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0df3ad69-92a9-4a61-9178-619f75dc6f98\") " pod="openstack/ceilometer-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.033523 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0df3ad69-92a9-4a61-9178-619f75dc6f98-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0df3ad69-92a9-4a61-9178-619f75dc6f98\") " pod="openstack/ceilometer-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.033567 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0df3ad69-92a9-4a61-9178-619f75dc6f98-config-data\") pod \"ceilometer-0\" (UID: \"0df3ad69-92a9-4a61-9178-619f75dc6f98\") " pod="openstack/ceilometer-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.033609 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgg4m\" (UniqueName: \"kubernetes.io/projected/0df3ad69-92a9-4a61-9178-619f75dc6f98-kube-api-access-lgg4m\") pod \"ceilometer-0\" (UID: \"0df3ad69-92a9-4a61-9178-619f75dc6f98\") " pod="openstack/ceilometer-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.034613 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0df3ad69-92a9-4a61-9178-619f75dc6f98-run-httpd\") pod \"ceilometer-0\" (UID: \"0df3ad69-92a9-4a61-9178-619f75dc6f98\") " pod="openstack/ceilometer-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.034814 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0df3ad69-92a9-4a61-9178-619f75dc6f98-log-httpd\") pod \"ceilometer-0\" (UID: \"0df3ad69-92a9-4a61-9178-619f75dc6f98\") " 
pod="openstack/ceilometer-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.041285 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0df3ad69-92a9-4a61-9178-619f75dc6f98-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0df3ad69-92a9-4a61-9178-619f75dc6f98\") " pod="openstack/ceilometer-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.041921 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0df3ad69-92a9-4a61-9178-619f75dc6f98-scripts\") pod \"ceilometer-0\" (UID: \"0df3ad69-92a9-4a61-9178-619f75dc6f98\") " pod="openstack/ceilometer-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.051845 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0df3ad69-92a9-4a61-9178-619f75dc6f98-config-data\") pod \"ceilometer-0\" (UID: \"0df3ad69-92a9-4a61-9178-619f75dc6f98\") " pod="openstack/ceilometer-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.059942 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0df3ad69-92a9-4a61-9178-619f75dc6f98-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0df3ad69-92a9-4a61-9178-619f75dc6f98\") " pod="openstack/ceilometer-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.064393 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgg4m\" (UniqueName: \"kubernetes.io/projected/0df3ad69-92a9-4a61-9178-619f75dc6f98-kube-api-access-lgg4m\") pod \"ceilometer-0\" (UID: \"0df3ad69-92a9-4a61-9178-619f75dc6f98\") " pod="openstack/ceilometer-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.138628 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.139398 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.148727 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.160434 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.160635 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.160755 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-f8j7k" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.160858 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.208516 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.224056 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.226673 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.229009 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.234918 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.237706 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.237745 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-41894387-c8f2-4994-9975-d3df0f7781ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41894387-c8f2-4994-9975-d3df0f7781ee\") pod \"glance-default-external-api-0\" (UID: \"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.237789 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.237825 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-logs\") pod \"glance-default-external-api-0\" (UID: 
\"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.237845 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-config-data\") pod \"glance-default-external-api-0\" (UID: \"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.237867 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvgj4\" (UniqueName: \"kubernetes.io/projected/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-kube-api-access-pvgj4\") pod \"glance-default-external-api-0\" (UID: \"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.237893 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.237970 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-scripts\") pod \"glance-default-external-api-0\" (UID: \"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.249906 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.288106 4874 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-7s4g2"] Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.344727 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.344818 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-41894387-c8f2-4994-9975-d3df0f7781ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41894387-c8f2-4994-9975-d3df0f7781ee\") pod \"glance-default-external-api-0\" (UID: \"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.344943 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.346554 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-logs\") pod \"glance-default-external-api-0\" (UID: \"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.346629 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-config-data\") pod \"glance-default-external-api-0\" (UID: \"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\") " 
pod="openstack/glance-default-external-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.346678 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvgj4\" (UniqueName: \"kubernetes.io/projected/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-kube-api-access-pvgj4\") pod \"glance-default-external-api-0\" (UID: \"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.346725 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.346916 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-scripts\") pod \"glance-default-external-api-0\" (UID: \"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.348189 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.350544 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:10 crc 
kubenswrapper[4874]: I0217 16:24:10.351405 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-config-data\") pod \"glance-default-external-api-0\" (UID: \"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.351856 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-scripts\") pod \"glance-default-external-api-0\" (UID: \"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.354312 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.354350 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-logs\") pod \"glance-default-external-api-0\" (UID: \"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.361659 4874 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.361689 4874 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-41894387-c8f2-4994-9975-d3df0f7781ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41894387-c8f2-4994-9975-d3df0f7781ee\") pod \"glance-default-external-api-0\" (UID: \"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b268ef66cc41404bbecd9c0f528b347f586997c429bddc87782c10962eb32faa/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.368782 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvgj4\" (UniqueName: \"kubernetes.io/projected/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-kube-api-access-pvgj4\") pod \"glance-default-external-api-0\" (UID: \"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.426766 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5tbz9"] Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.446499 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.450050 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-41894387-c8f2-4994-9975-d3df0f7781ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41894387-c8f2-4994-9975-d3df0f7781ee\") pod \"glance-default-external-api-0\" (UID: \"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.453826 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.454045 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.454153 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2sg6\" (UniqueName: \"kubernetes.io/projected/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-kube-api-access-p2sg6\") pod \"glance-default-internal-api-0\" (UID: \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.454238 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-logs\") pod \"glance-default-internal-api-0\" (UID: \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.454389 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29\") pod \"glance-default-internal-api-0\" (UID: \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.454577 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.454743 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.454852 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.540778 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c54a6b1-bb00-46fc-91bf-d0c312daceb6" path="/var/lib/kubelet/pods/3c54a6b1-bb00-46fc-91bf-d0c312daceb6/volumes" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.564468 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2sg6\" (UniqueName: \"kubernetes.io/projected/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-kube-api-access-p2sg6\") pod \"glance-default-internal-api-0\" (UID: \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.564518 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-logs\") pod \"glance-default-internal-api-0\" (UID: \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.564575 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29\") pod \"glance-default-internal-api-0\" (UID: \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.564680 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.564759 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.564799 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.564884 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.564947 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.572102 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.572270 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.572706 4874 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.572730 4874 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29\") pod \"glance-default-internal-api-0\" (UID: \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1d818c0b3c780ebc1f2ad700eba392c18b331a7a76e8a2b3fde68119e55723e/globalmount\"" pod="openstack/glance-default-internal-api-0"
Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.580963 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-logs\") pod \"glance-default-internal-api-0\" (UID: \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.582607 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.591182 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.595774 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.601181 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.604153 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.605980 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-f8j7k"
Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.618257 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.620498 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2sg6\" (UniqueName: \"kubernetes.io/projected/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-kube-api-access-p2sg6\") pod \"glance-default-internal-api-0\" (UID: \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.647880 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-k5j4f"]
Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.733820 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29\") pod \"glance-default-internal-api-0\" (UID: \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.781015 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-k5j4f" event={"ID":"96118c9a-6b15-48a8-b6d9-a2146dc0182c","Type":"ContainerStarted","Data":"ddeb04687c203dbcf79ccda521dad1aa8f0eb575f81f570524eea4060b0e273e"}
Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.798695 4874 generic.go:334] "Generic (PLEG): container finished" podID="877610c6-8111-4aff-b0ad-d699834accca" containerID="be3d888f496ea303d53ec8bba341c4f39ebc42c456785903453cbca83f71544d" exitCode=0
Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.798756 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-7s4g2" event={"ID":"877610c6-8111-4aff-b0ad-d699834accca","Type":"ContainerDied","Data":"be3d888f496ea303d53ec8bba341c4f39ebc42c456785903453cbca83f71544d"}
Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.798777 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-7s4g2" event={"ID":"877610c6-8111-4aff-b0ad-d699834accca","Type":"ContainerStarted","Data":"8bcbcd0b4de42e5074d276aed4d3faa2a92b317f11a840ca37219138f42a4468"}
Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.808480 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5tbz9" event={"ID":"dbb238f4-41a6-4299-8d81-887a0957e5d2","Type":"ContainerStarted","Data":"6ac4de15967e5a1ca2d21d97d0f657ccee810b77b3d1eb82b23daa78bc28a998"}
Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.871439 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-jrg8w"]
Feb 17 16:24:10 crc kubenswrapper[4874]: W0217 16:24:10.876116 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10d748cd_cbae_4113_bfed_39c4511a879f.slice/crio-6347e70fd421109527d55a2986912873ede499f6f7abb719b6da2bf8698b292e WatchSource:0}: Error finding container 6347e70fd421109527d55a2986912873ede499f6f7abb719b6da2bf8698b292e: Status 404 returned error can't find the container with id 6347e70fd421109527d55a2986912873ede499f6f7abb719b6da2bf8698b292e
Feb 17 16:24:10 crc kubenswrapper[4874]: I0217 16:24:10.940455 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.212143 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-wdr7t"]
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.280949 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-lw7kx"]
Feb 17 16:24:11 crc kubenswrapper[4874]: W0217 16:24:11.307112 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46bb9425_0e75_4b58_b0f7_f7ad6998255b.slice/crio-afa98c87f1b39c478bb13b85f48adc0eb064885dac4ea7505353fe55352385cf WatchSource:0}: Error finding container afa98c87f1b39c478bb13b85f48adc0eb064885dac4ea7505353fe55352385cf: Status 404 returned error can't find the container with id afa98c87f1b39c478bb13b85f48adc0eb064885dac4ea7505353fe55352385cf
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.319539 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-dhtc8"]
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.335629 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-pfkph"]
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.409131 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-7s4g2"
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.508932 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/877610c6-8111-4aff-b0ad-d699834accca-ovsdbserver-sb\") pod \"877610c6-8111-4aff-b0ad-d699834accca\" (UID: \"877610c6-8111-4aff-b0ad-d699834accca\") "
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.509007 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/877610c6-8111-4aff-b0ad-d699834accca-ovsdbserver-nb\") pod \"877610c6-8111-4aff-b0ad-d699834accca\" (UID: \"877610c6-8111-4aff-b0ad-d699834accca\") "
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.509342 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/877610c6-8111-4aff-b0ad-d699834accca-config\") pod \"877610c6-8111-4aff-b0ad-d699834accca\" (UID: \"877610c6-8111-4aff-b0ad-d699834accca\") "
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.509553 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/877610c6-8111-4aff-b0ad-d699834accca-dns-svc\") pod \"877610c6-8111-4aff-b0ad-d699834accca\" (UID: \"877610c6-8111-4aff-b0ad-d699834accca\") "
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.509612 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/877610c6-8111-4aff-b0ad-d699834accca-dns-swift-storage-0\") pod \"877610c6-8111-4aff-b0ad-d699834accca\" (UID: \"877610c6-8111-4aff-b0ad-d699834accca\") "
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.509667 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plkxk\" (UniqueName: \"kubernetes.io/projected/877610c6-8111-4aff-b0ad-d699834accca-kube-api-access-plkxk\") pod \"877610c6-8111-4aff-b0ad-d699834accca\" (UID: \"877610c6-8111-4aff-b0ad-d699834accca\") "
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.523254 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/877610c6-8111-4aff-b0ad-d699834accca-kube-api-access-plkxk" (OuterVolumeSpecName: "kube-api-access-plkxk") pod "877610c6-8111-4aff-b0ad-d699834accca" (UID: "877610c6-8111-4aff-b0ad-d699834accca"). InnerVolumeSpecName "kube-api-access-plkxk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.567292 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/877610c6-8111-4aff-b0ad-d699834accca-config" (OuterVolumeSpecName: "config") pod "877610c6-8111-4aff-b0ad-d699834accca" (UID: "877610c6-8111-4aff-b0ad-d699834accca"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.577621 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/877610c6-8111-4aff-b0ad-d699834accca-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "877610c6-8111-4aff-b0ad-d699834accca" (UID: "877610c6-8111-4aff-b0ad-d699834accca"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.582010 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/877610c6-8111-4aff-b0ad-d699834accca-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "877610c6-8111-4aff-b0ad-d699834accca" (UID: "877610c6-8111-4aff-b0ad-d699834accca"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.588116 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.598571 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/877610c6-8111-4aff-b0ad-d699834accca-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "877610c6-8111-4aff-b0ad-d699834accca" (UID: "877610c6-8111-4aff-b0ad-d699834accca"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.606471 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.612286 4874 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/877610c6-8111-4aff-b0ad-d699834accca-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.612305 4874 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/877610c6-8111-4aff-b0ad-d699834accca-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.612315 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/877610c6-8111-4aff-b0ad-d699834accca-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.612324 4874 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/877610c6-8111-4aff-b0ad-d699834accca-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.612332 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plkxk\" (UniqueName: \"kubernetes.io/projected/877610c6-8111-4aff-b0ad-d699834accca-kube-api-access-plkxk\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.650768 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/877610c6-8111-4aff-b0ad-d699834accca-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "877610c6-8111-4aff-b0ad-d699834accca" (UID: "877610c6-8111-4aff-b0ad-d699834accca"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.664576 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 17 16:24:11 crc kubenswrapper[4874]: W0217 16:24:11.673423 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod102fd3ea_d4d6_4ca5_81ff_a4bc35c2c67b.slice/crio-b963ebef1d3ff56f8e0b07a89bd01c3afee3d4d20a7a313fa2785d95366dafa5 WatchSource:0}: Error finding container b963ebef1d3ff56f8e0b07a89bd01c3afee3d4d20a7a313fa2785d95366dafa5: Status 404 returned error can't find the container with id b963ebef1d3ff56f8e0b07a89bd01c3afee3d4d20a7a313fa2785d95366dafa5
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.682739 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.746824 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.747459 4874 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/877610c6-8111-4aff-b0ad-d699834accca-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.842602 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.849439 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-jrg8w" event={"ID":"10d748cd-cbae-4113-bfed-39c4511a879f","Type":"ContainerStarted","Data":"6347e70fd421109527d55a2986912873ede499f6f7abb719b6da2bf8698b292e"}
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.852925 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dhtc8" event={"ID":"a4a96348-a1c6-4470-ad3a-d87cc20c8d3c","Type":"ContainerStarted","Data":"25ee14ca0ad2658e6ae68fb44e0a6588b6d8863ee131f3bea69c9a6f7774365a"}
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.855792 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0df3ad69-92a9-4a61-9178-619f75dc6f98","Type":"ContainerStarted","Data":"e09d0e19ffbf990de6f62028148e62198653aaf8fe68fa32daeba09e0210ebf5"}
Feb 17 16:24:11 crc kubenswrapper[4874]: W0217 16:24:11.860219 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc13c5e0_7260_47a6_8f5d_bdef0c815d32.slice/crio-7ed0ce68cce9ae18463655503dd14e168f72a7b03cacd528f4dad5db95cacb63 WatchSource:0}: Error finding container 7ed0ce68cce9ae18463655503dd14e168f72a7b03cacd528f4dad5db95cacb63: Status 404 returned error can't find the container with id 7ed0ce68cce9ae18463655503dd14e168f72a7b03cacd528f4dad5db95cacb63
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.866425 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-pfkph" event={"ID":"46bb9425-0e75-4b58-b0f7-f7ad6998255b","Type":"ContainerStarted","Data":"fed82e020d9b641e58a9873a2d5a5407cabee53064d987fa9cac6d8298d4b1da"}
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.866469 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-pfkph" event={"ID":"46bb9425-0e75-4b58-b0f7-f7ad6998255b","Type":"ContainerStarted","Data":"afa98c87f1b39c478bb13b85f48adc0eb064885dac4ea7505353fe55352385cf"}
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.869758 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lw7kx" event={"ID":"dcf0f49a-5960-41a8-b699-8fb05241ee31","Type":"ContainerStarted","Data":"f8214dc24906039872e33da6fe633ee1f17957f14496b1f0a7b35af58e6ddcb9"}
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.873635 4874 generic.go:334] "Generic (PLEG): container finished" podID="865b8bd3-b179-4e75-a32e-0df273eac5e4" containerID="3d2c921ef76eeb6aa8b1054c759317b6353ff4b478bcb67ea7fc8aa591228e23" exitCode=0
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.873692 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-wdr7t" event={"ID":"865b8bd3-b179-4e75-a32e-0df273eac5e4","Type":"ContainerDied","Data":"3d2c921ef76eeb6aa8b1054c759317b6353ff4b478bcb67ea7fc8aa591228e23"}
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.873710 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-wdr7t" event={"ID":"865b8bd3-b179-4e75-a32e-0df273eac5e4","Type":"ContainerStarted","Data":"023a1a4271634851f7dbf60447f7f4e36eec05b0b76bf49a588778c5c7b476e6"}
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.880889 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b","Type":"ContainerStarted","Data":"b963ebef1d3ff56f8e0b07a89bd01c3afee3d4d20a7a313fa2785d95366dafa5"}
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.887823 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-7s4g2" event={"ID":"877610c6-8111-4aff-b0ad-d699834accca","Type":"ContainerDied","Data":"8bcbcd0b4de42e5074d276aed4d3faa2a92b317f11a840ca37219138f42a4468"}
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.887872 4874 scope.go:117] "RemoveContainer" containerID="be3d888f496ea303d53ec8bba341c4f39ebc42c456785903453cbca83f71544d"
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.887983 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-7s4g2"
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.905418 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5tbz9" event={"ID":"dbb238f4-41a6-4299-8d81-887a0957e5d2","Type":"ContainerStarted","Data":"18ec7ab0bd9add1ef4f51bfcd2a4d3060c430cfb4130a2dab2d3a469e25fbb17"}
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.917849 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-pfkph" podStartSLOduration=2.917827791 podStartE2EDuration="2.917827791s" podCreationTimestamp="2026-02-17 16:24:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:24:11.883877302 +0000 UTC m=+1262.178265873" watchObservedRunningTime="2026-02-17 16:24:11.917827791 +0000 UTC m=+1262.212216352"
Feb 17 16:24:11 crc kubenswrapper[4874]: I0217 16:24:11.962872 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-5tbz9" podStartSLOduration=3.962852271 podStartE2EDuration="3.962852271s" podCreationTimestamp="2026-02-17 16:24:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:24:11.936720543 +0000 UTC m=+1262.231109104" watchObservedRunningTime="2026-02-17 16:24:11.962852271 +0000 UTC m=+1262.257240842"
Feb 17 16:24:12 crc kubenswrapper[4874]: I0217 16:24:12.057771 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-7s4g2"]
Feb 17 16:24:12 crc kubenswrapper[4874]: I0217 16:24:12.102649 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-7s4g2"]
Feb 17 16:24:12 crc kubenswrapper[4874]: I0217 16:24:12.474763 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="877610c6-8111-4aff-b0ad-d699834accca" path="/var/lib/kubelet/pods/877610c6-8111-4aff-b0ad-d699834accca/volumes"
Feb 17 16:24:12 crc kubenswrapper[4874]: I0217 16:24:12.930563 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-wdr7t" event={"ID":"865b8bd3-b179-4e75-a32e-0df273eac5e4","Type":"ContainerStarted","Data":"cbcb82ad49ece214cc4907d734212a1feed19f20ea36e6626aa160a259b2aaaa"}
Feb 17 16:24:12 crc kubenswrapper[4874]: I0217 16:24:12.931096 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-wdr7t"
Feb 17 16:24:12 crc kubenswrapper[4874]: I0217 16:24:12.936158 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b","Type":"ContainerStarted","Data":"c92e1f8322644c644a4ece6e62daa0ce1c839baba70ebe251f513334653ac695"}
Feb 17 16:24:12 crc kubenswrapper[4874]: I0217 16:24:12.938391 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fc13c5e0-7260-47a6-8f5d-bdef0c815d32","Type":"ContainerStarted","Data":"7ed0ce68cce9ae18463655503dd14e168f72a7b03cacd528f4dad5db95cacb63"}
Feb 17 16:24:12 crc kubenswrapper[4874]: I0217 16:24:12.952623 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-wdr7t" podStartSLOduration=3.9526066760000003 podStartE2EDuration="3.952606676s" podCreationTimestamp="2026-02-17 16:24:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:24:12.94867662 +0000 UTC m=+1263.243065181" watchObservedRunningTime="2026-02-17 16:24:12.952606676 +0000 UTC m=+1263.246995237"
Feb 17 16:24:13 crc kubenswrapper[4874]: I0217 16:24:13.998366 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b","Type":"ContainerStarted","Data":"2fc05c44769091602071d271bd065dadb82cc721f3dfbfe477ec6a04a5d177a1"}
Feb 17 16:24:13 crc kubenswrapper[4874]: I0217 16:24:13.998709 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b" containerName="glance-log" containerID="cri-o://c92e1f8322644c644a4ece6e62daa0ce1c839baba70ebe251f513334653ac695" gracePeriod=30
Feb 17 16:24:13 crc kubenswrapper[4874]: I0217 16:24:13.998883 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b" containerName="glance-httpd" containerID="cri-o://2fc05c44769091602071d271bd065dadb82cc721f3dfbfe477ec6a04a5d177a1" gracePeriod=30
Feb 17 16:24:14 crc kubenswrapper[4874]: I0217 16:24:14.010659 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fc13c5e0-7260-47a6-8f5d-bdef0c815d32" containerName="glance-log" containerID="cri-o://d4e35323655fb45453cf967fa14223badd1e96e0bc3d6caa6a1d023dd2026f73" gracePeriod=30
Feb 17 16:24:14 crc kubenswrapper[4874]: I0217 16:24:14.010852 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fc13c5e0-7260-47a6-8f5d-bdef0c815d32" containerName="glance-httpd" containerID="cri-o://d4d1d7951cf97c164a99f5a4e2d0bfb8c312f10dc4fdb624a7f398023145428e" gracePeriod=30
Feb 17 16:24:14 crc kubenswrapper[4874]: I0217 16:24:14.010952 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fc13c5e0-7260-47a6-8f5d-bdef0c815d32","Type":"ContainerStarted","Data":"d4d1d7951cf97c164a99f5a4e2d0bfb8c312f10dc4fdb624a7f398023145428e"}
Feb 17 16:24:14 crc kubenswrapper[4874]: I0217 16:24:14.010983 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fc13c5e0-7260-47a6-8f5d-bdef0c815d32","Type":"ContainerStarted","Data":"d4e35323655fb45453cf967fa14223badd1e96e0bc3d6caa6a1d023dd2026f73"}
Feb 17 16:24:14 crc kubenswrapper[4874]: I0217 16:24:14.040800 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.040783664 podStartE2EDuration="5.040783664s" podCreationTimestamp="2026-02-17 16:24:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:24:14.040348893 +0000 UTC m=+1264.334737464" watchObservedRunningTime="2026-02-17 16:24:14.040783664 +0000 UTC m=+1264.335172225"
Feb 17 16:24:14 crc kubenswrapper[4874]: I0217 16:24:14.085563 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.085544837 podStartE2EDuration="5.085544837s" podCreationTimestamp="2026-02-17 16:24:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:24:14.081515629 +0000 UTC m=+1264.375904210" watchObservedRunningTime="2026-02-17 16:24:14.085544837 +0000 UTC m=+1264.379933408"
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.003106 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.013884 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.060534 4874 generic.go:334] "Generic (PLEG): container finished" podID="102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b" containerID="2fc05c44769091602071d271bd065dadb82cc721f3dfbfe477ec6a04a5d177a1" exitCode=143
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.060835 4874 generic.go:334] "Generic (PLEG): container finished" podID="102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b" containerID="c92e1f8322644c644a4ece6e62daa0ce1c839baba70ebe251f513334653ac695" exitCode=143
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.060878 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b","Type":"ContainerDied","Data":"2fc05c44769091602071d271bd065dadb82cc721f3dfbfe477ec6a04a5d177a1"}
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.060912 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b","Type":"ContainerDied","Data":"c92e1f8322644c644a4ece6e62daa0ce1c839baba70ebe251f513334653ac695"}
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.060923 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b","Type":"ContainerDied","Data":"b963ebef1d3ff56f8e0b07a89bd01c3afee3d4d20a7a313fa2785d95366dafa5"}
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.060939 4874 scope.go:117] "RemoveContainer" containerID="2fc05c44769091602071d271bd065dadb82cc721f3dfbfe477ec6a04a5d177a1"
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.060976 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.072531 4874 generic.go:334] "Generic (PLEG): container finished" podID="fc13c5e0-7260-47a6-8f5d-bdef0c815d32" containerID="d4d1d7951cf97c164a99f5a4e2d0bfb8c312f10dc4fdb624a7f398023145428e" exitCode=143
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.072560 4874 generic.go:334] "Generic (PLEG): container finished" podID="fc13c5e0-7260-47a6-8f5d-bdef0c815d32" containerID="d4e35323655fb45453cf967fa14223badd1e96e0bc3d6caa6a1d023dd2026f73" exitCode=143
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.072580 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fc13c5e0-7260-47a6-8f5d-bdef0c815d32","Type":"ContainerDied","Data":"d4d1d7951cf97c164a99f5a4e2d0bfb8c312f10dc4fdb624a7f398023145428e"}
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.072608 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fc13c5e0-7260-47a6-8f5d-bdef0c815d32","Type":"ContainerDied","Data":"d4e35323655fb45453cf967fa14223badd1e96e0bc3d6caa6a1d023dd2026f73"}
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.072618 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fc13c5e0-7260-47a6-8f5d-bdef0c815d32","Type":"ContainerDied","Data":"7ed0ce68cce9ae18463655503dd14e168f72a7b03cacd528f4dad5db95cacb63"}
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.072669 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.155085 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-httpd-run\") pod \"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\" (UID: \"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\") "
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.155190 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-internal-tls-certs\") pod \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\" (UID: \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\") "
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.155230 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-config-data\") pod \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\" (UID: \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\") "
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.155464 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41894387-c8f2-4994-9975-d3df0f7781ee\") pod \"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\" (UID: \"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\") "
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.155517 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-scripts\") pod \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\" (UID: \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\") "
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.155540 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2sg6\" (UniqueName: \"kubernetes.io/projected/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-kube-api-access-p2sg6\") pod \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\" (UID: \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\") "
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.155558 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-config-data\") pod \"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\" (UID: \"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\") "
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.155596 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-logs\") pod \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\" (UID: \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\") "
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.155584 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b" (UID: "102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.155618 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-public-tls-certs\") pod \"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\" (UID: \"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\") "
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.155636 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-httpd-run\") pod \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\" (UID: \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\") "
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.155667 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvgj4\" (UniqueName: \"kubernetes.io/projected/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-kube-api-access-pvgj4\") pod \"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\" (UID: \"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\") "
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.155685 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-scripts\") pod \"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\" (UID: \"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\") "
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.155704 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-combined-ca-bundle\") pod \"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\" (UID: \"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\") "
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.155723 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-logs\") pod \"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\" (UID: \"102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b\") "
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.155805 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-combined-ca-bundle\") pod \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\" (UID: \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\") "
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.155862 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29\") pod \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\" (UID: \"fc13c5e0-7260-47a6-8f5d-bdef0c815d32\") "
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.156339 4874 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.157821 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-logs" (OuterVolumeSpecName: "logs") pod "fc13c5e0-7260-47a6-8f5d-bdef0c815d32" (UID: "fc13c5e0-7260-47a6-8f5d-bdef0c815d32"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.157856 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fc13c5e0-7260-47a6-8f5d-bdef0c815d32" (UID: "fc13c5e0-7260-47a6-8f5d-bdef0c815d32"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.158051 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-logs" (OuterVolumeSpecName: "logs") pod "102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b" (UID: "102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.162114 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-kube-api-access-pvgj4" (OuterVolumeSpecName: "kube-api-access-pvgj4") pod "102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b" (UID: "102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b"). InnerVolumeSpecName "kube-api-access-pvgj4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.164533 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-scripts" (OuterVolumeSpecName: "scripts") pod "102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b" (UID: "102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.169574 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-kube-api-access-p2sg6" (OuterVolumeSpecName: "kube-api-access-p2sg6") pod "fc13c5e0-7260-47a6-8f5d-bdef0c815d32" (UID: "fc13c5e0-7260-47a6-8f5d-bdef0c815d32"). InnerVolumeSpecName "kube-api-access-p2sg6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.176613 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-scripts" (OuterVolumeSpecName: "scripts") pod "fc13c5e0-7260-47a6-8f5d-bdef0c815d32" (UID: "fc13c5e0-7260-47a6-8f5d-bdef0c815d32"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.198850 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41894387-c8f2-4994-9975-d3df0f7781ee" (OuterVolumeSpecName: "glance") pod "102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b" (UID: "102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b"). InnerVolumeSpecName "pvc-41894387-c8f2-4994-9975-d3df0f7781ee". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.201146 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29" (OuterVolumeSpecName: "glance") pod "fc13c5e0-7260-47a6-8f5d-bdef0c815d32" (UID: "fc13c5e0-7260-47a6-8f5d-bdef0c815d32"). InnerVolumeSpecName "pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.207616 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b" (UID: "102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.212966 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc13c5e0-7260-47a6-8f5d-bdef0c815d32" (UID: "fc13c5e0-7260-47a6-8f5d-bdef0c815d32"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.225731 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-config-data" (OuterVolumeSpecName: "config-data") pod "102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b" (UID: "102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.235472 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b" (UID: "102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.243467 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-config-data" (OuterVolumeSpecName: "config-data") pod "fc13c5e0-7260-47a6-8f5d-bdef0c815d32" (UID: "fc13c5e0-7260-47a6-8f5d-bdef0c815d32"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.245352 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fc13c5e0-7260-47a6-8f5d-bdef0c815d32" (UID: "fc13c5e0-7260-47a6-8f5d-bdef0c815d32"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.258669 4874 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.258704 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.258737 4874 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-41894387-c8f2-4994-9975-d3df0f7781ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41894387-c8f2-4994-9975-d3df0f7781ee\") on node \"crc\" " Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.258749 4874 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.258759 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2sg6\" (UniqueName: \"kubernetes.io/projected/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-kube-api-access-p2sg6\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.258770 4874 reconciler_common.go:293] "Volume detached for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.258779 4874 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.258788 4874 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.258796 4874 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.258842 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvgj4\" (UniqueName: \"kubernetes.io/projected/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-kube-api-access-pvgj4\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.258850 4874 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.258857 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.258865 4874 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:15 crc 
kubenswrapper[4874]: I0217 16:24:15.258873 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc13c5e0-7260-47a6-8f5d-bdef0c815d32-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.258888 4874 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29\") on node \"crc\" " Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.303280 4874 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.303465 4874 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29") on node "crc" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.307865 4874 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.308062 4874 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-41894387-c8f2-4994-9975-d3df0f7781ee" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41894387-c8f2-4994-9975-d3df0f7781ee") on node "crc" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.363224 4874 reconciler_common.go:293] "Volume detached for volume \"pvc-41894387-c8f2-4994-9975-d3df0f7781ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41894387-c8f2-4994-9975-d3df0f7781ee\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.363708 4874 reconciler_common.go:293] "Volume detached for volume \"pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.408759 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.437310 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.452146 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.463266 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.479106 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:24:15 crc kubenswrapper[4874]: E0217 16:24:15.479596 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b" containerName="glance-httpd" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 
16:24:15.479613 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b" containerName="glance-httpd" Feb 17 16:24:15 crc kubenswrapper[4874]: E0217 16:24:15.479626 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b" containerName="glance-log" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.479633 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b" containerName="glance-log" Feb 17 16:24:15 crc kubenswrapper[4874]: E0217 16:24:15.479645 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="877610c6-8111-4aff-b0ad-d699834accca" containerName="init" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.479650 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="877610c6-8111-4aff-b0ad-d699834accca" containerName="init" Feb 17 16:24:15 crc kubenswrapper[4874]: E0217 16:24:15.479669 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc13c5e0-7260-47a6-8f5d-bdef0c815d32" containerName="glance-log" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.479675 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc13c5e0-7260-47a6-8f5d-bdef0c815d32" containerName="glance-log" Feb 17 16:24:15 crc kubenswrapper[4874]: E0217 16:24:15.479688 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc13c5e0-7260-47a6-8f5d-bdef0c815d32" containerName="glance-httpd" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.479694 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc13c5e0-7260-47a6-8f5d-bdef0c815d32" containerName="glance-httpd" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.479879 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b" containerName="glance-httpd" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.479898 4874 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="877610c6-8111-4aff-b0ad-d699834accca" containerName="init" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.479910 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc13c5e0-7260-47a6-8f5d-bdef0c815d32" containerName="glance-log" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.479927 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc13c5e0-7260-47a6-8f5d-bdef0c815d32" containerName="glance-httpd" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.479936 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b" containerName="glance-log" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.481060 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.483523 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.483662 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-f8j7k" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.489559 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.489721 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.489730 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.504150 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.506316 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.509104 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.513501 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.517388 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.572035 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/57e36f0d-729f-4f32-9685-6453b7c550ac-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"57e36f0d-729f-4f32-9685-6453b7c550ac\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.572337 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e36f0d-729f-4f32-9685-6453b7c550ac-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"57e36f0d-729f-4f32-9685-6453b7c550ac\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.572495 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-41894387-c8f2-4994-9975-d3df0f7781ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41894387-c8f2-4994-9975-d3df0f7781ee\") pod \"glance-default-external-api-0\" (UID: \"57e36f0d-729f-4f32-9685-6453b7c550ac\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.572521 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n8tq\" (UniqueName: \"kubernetes.io/projected/57e36f0d-729f-4f32-9685-6453b7c550ac-kube-api-access-9n8tq\") pod \"glance-default-external-api-0\" (UID: \"57e36f0d-729f-4f32-9685-6453b7c550ac\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.572545 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/57e36f0d-729f-4f32-9685-6453b7c550ac-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"57e36f0d-729f-4f32-9685-6453b7c550ac\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.572614 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57e36f0d-729f-4f32-9685-6453b7c550ac-logs\") pod \"glance-default-external-api-0\" (UID: \"57e36f0d-729f-4f32-9685-6453b7c550ac\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.572709 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57e36f0d-729f-4f32-9685-6453b7c550ac-scripts\") pod \"glance-default-external-api-0\" (UID: \"57e36f0d-729f-4f32-9685-6453b7c550ac\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.572781 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57e36f0d-729f-4f32-9685-6453b7c550ac-config-data\") pod \"glance-default-external-api-0\" (UID: \"57e36f0d-729f-4f32-9685-6453b7c550ac\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.678290 4874 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e36f0d-729f-4f32-9685-6453b7c550ac-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"57e36f0d-729f-4f32-9685-6453b7c550ac\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.678597 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.678635 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldgb2\" (UniqueName: \"kubernetes.io/projected/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-kube-api-access-ldgb2\") pod \"glance-default-internal-api-0\" (UID: \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.678698 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-41894387-c8f2-4994-9975-d3df0f7781ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41894387-c8f2-4994-9975-d3df0f7781ee\") pod \"glance-default-external-api-0\" (UID: \"57e36f0d-729f-4f32-9685-6453b7c550ac\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.678718 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n8tq\" (UniqueName: \"kubernetes.io/projected/57e36f0d-729f-4f32-9685-6453b7c550ac-kube-api-access-9n8tq\") pod \"glance-default-external-api-0\" (UID: \"57e36f0d-729f-4f32-9685-6453b7c550ac\") " 
pod="openstack/glance-default-external-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.678933 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/57e36f0d-729f-4f32-9685-6453b7c550ac-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"57e36f0d-729f-4f32-9685-6453b7c550ac\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.678980 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57e36f0d-729f-4f32-9685-6453b7c550ac-logs\") pod \"glance-default-external-api-0\" (UID: \"57e36f0d-729f-4f32-9685-6453b7c550ac\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.679130 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57e36f0d-729f-4f32-9685-6453b7c550ac-scripts\") pod \"glance-default-external-api-0\" (UID: \"57e36f0d-729f-4f32-9685-6453b7c550ac\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.679216 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57e36f0d-729f-4f32-9685-6453b7c550ac-config-data\") pod \"glance-default-external-api-0\" (UID: \"57e36f0d-729f-4f32-9685-6453b7c550ac\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.679303 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:15 crc 
kubenswrapper[4874]: I0217 16:24:15.679335 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.679429 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.679525 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-logs\") pod \"glance-default-internal-api-0\" (UID: \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.679555 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29\") pod \"glance-default-internal-api-0\" (UID: \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.679615 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/57e36f0d-729f-4f32-9685-6453b7c550ac-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"57e36f0d-729f-4f32-9685-6453b7c550ac\") " 
pod="openstack/glance-default-external-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.679784 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.679981 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57e36f0d-729f-4f32-9685-6453b7c550ac-logs\") pod \"glance-default-external-api-0\" (UID: \"57e36f0d-729f-4f32-9685-6453b7c550ac\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.680297 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/57e36f0d-729f-4f32-9685-6453b7c550ac-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"57e36f0d-729f-4f32-9685-6453b7c550ac\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.682305 4874 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.682379 4874 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-41894387-c8f2-4994-9975-d3df0f7781ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41894387-c8f2-4994-9975-d3df0f7781ee\") pod \"glance-default-external-api-0\" (UID: \"57e36f0d-729f-4f32-9685-6453b7c550ac\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b268ef66cc41404bbecd9c0f528b347f586997c429bddc87782c10962eb32faa/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.685890 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57e36f0d-729f-4f32-9685-6453b7c550ac-scripts\") pod \"glance-default-external-api-0\" (UID: \"57e36f0d-729f-4f32-9685-6453b7c550ac\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.696311 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57e36f0d-729f-4f32-9685-6453b7c550ac-config-data\") pod \"glance-default-external-api-0\" (UID: \"57e36f0d-729f-4f32-9685-6453b7c550ac\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.696512 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e36f0d-729f-4f32-9685-6453b7c550ac-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"57e36f0d-729f-4f32-9685-6453b7c550ac\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.696891 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/57e36f0d-729f-4f32-9685-6453b7c550ac-public-tls-certs\") pod 
\"glance-default-external-api-0\" (UID: \"57e36f0d-729f-4f32-9685-6453b7c550ac\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.704931 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n8tq\" (UniqueName: \"kubernetes.io/projected/57e36f0d-729f-4f32-9685-6453b7c550ac-kube-api-access-9n8tq\") pod \"glance-default-external-api-0\" (UID: \"57e36f0d-729f-4f32-9685-6453b7c550ac\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.724855 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-41894387-c8f2-4994-9975-d3df0f7781ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41894387-c8f2-4994-9975-d3df0f7781ee\") pod \"glance-default-external-api-0\" (UID: \"57e36f0d-729f-4f32-9685-6453b7c550ac\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.781393 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldgb2\" (UniqueName: \"kubernetes.io/projected/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-kube-api-access-ldgb2\") pod \"glance-default-internal-api-0\" (UID: \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.781554 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.781581 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.781616 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.781638 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-logs\") pod \"glance-default-internal-api-0\" (UID: \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.781672 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29\") pod \"glance-default-internal-api-0\" (UID: \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.781731 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.781821 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.782283 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.783063 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-logs\") pod \"glance-default-internal-api-0\" (UID: \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.785317 4874 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.785350 4874 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29\") pod \"glance-default-internal-api-0\" (UID: \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1d818c0b3c780ebc1f2ad700eba392c18b331a7a76e8a2b3fde68119e55723e/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.785897 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.785927 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.786390 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.787148 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.798316 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldgb2\" (UniqueName: \"kubernetes.io/projected/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-kube-api-access-ldgb2\") pod \"glance-default-internal-api-0\" (UID: \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.813319 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.840959 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29\") pod \"glance-default-internal-api-0\" (UID: \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:15 crc kubenswrapper[4874]: I0217 16:24:15.880000 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:24:16 crc kubenswrapper[4874]: I0217 16:24:16.093311 4874 generic.go:334] "Generic (PLEG): container finished" podID="dbb238f4-41a6-4299-8d81-887a0957e5d2" containerID="18ec7ab0bd9add1ef4f51bfcd2a4d3060c430cfb4130a2dab2d3a469e25fbb17" exitCode=0 Feb 17 16:24:16 crc kubenswrapper[4874]: I0217 16:24:16.093351 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5tbz9" event={"ID":"dbb238f4-41a6-4299-8d81-887a0957e5d2","Type":"ContainerDied","Data":"18ec7ab0bd9add1ef4f51bfcd2a4d3060c430cfb4130a2dab2d3a469e25fbb17"} Feb 17 16:24:16 crc kubenswrapper[4874]: I0217 16:24:16.469745 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b" path="/var/lib/kubelet/pods/102fd3ea-d4d6-4ca5-81ff-a4bc35c2c67b/volumes" Feb 17 16:24:16 crc kubenswrapper[4874]: I0217 16:24:16.470766 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc13c5e0-7260-47a6-8f5d-bdef0c815d32" path="/var/lib/kubelet/pods/fc13c5e0-7260-47a6-8f5d-bdef0c815d32/volumes" Feb 17 16:24:19 crc kubenswrapper[4874]: I0217 16:24:19.958220 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-wdr7t" Feb 17 16:24:20 crc kubenswrapper[4874]: I0217 16:24:20.087819 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-z9lkg"] Feb 17 16:24:20 crc kubenswrapper[4874]: I0217 16:24:20.088131 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f59b8f679-z9lkg" podUID="a8acd227-75af-40c7-ab98-b1e5b4bcab38" containerName="dnsmasq-dns" containerID="cri-o://948fd4a55c7ab06ef10bbf950ec52f4c6f033dd7de796b988ad2bfab487415ca" gracePeriod=10 Feb 17 16:24:21 crc kubenswrapper[4874]: I0217 16:24:21.166826 4874 generic.go:334] "Generic (PLEG): container finished" 
podID="a8acd227-75af-40c7-ab98-b1e5b4bcab38" containerID="948fd4a55c7ab06ef10bbf950ec52f4c6f033dd7de796b988ad2bfab487415ca" exitCode=0 Feb 17 16:24:21 crc kubenswrapper[4874]: I0217 16:24:21.166870 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-z9lkg" event={"ID":"a8acd227-75af-40c7-ab98-b1e5b4bcab38","Type":"ContainerDied","Data":"948fd4a55c7ab06ef10bbf950ec52f4c6f033dd7de796b988ad2bfab487415ca"} Feb 17 16:24:23 crc kubenswrapper[4874]: I0217 16:24:23.261878 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-z9lkg" podUID="a8acd227-75af-40c7-ab98-b1e5b4bcab38" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.178:5353: connect: connection refused" Feb 17 16:24:28 crc kubenswrapper[4874]: I0217 16:24:28.263609 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-z9lkg" podUID="a8acd227-75af-40c7-ab98-b1e5b4bcab38" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.178:5353: connect: connection refused" Feb 17 16:24:28 crc kubenswrapper[4874]: E0217 16:24:28.403809 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Feb 17 16:24:28 crc kubenswrapper[4874]: E0217 16:24:28.404237 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nt647,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-db-sync-lw7kx_openstack(dcf0f49a-5960-41a8-b699-8fb05241ee31): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:24:28 crc kubenswrapper[4874]: E0217 16:24:28.405695 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-lw7kx" podUID="dcf0f49a-5960-41a8-b699-8fb05241ee31" Feb 17 16:24:28 crc kubenswrapper[4874]: E0217 16:24:28.831506 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 17 16:24:28 crc kubenswrapper[4874]: E0217 16:24:28.831708 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n89h5ddh595h5d6hbdhd6h7ch56dh5d7h595h8h577h56fhcdh64hf7h685h5c7h584h68ch56bh55bh55fh54dhbfh5cfhcch5d8h67bh5fch67chdbq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lgg4m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0df3ad69-92a9-4a61-9178-619f75dc6f98): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:24:29 crc kubenswrapper[4874]: E0217 16:24:29.286676 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-lw7kx" podUID="dcf0f49a-5960-41a8-b699-8fb05241ee31" Feb 17 16:24:29 crc kubenswrapper[4874]: E0217 16:24:29.406653 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 17 16:24:29 crc kubenswrapper[4874]: E0217 16:24:29.406799 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nzzsz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-dhtc8_openstack(a4a96348-a1c6-4470-ad3a-d87cc20c8d3c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:24:29 crc kubenswrapper[4874]: E0217 16:24:29.407959 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-dhtc8" 
podUID="a4a96348-a1c6-4470-ad3a-d87cc20c8d3c" Feb 17 16:24:29 crc kubenswrapper[4874]: E0217 16:24:29.837640 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Feb 17 16:24:29 crc kubenswrapper[4874]: E0217 16:24:29.838529 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f9l9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],
Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-k5j4f_openstack(96118c9a-6b15-48a8-b6d9-a2146dc0182c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:24:29 crc kubenswrapper[4874]: E0217 16:24:29.839818 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-k5j4f" podUID="96118c9a-6b15-48a8-b6d9-a2146dc0182c" Feb 17 16:24:30 crc kubenswrapper[4874]: E0217 16:24:30.299970 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-k5j4f" podUID="96118c9a-6b15-48a8-b6d9-a2146dc0182c" Feb 17 16:24:30 crc kubenswrapper[4874]: E0217 16:24:30.300629 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-dhtc8" podUID="a4a96348-a1c6-4470-ad3a-d87cc20c8d3c" Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.056859 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-5tbz9" Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.067021 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-z9lkg" Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.135494 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a8acd227-75af-40c7-ab98-b1e5b4bcab38-ovsdbserver-nb\") pod \"a8acd227-75af-40c7-ab98-b1e5b4bcab38\" (UID: \"a8acd227-75af-40c7-ab98-b1e5b4bcab38\") " Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.135683 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a8acd227-75af-40c7-ab98-b1e5b4bcab38-dns-swift-storage-0\") pod \"a8acd227-75af-40c7-ab98-b1e5b4bcab38\" (UID: \"a8acd227-75af-40c7-ab98-b1e5b4bcab38\") " Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.135734 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8acd227-75af-40c7-ab98-b1e5b4bcab38-config\") pod \"a8acd227-75af-40c7-ab98-b1e5b4bcab38\" (UID: \"a8acd227-75af-40c7-ab98-b1e5b4bcab38\") " Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.135775 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbb238f4-41a6-4299-8d81-887a0957e5d2-scripts\") pod \"dbb238f4-41a6-4299-8d81-887a0957e5d2\" (UID: \"dbb238f4-41a6-4299-8d81-887a0957e5d2\") " Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.135808 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dbb238f4-41a6-4299-8d81-887a0957e5d2-credential-keys\") pod \"dbb238f4-41a6-4299-8d81-887a0957e5d2\" (UID: \"dbb238f4-41a6-4299-8d81-887a0957e5d2\") 
" Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.135843 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r64jv\" (UniqueName: \"kubernetes.io/projected/dbb238f4-41a6-4299-8d81-887a0957e5d2-kube-api-access-r64jv\") pod \"dbb238f4-41a6-4299-8d81-887a0957e5d2\" (UID: \"dbb238f4-41a6-4299-8d81-887a0957e5d2\") " Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.135869 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dbb238f4-41a6-4299-8d81-887a0957e5d2-fernet-keys\") pod \"dbb238f4-41a6-4299-8d81-887a0957e5d2\" (UID: \"dbb238f4-41a6-4299-8d81-887a0957e5d2\") " Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.135912 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8acd227-75af-40c7-ab98-b1e5b4bcab38-dns-svc\") pod \"a8acd227-75af-40c7-ab98-b1e5b4bcab38\" (UID: \"a8acd227-75af-40c7-ab98-b1e5b4bcab38\") " Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.135934 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbb238f4-41a6-4299-8d81-887a0957e5d2-combined-ca-bundle\") pod \"dbb238f4-41a6-4299-8d81-887a0957e5d2\" (UID: \"dbb238f4-41a6-4299-8d81-887a0957e5d2\") " Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.135966 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbb238f4-41a6-4299-8d81-887a0957e5d2-config-data\") pod \"dbb238f4-41a6-4299-8d81-887a0957e5d2\" (UID: \"dbb238f4-41a6-4299-8d81-887a0957e5d2\") " Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.136009 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpd24\" (UniqueName: 
\"kubernetes.io/projected/a8acd227-75af-40c7-ab98-b1e5b4bcab38-kube-api-access-zpd24\") pod \"a8acd227-75af-40c7-ab98-b1e5b4bcab38\" (UID: \"a8acd227-75af-40c7-ab98-b1e5b4bcab38\") " Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.136123 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a8acd227-75af-40c7-ab98-b1e5b4bcab38-ovsdbserver-sb\") pod \"a8acd227-75af-40c7-ab98-b1e5b4bcab38\" (UID: \"a8acd227-75af-40c7-ab98-b1e5b4bcab38\") " Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.159324 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbb238f4-41a6-4299-8d81-887a0957e5d2-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "dbb238f4-41a6-4299-8d81-887a0957e5d2" (UID: "dbb238f4-41a6-4299-8d81-887a0957e5d2"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.159443 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbb238f4-41a6-4299-8d81-887a0957e5d2-scripts" (OuterVolumeSpecName: "scripts") pod "dbb238f4-41a6-4299-8d81-887a0957e5d2" (UID: "dbb238f4-41a6-4299-8d81-887a0957e5d2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.160160 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbb238f4-41a6-4299-8d81-887a0957e5d2-kube-api-access-r64jv" (OuterVolumeSpecName: "kube-api-access-r64jv") pod "dbb238f4-41a6-4299-8d81-887a0957e5d2" (UID: "dbb238f4-41a6-4299-8d81-887a0957e5d2"). InnerVolumeSpecName "kube-api-access-r64jv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.165272 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbb238f4-41a6-4299-8d81-887a0957e5d2-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "dbb238f4-41a6-4299-8d81-887a0957e5d2" (UID: "dbb238f4-41a6-4299-8d81-887a0957e5d2"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.165684 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8acd227-75af-40c7-ab98-b1e5b4bcab38-kube-api-access-zpd24" (OuterVolumeSpecName: "kube-api-access-zpd24") pod "a8acd227-75af-40c7-ab98-b1e5b4bcab38" (UID: "a8acd227-75af-40c7-ab98-b1e5b4bcab38"). InnerVolumeSpecName "kube-api-access-zpd24". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.205233 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbb238f4-41a6-4299-8d81-887a0957e5d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dbb238f4-41a6-4299-8d81-887a0957e5d2" (UID: "dbb238f4-41a6-4299-8d81-887a0957e5d2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.229203 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbb238f4-41a6-4299-8d81-887a0957e5d2-config-data" (OuterVolumeSpecName: "config-data") pod "dbb238f4-41a6-4299-8d81-887a0957e5d2" (UID: "dbb238f4-41a6-4299-8d81-887a0957e5d2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.237703 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8acd227-75af-40c7-ab98-b1e5b4bcab38-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a8acd227-75af-40c7-ab98-b1e5b4bcab38" (UID: "a8acd227-75af-40c7-ab98-b1e5b4bcab38"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.239209 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbb238f4-41a6-4299-8d81-887a0957e5d2-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.239231 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpd24\" (UniqueName: \"kubernetes.io/projected/a8acd227-75af-40c7-ab98-b1e5b4bcab38-kube-api-access-zpd24\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.239241 4874 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a8acd227-75af-40c7-ab98-b1e5b4bcab38-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.239249 4874 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbb238f4-41a6-4299-8d81-887a0957e5d2-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.239257 4874 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dbb238f4-41a6-4299-8d81-887a0957e5d2-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.239267 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r64jv\" (UniqueName: 
\"kubernetes.io/projected/dbb238f4-41a6-4299-8d81-887a0957e5d2-kube-api-access-r64jv\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.239277 4874 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dbb238f4-41a6-4299-8d81-887a0957e5d2-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.239284 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbb238f4-41a6-4299-8d81-887a0957e5d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.244861 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8acd227-75af-40c7-ab98-b1e5b4bcab38-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a8acd227-75af-40c7-ab98-b1e5b4bcab38" (UID: "a8acd227-75af-40c7-ab98-b1e5b4bcab38"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.245764 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8acd227-75af-40c7-ab98-b1e5b4bcab38-config" (OuterVolumeSpecName: "config") pod "a8acd227-75af-40c7-ab98-b1e5b4bcab38" (UID: "a8acd227-75af-40c7-ab98-b1e5b4bcab38"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.258799 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8acd227-75af-40c7-ab98-b1e5b4bcab38-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a8acd227-75af-40c7-ab98-b1e5b4bcab38" (UID: "a8acd227-75af-40c7-ab98-b1e5b4bcab38"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.261890 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-z9lkg" podUID="a8acd227-75af-40c7-ab98-b1e5b4bcab38" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.178:5353: i/o timeout" Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.261973 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-z9lkg" Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.268682 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8acd227-75af-40c7-ab98-b1e5b4bcab38-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a8acd227-75af-40c7-ab98-b1e5b4bcab38" (UID: "a8acd227-75af-40c7-ab98-b1e5b4bcab38"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.341330 4874 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a8acd227-75af-40c7-ab98-b1e5b4bcab38-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.341367 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8acd227-75af-40c7-ab98-b1e5b4bcab38-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.341379 4874 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8acd227-75af-40c7-ab98-b1e5b4bcab38-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.341387 4874 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a8acd227-75af-40c7-ab98-b1e5b4bcab38-ovsdbserver-sb\") on node \"crc\" 
DevicePath \"\"" Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.374219 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5tbz9" event={"ID":"dbb238f4-41a6-4299-8d81-887a0957e5d2","Type":"ContainerDied","Data":"6ac4de15967e5a1ca2d21d97d0f657ccee810b77b3d1eb82b23daa78bc28a998"} Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.374259 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ac4de15967e5a1ca2d21d97d0f657ccee810b77b3d1eb82b23daa78bc28a998" Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.374286 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5tbz9" Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.377232 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-z9lkg" event={"ID":"a8acd227-75af-40c7-ab98-b1e5b4bcab38","Type":"ContainerDied","Data":"8965461e20ae4d688bf10a4a62b9d08ade35e9f329f6bb68229fad27b0661643"} Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.377300 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-z9lkg" Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.420906 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-z9lkg"] Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.429169 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-z9lkg"] Feb 17 16:24:38 crc kubenswrapper[4874]: I0217 16:24:38.474859 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8acd227-75af-40c7-ab98-b1e5b4bcab38" path="/var/lib/kubelet/pods/a8acd227-75af-40c7-ab98-b1e5b4bcab38/volumes" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.159342 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-5tbz9"] Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.168817 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-5tbz9"] Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.251259 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-6cx5g"] Feb 17 16:24:39 crc kubenswrapper[4874]: E0217 16:24:39.252296 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8acd227-75af-40c7-ab98-b1e5b4bcab38" containerName="init" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.252322 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8acd227-75af-40c7-ab98-b1e5b4bcab38" containerName="init" Feb 17 16:24:39 crc kubenswrapper[4874]: E0217 16:24:39.252348 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbb238f4-41a6-4299-8d81-887a0957e5d2" containerName="keystone-bootstrap" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.252355 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbb238f4-41a6-4299-8d81-887a0957e5d2" containerName="keystone-bootstrap" Feb 17 16:24:39 crc kubenswrapper[4874]: E0217 16:24:39.252374 4874 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="a8acd227-75af-40c7-ab98-b1e5b4bcab38" containerName="dnsmasq-dns" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.252382 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8acd227-75af-40c7-ab98-b1e5b4bcab38" containerName="dnsmasq-dns" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.252635 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8acd227-75af-40c7-ab98-b1e5b4bcab38" containerName="dnsmasq-dns" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.252662 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbb238f4-41a6-4299-8d81-887a0957e5d2" containerName="keystone-bootstrap" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.253726 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6cx5g" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.260938 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7tb22" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.261149 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.261312 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.261523 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.261726 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.278189 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6cx5g"] Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.368586 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/676bf17d-3f3b-4159-97c3-7c1c51147145-config-data\") pod \"keystone-bootstrap-6cx5g\" (UID: \"676bf17d-3f3b-4159-97c3-7c1c51147145\") " pod="openstack/keystone-bootstrap-6cx5g" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.368666 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/676bf17d-3f3b-4159-97c3-7c1c51147145-fernet-keys\") pod \"keystone-bootstrap-6cx5g\" (UID: \"676bf17d-3f3b-4159-97c3-7c1c51147145\") " pod="openstack/keystone-bootstrap-6cx5g" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.368918 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/676bf17d-3f3b-4159-97c3-7c1c51147145-scripts\") pod \"keystone-bootstrap-6cx5g\" (UID: \"676bf17d-3f3b-4159-97c3-7c1c51147145\") " pod="openstack/keystone-bootstrap-6cx5g" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.368986 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/676bf17d-3f3b-4159-97c3-7c1c51147145-combined-ca-bundle\") pod \"keystone-bootstrap-6cx5g\" (UID: \"676bf17d-3f3b-4159-97c3-7c1c51147145\") " pod="openstack/keystone-bootstrap-6cx5g" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.369272 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6x2f\" (UniqueName: \"kubernetes.io/projected/676bf17d-3f3b-4159-97c3-7c1c51147145-kube-api-access-d6x2f\") pod \"keystone-bootstrap-6cx5g\" (UID: \"676bf17d-3f3b-4159-97c3-7c1c51147145\") " pod="openstack/keystone-bootstrap-6cx5g" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.369312 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/676bf17d-3f3b-4159-97c3-7c1c51147145-credential-keys\") pod \"keystone-bootstrap-6cx5g\" (UID: \"676bf17d-3f3b-4159-97c3-7c1c51147145\") " pod="openstack/keystone-bootstrap-6cx5g" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.471355 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6x2f\" (UniqueName: \"kubernetes.io/projected/676bf17d-3f3b-4159-97c3-7c1c51147145-kube-api-access-d6x2f\") pod \"keystone-bootstrap-6cx5g\" (UID: \"676bf17d-3f3b-4159-97c3-7c1c51147145\") " pod="openstack/keystone-bootstrap-6cx5g" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.471402 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/676bf17d-3f3b-4159-97c3-7c1c51147145-credential-keys\") pod \"keystone-bootstrap-6cx5g\" (UID: \"676bf17d-3f3b-4159-97c3-7c1c51147145\") " pod="openstack/keystone-bootstrap-6cx5g" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.471504 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/676bf17d-3f3b-4159-97c3-7c1c51147145-config-data\") pod \"keystone-bootstrap-6cx5g\" (UID: \"676bf17d-3f3b-4159-97c3-7c1c51147145\") " pod="openstack/keystone-bootstrap-6cx5g" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.471536 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/676bf17d-3f3b-4159-97c3-7c1c51147145-fernet-keys\") pod \"keystone-bootstrap-6cx5g\" (UID: \"676bf17d-3f3b-4159-97c3-7c1c51147145\") " pod="openstack/keystone-bootstrap-6cx5g" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.471620 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/676bf17d-3f3b-4159-97c3-7c1c51147145-scripts\") pod \"keystone-bootstrap-6cx5g\" (UID: \"676bf17d-3f3b-4159-97c3-7c1c51147145\") " pod="openstack/keystone-bootstrap-6cx5g" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.471649 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/676bf17d-3f3b-4159-97c3-7c1c51147145-combined-ca-bundle\") pod \"keystone-bootstrap-6cx5g\" (UID: \"676bf17d-3f3b-4159-97c3-7c1c51147145\") " pod="openstack/keystone-bootstrap-6cx5g" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.476547 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/676bf17d-3f3b-4159-97c3-7c1c51147145-combined-ca-bundle\") pod \"keystone-bootstrap-6cx5g\" (UID: \"676bf17d-3f3b-4159-97c3-7c1c51147145\") " pod="openstack/keystone-bootstrap-6cx5g" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.478671 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/676bf17d-3f3b-4159-97c3-7c1c51147145-fernet-keys\") pod \"keystone-bootstrap-6cx5g\" (UID: \"676bf17d-3f3b-4159-97c3-7c1c51147145\") " pod="openstack/keystone-bootstrap-6cx5g" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.479568 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/676bf17d-3f3b-4159-97c3-7c1c51147145-config-data\") pod \"keystone-bootstrap-6cx5g\" (UID: \"676bf17d-3f3b-4159-97c3-7c1c51147145\") " pod="openstack/keystone-bootstrap-6cx5g" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.486591 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/676bf17d-3f3b-4159-97c3-7c1c51147145-credential-keys\") pod \"keystone-bootstrap-6cx5g\" (UID: 
\"676bf17d-3f3b-4159-97c3-7c1c51147145\") " pod="openstack/keystone-bootstrap-6cx5g" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.489397 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/676bf17d-3f3b-4159-97c3-7c1c51147145-scripts\") pod \"keystone-bootstrap-6cx5g\" (UID: \"676bf17d-3f3b-4159-97c3-7c1c51147145\") " pod="openstack/keystone-bootstrap-6cx5g" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.492343 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6x2f\" (UniqueName: \"kubernetes.io/projected/676bf17d-3f3b-4159-97c3-7c1c51147145-kube-api-access-d6x2f\") pod \"keystone-bootstrap-6cx5g\" (UID: \"676bf17d-3f3b-4159-97c3-7c1c51147145\") " pod="openstack/keystone-bootstrap-6cx5g" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.540690 4874 scope.go:117] "RemoveContainer" containerID="c92e1f8322644c644a4ece6e62daa0ce1c839baba70ebe251f513334653ac695" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.576233 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-6cx5g" Feb 17 16:24:39 crc kubenswrapper[4874]: E0217 16:24:39.616093 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 17 16:24:39 crc kubenswrapper[4874]: E0217 16:24:39.616270 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tl
s-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nf2d9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-jrg8w_openstack(10d748cd-cbae-4113-bfed-39c4511a879f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:24:39 crc kubenswrapper[4874]: E0217 16:24:39.617489 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-jrg8w" podUID="10d748cd-cbae-4113-bfed-39c4511a879f" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.910607 4874 scope.go:117] "RemoveContainer" containerID="2fc05c44769091602071d271bd065dadb82cc721f3dfbfe477ec6a04a5d177a1" Feb 17 16:24:39 crc kubenswrapper[4874]: E0217 16:24:39.911273 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fc05c44769091602071d271bd065dadb82cc721f3dfbfe477ec6a04a5d177a1\": container with ID starting with 2fc05c44769091602071d271bd065dadb82cc721f3dfbfe477ec6a04a5d177a1 not found: ID does not exist" 
containerID="2fc05c44769091602071d271bd065dadb82cc721f3dfbfe477ec6a04a5d177a1" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.911304 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fc05c44769091602071d271bd065dadb82cc721f3dfbfe477ec6a04a5d177a1"} err="failed to get container status \"2fc05c44769091602071d271bd065dadb82cc721f3dfbfe477ec6a04a5d177a1\": rpc error: code = NotFound desc = could not find container \"2fc05c44769091602071d271bd065dadb82cc721f3dfbfe477ec6a04a5d177a1\": container with ID starting with 2fc05c44769091602071d271bd065dadb82cc721f3dfbfe477ec6a04a5d177a1 not found: ID does not exist" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.911331 4874 scope.go:117] "RemoveContainer" containerID="c92e1f8322644c644a4ece6e62daa0ce1c839baba70ebe251f513334653ac695" Feb 17 16:24:39 crc kubenswrapper[4874]: E0217 16:24:39.911525 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c92e1f8322644c644a4ece6e62daa0ce1c839baba70ebe251f513334653ac695\": container with ID starting with c92e1f8322644c644a4ece6e62daa0ce1c839baba70ebe251f513334653ac695 not found: ID does not exist" containerID="c92e1f8322644c644a4ece6e62daa0ce1c839baba70ebe251f513334653ac695" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.911549 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c92e1f8322644c644a4ece6e62daa0ce1c839baba70ebe251f513334653ac695"} err="failed to get container status \"c92e1f8322644c644a4ece6e62daa0ce1c839baba70ebe251f513334653ac695\": rpc error: code = NotFound desc = could not find container \"c92e1f8322644c644a4ece6e62daa0ce1c839baba70ebe251f513334653ac695\": container with ID starting with c92e1f8322644c644a4ece6e62daa0ce1c839baba70ebe251f513334653ac695 not found: ID does not exist" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.911562 4874 scope.go:117] 
"RemoveContainer" containerID="2fc05c44769091602071d271bd065dadb82cc721f3dfbfe477ec6a04a5d177a1" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.911748 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fc05c44769091602071d271bd065dadb82cc721f3dfbfe477ec6a04a5d177a1"} err="failed to get container status \"2fc05c44769091602071d271bd065dadb82cc721f3dfbfe477ec6a04a5d177a1\": rpc error: code = NotFound desc = could not find container \"2fc05c44769091602071d271bd065dadb82cc721f3dfbfe477ec6a04a5d177a1\": container with ID starting with 2fc05c44769091602071d271bd065dadb82cc721f3dfbfe477ec6a04a5d177a1 not found: ID does not exist" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.911766 4874 scope.go:117] "RemoveContainer" containerID="c92e1f8322644c644a4ece6e62daa0ce1c839baba70ebe251f513334653ac695" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.911989 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c92e1f8322644c644a4ece6e62daa0ce1c839baba70ebe251f513334653ac695"} err="failed to get container status \"c92e1f8322644c644a4ece6e62daa0ce1c839baba70ebe251f513334653ac695\": rpc error: code = NotFound desc = could not find container \"c92e1f8322644c644a4ece6e62daa0ce1c839baba70ebe251f513334653ac695\": container with ID starting with c92e1f8322644c644a4ece6e62daa0ce1c839baba70ebe251f513334653ac695 not found: ID does not exist" Feb 17 16:24:39 crc kubenswrapper[4874]: I0217 16:24:39.912069 4874 scope.go:117] "RemoveContainer" containerID="d4d1d7951cf97c164a99f5a4e2d0bfb8c312f10dc4fdb624a7f398023145428e" Feb 17 16:24:40 crc kubenswrapper[4874]: I0217 16:24:40.174609 4874 scope.go:117] "RemoveContainer" containerID="d4e35323655fb45453cf967fa14223badd1e96e0bc3d6caa6a1d023dd2026f73" Feb 17 16:24:40 crc kubenswrapper[4874]: I0217 16:24:40.298229 4874 scope.go:117] "RemoveContainer" containerID="d4d1d7951cf97c164a99f5a4e2d0bfb8c312f10dc4fdb624a7f398023145428e" 
Feb 17 16:24:40 crc kubenswrapper[4874]: E0217 16:24:40.298661 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4d1d7951cf97c164a99f5a4e2d0bfb8c312f10dc4fdb624a7f398023145428e\": container with ID starting with d4d1d7951cf97c164a99f5a4e2d0bfb8c312f10dc4fdb624a7f398023145428e not found: ID does not exist" containerID="d4d1d7951cf97c164a99f5a4e2d0bfb8c312f10dc4fdb624a7f398023145428e" Feb 17 16:24:40 crc kubenswrapper[4874]: I0217 16:24:40.298703 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4d1d7951cf97c164a99f5a4e2d0bfb8c312f10dc4fdb624a7f398023145428e"} err="failed to get container status \"d4d1d7951cf97c164a99f5a4e2d0bfb8c312f10dc4fdb624a7f398023145428e\": rpc error: code = NotFound desc = could not find container \"d4d1d7951cf97c164a99f5a4e2d0bfb8c312f10dc4fdb624a7f398023145428e\": container with ID starting with d4d1d7951cf97c164a99f5a4e2d0bfb8c312f10dc4fdb624a7f398023145428e not found: ID does not exist" Feb 17 16:24:40 crc kubenswrapper[4874]: I0217 16:24:40.298739 4874 scope.go:117] "RemoveContainer" containerID="d4e35323655fb45453cf967fa14223badd1e96e0bc3d6caa6a1d023dd2026f73" Feb 17 16:24:40 crc kubenswrapper[4874]: E0217 16:24:40.299145 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4e35323655fb45453cf967fa14223badd1e96e0bc3d6caa6a1d023dd2026f73\": container with ID starting with d4e35323655fb45453cf967fa14223badd1e96e0bc3d6caa6a1d023dd2026f73 not found: ID does not exist" containerID="d4e35323655fb45453cf967fa14223badd1e96e0bc3d6caa6a1d023dd2026f73" Feb 17 16:24:40 crc kubenswrapper[4874]: I0217 16:24:40.299192 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4e35323655fb45453cf967fa14223badd1e96e0bc3d6caa6a1d023dd2026f73"} err="failed to get container status 
\"d4e35323655fb45453cf967fa14223badd1e96e0bc3d6caa6a1d023dd2026f73\": rpc error: code = NotFound desc = could not find container \"d4e35323655fb45453cf967fa14223badd1e96e0bc3d6caa6a1d023dd2026f73\": container with ID starting with d4e35323655fb45453cf967fa14223badd1e96e0bc3d6caa6a1d023dd2026f73 not found: ID does not exist" Feb 17 16:24:40 crc kubenswrapper[4874]: I0217 16:24:40.299229 4874 scope.go:117] "RemoveContainer" containerID="d4d1d7951cf97c164a99f5a4e2d0bfb8c312f10dc4fdb624a7f398023145428e" Feb 17 16:24:40 crc kubenswrapper[4874]: I0217 16:24:40.299523 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4d1d7951cf97c164a99f5a4e2d0bfb8c312f10dc4fdb624a7f398023145428e"} err="failed to get container status \"d4d1d7951cf97c164a99f5a4e2d0bfb8c312f10dc4fdb624a7f398023145428e\": rpc error: code = NotFound desc = could not find container \"d4d1d7951cf97c164a99f5a4e2d0bfb8c312f10dc4fdb624a7f398023145428e\": container with ID starting with d4d1d7951cf97c164a99f5a4e2d0bfb8c312f10dc4fdb624a7f398023145428e not found: ID does not exist" Feb 17 16:24:40 crc kubenswrapper[4874]: I0217 16:24:40.299542 4874 scope.go:117] "RemoveContainer" containerID="d4e35323655fb45453cf967fa14223badd1e96e0bc3d6caa6a1d023dd2026f73" Feb 17 16:24:40 crc kubenswrapper[4874]: I0217 16:24:40.299733 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4e35323655fb45453cf967fa14223badd1e96e0bc3d6caa6a1d023dd2026f73"} err="failed to get container status \"d4e35323655fb45453cf967fa14223badd1e96e0bc3d6caa6a1d023dd2026f73\": rpc error: code = NotFound desc = could not find container \"d4e35323655fb45453cf967fa14223badd1e96e0bc3d6caa6a1d023dd2026f73\": container with ID starting with d4e35323655fb45453cf967fa14223badd1e96e0bc3d6caa6a1d023dd2026f73 not found: ID does not exist" Feb 17 16:24:40 crc kubenswrapper[4874]: I0217 16:24:40.299752 4874 scope.go:117] "RemoveContainer" 
containerID="948fd4a55c7ab06ef10bbf950ec52f4c6f033dd7de796b988ad2bfab487415ca" Feb 17 16:24:40 crc kubenswrapper[4874]: I0217 16:24:40.340774 4874 scope.go:117] "RemoveContainer" containerID="b60ff46a8dc99b2af9dbb03f7c00addbc13ef5decd8fc0dbe2b9c0cd2bec5cd4" Feb 17 16:24:40 crc kubenswrapper[4874]: E0217 16:24:40.403002 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-jrg8w" podUID="10d748cd-cbae-4113-bfed-39c4511a879f" Feb 17 16:24:40 crc kubenswrapper[4874]: I0217 16:24:40.474251 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbb238f4-41a6-4299-8d81-887a0957e5d2" path="/var/lib/kubelet/pods/dbb238f4-41a6-4299-8d81-887a0957e5d2/volumes" Feb 17 16:24:40 crc kubenswrapper[4874]: I0217 16:24:40.506736 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:24:40 crc kubenswrapper[4874]: I0217 16:24:40.548189 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6cx5g"] Feb 17 16:24:40 crc kubenswrapper[4874]: W0217 16:24:40.570531 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod676bf17d_3f3b_4159_97c3_7c1c51147145.slice/crio-c10ba799958bd0ff6f602af4e7fb098f610caa50150482a6b518f9bb3973eaab WatchSource:0}: Error finding container c10ba799958bd0ff6f602af4e7fb098f610caa50150482a6b518f9bb3973eaab: Status 404 returned error can't find the container with id c10ba799958bd0ff6f602af4e7fb098f610caa50150482a6b518f9bb3973eaab Feb 17 16:24:40 crc kubenswrapper[4874]: I0217 16:24:40.601407 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:24:40 crc kubenswrapper[4874]: W0217 16:24:40.613129 4874 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb1babe8_fc1e_42fe_ad26_3c627c6bc73f.slice/crio-2348b160e3c21f44b17a14ecbb385c58129294adb0b5d2b6739f0d3605997206 WatchSource:0}: Error finding container 2348b160e3c21f44b17a14ecbb385c58129294adb0b5d2b6739f0d3605997206: Status 404 returned error can't find the container with id 2348b160e3c21f44b17a14ecbb385c58129294adb0b5d2b6739f0d3605997206 Feb 17 16:24:41 crc kubenswrapper[4874]: I0217 16:24:41.421698 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"57e36f0d-729f-4f32-9685-6453b7c550ac","Type":"ContainerStarted","Data":"e2e62e569ab2a91fa0d6b81c17a0c32ecbc4bc391e57ad6e0d937471cd1196d1"} Feb 17 16:24:41 crc kubenswrapper[4874]: I0217 16:24:41.421926 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"57e36f0d-729f-4f32-9685-6453b7c550ac","Type":"ContainerStarted","Data":"fc0a8dec5b627fb6fb88c09afd086bc5a1178fa425baa0bbc9c9dad7efc8269e"} Feb 17 16:24:41 crc kubenswrapper[4874]: I0217 16:24:41.424577 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0df3ad69-92a9-4a61-9178-619f75dc6f98","Type":"ContainerStarted","Data":"1aa7845dee10a9aea6f399dfaacd9209c40fc209718703608e3b44ce3c454a13"} Feb 17 16:24:41 crc kubenswrapper[4874]: I0217 16:24:41.426089 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6cx5g" event={"ID":"676bf17d-3f3b-4159-97c3-7c1c51147145","Type":"ContainerStarted","Data":"5de00b7c15cf252659e12fd6c7b7320c95cf306f66e8964ceeaac586532f0f2e"} Feb 17 16:24:41 crc kubenswrapper[4874]: I0217 16:24:41.426118 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6cx5g" 
event={"ID":"676bf17d-3f3b-4159-97c3-7c1c51147145","Type":"ContainerStarted","Data":"c10ba799958bd0ff6f602af4e7fb098f610caa50150482a6b518f9bb3973eaab"} Feb 17 16:24:41 crc kubenswrapper[4874]: I0217 16:24:41.432832 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f","Type":"ContainerStarted","Data":"955b09375e4d1b05269a4a63a1baf8be8a1f1e4d8f5cb5b200dc59a9a2f74b3a"} Feb 17 16:24:41 crc kubenswrapper[4874]: I0217 16:24:41.432900 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f","Type":"ContainerStarted","Data":"2348b160e3c21f44b17a14ecbb385c58129294adb0b5d2b6739f0d3605997206"} Feb 17 16:24:41 crc kubenswrapper[4874]: I0217 16:24:41.436430 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lw7kx" event={"ID":"dcf0f49a-5960-41a8-b699-8fb05241ee31","Type":"ContainerStarted","Data":"89218ccd87f4019be0a58e5d6563f00d652f6e0d1558057eb3af797423104580"} Feb 17 16:24:41 crc kubenswrapper[4874]: I0217 16:24:41.449546 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-6cx5g" podStartSLOduration=2.449523395 podStartE2EDuration="2.449523395s" podCreationTimestamp="2026-02-17 16:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:24:41.442749019 +0000 UTC m=+1291.737137600" watchObservedRunningTime="2026-02-17 16:24:41.449523395 +0000 UTC m=+1291.743911966" Feb 17 16:24:41 crc kubenswrapper[4874]: I0217 16:24:41.463872 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-lw7kx" podStartSLOduration=3.53479022 podStartE2EDuration="32.463856355s" podCreationTimestamp="2026-02-17 16:24:09 +0000 UTC" firstStartedPulling="2026-02-17 
16:24:11.246649707 +0000 UTC m=+1261.541038268" lastFinishedPulling="2026-02-17 16:24:40.175715842 +0000 UTC m=+1290.470104403" observedRunningTime="2026-02-17 16:24:41.459912649 +0000 UTC m=+1291.754301210" watchObservedRunningTime="2026-02-17 16:24:41.463856355 +0000 UTC m=+1291.758244916" Feb 17 16:24:42 crc kubenswrapper[4874]: I0217 16:24:42.449783 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f","Type":"ContainerStarted","Data":"a101e4938d0685428284db4ed1f088160a322daf589c50dcb3efe6ef955984f2"} Feb 17 16:24:42 crc kubenswrapper[4874]: I0217 16:24:42.452193 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"57e36f0d-729f-4f32-9685-6453b7c550ac","Type":"ContainerStarted","Data":"c1c4a059e0bfc37b5cfb12008e01da9499f8ed035d0920acdcb712f5697767bf"} Feb 17 16:24:42 crc kubenswrapper[4874]: I0217 16:24:42.535620 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=27.535604952 podStartE2EDuration="27.535604952s" podCreationTimestamp="2026-02-17 16:24:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:24:42.488669747 +0000 UTC m=+1292.783058308" watchObservedRunningTime="2026-02-17 16:24:42.535604952 +0000 UTC m=+1292.829993513" Feb 17 16:24:42 crc kubenswrapper[4874]: I0217 16:24:42.586134 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=27.586109245 podStartE2EDuration="27.586109245s" podCreationTimestamp="2026-02-17 16:24:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:24:42.566666111 +0000 UTC m=+1292.861054672" 
watchObservedRunningTime="2026-02-17 16:24:42.586109245 +0000 UTC m=+1292.880497816" Feb 17 16:24:44 crc kubenswrapper[4874]: I0217 16:24:44.473559 4874 generic.go:334] "Generic (PLEG): container finished" podID="676bf17d-3f3b-4159-97c3-7c1c51147145" containerID="5de00b7c15cf252659e12fd6c7b7320c95cf306f66e8964ceeaac586532f0f2e" exitCode=0 Feb 17 16:24:44 crc kubenswrapper[4874]: I0217 16:24:44.473654 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6cx5g" event={"ID":"676bf17d-3f3b-4159-97c3-7c1c51147145","Type":"ContainerDied","Data":"5de00b7c15cf252659e12fd6c7b7320c95cf306f66e8964ceeaac586532f0f2e"} Feb 17 16:24:45 crc kubenswrapper[4874]: I0217 16:24:45.485404 4874 generic.go:334] "Generic (PLEG): container finished" podID="dcf0f49a-5960-41a8-b699-8fb05241ee31" containerID="89218ccd87f4019be0a58e5d6563f00d652f6e0d1558057eb3af797423104580" exitCode=0 Feb 17 16:24:45 crc kubenswrapper[4874]: I0217 16:24:45.485550 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lw7kx" event={"ID":"dcf0f49a-5960-41a8-b699-8fb05241ee31","Type":"ContainerDied","Data":"89218ccd87f4019be0a58e5d6563f00d652f6e0d1558057eb3af797423104580"} Feb 17 16:24:45 crc kubenswrapper[4874]: I0217 16:24:45.814051 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 17 16:24:45 crc kubenswrapper[4874]: I0217 16:24:45.814540 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 17 16:24:45 crc kubenswrapper[4874]: I0217 16:24:45.814554 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 17 16:24:45 crc kubenswrapper[4874]: I0217 16:24:45.814568 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 17 16:24:45 crc kubenswrapper[4874]: I0217 
16:24:45.851014 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 17 16:24:45 crc kubenswrapper[4874]: I0217 16:24:45.863335 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 17 16:24:45 crc kubenswrapper[4874]: I0217 16:24:45.881024 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 17 16:24:45 crc kubenswrapper[4874]: I0217 16:24:45.881427 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 17 16:24:45 crc kubenswrapper[4874]: I0217 16:24:45.881483 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 17 16:24:45 crc kubenswrapper[4874]: I0217 16:24:45.881497 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 17 16:24:45 crc kubenswrapper[4874]: I0217 16:24:45.940960 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 17 16:24:45 crc kubenswrapper[4874]: I0217 16:24:45.941470 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.108428 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-6cx5g" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.126138 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/676bf17d-3f3b-4159-97c3-7c1c51147145-config-data\") pod \"676bf17d-3f3b-4159-97c3-7c1c51147145\" (UID: \"676bf17d-3f3b-4159-97c3-7c1c51147145\") " Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.126235 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6x2f\" (UniqueName: \"kubernetes.io/projected/676bf17d-3f3b-4159-97c3-7c1c51147145-kube-api-access-d6x2f\") pod \"676bf17d-3f3b-4159-97c3-7c1c51147145\" (UID: \"676bf17d-3f3b-4159-97c3-7c1c51147145\") " Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.126292 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/676bf17d-3f3b-4159-97c3-7c1c51147145-combined-ca-bundle\") pod \"676bf17d-3f3b-4159-97c3-7c1c51147145\" (UID: \"676bf17d-3f3b-4159-97c3-7c1c51147145\") " Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.126475 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/676bf17d-3f3b-4159-97c3-7c1c51147145-fernet-keys\") pod \"676bf17d-3f3b-4159-97c3-7c1c51147145\" (UID: \"676bf17d-3f3b-4159-97c3-7c1c51147145\") " Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.126586 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/676bf17d-3f3b-4159-97c3-7c1c51147145-scripts\") pod \"676bf17d-3f3b-4159-97c3-7c1c51147145\" (UID: \"676bf17d-3f3b-4159-97c3-7c1c51147145\") " Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.126764 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/676bf17d-3f3b-4159-97c3-7c1c51147145-credential-keys\") pod \"676bf17d-3f3b-4159-97c3-7c1c51147145\" (UID: \"676bf17d-3f3b-4159-97c3-7c1c51147145\") " Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.135847 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/676bf17d-3f3b-4159-97c3-7c1c51147145-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "676bf17d-3f3b-4159-97c3-7c1c51147145" (UID: "676bf17d-3f3b-4159-97c3-7c1c51147145"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.138321 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/676bf17d-3f3b-4159-97c3-7c1c51147145-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "676bf17d-3f3b-4159-97c3-7c1c51147145" (UID: "676bf17d-3f3b-4159-97c3-7c1c51147145"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.138941 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/676bf17d-3f3b-4159-97c3-7c1c51147145-scripts" (OuterVolumeSpecName: "scripts") pod "676bf17d-3f3b-4159-97c3-7c1c51147145" (UID: "676bf17d-3f3b-4159-97c3-7c1c51147145"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.143290 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/676bf17d-3f3b-4159-97c3-7c1c51147145-kube-api-access-d6x2f" (OuterVolumeSpecName: "kube-api-access-d6x2f") pod "676bf17d-3f3b-4159-97c3-7c1c51147145" (UID: "676bf17d-3f3b-4159-97c3-7c1c51147145"). InnerVolumeSpecName "kube-api-access-d6x2f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.174189 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/676bf17d-3f3b-4159-97c3-7c1c51147145-config-data" (OuterVolumeSpecName: "config-data") pod "676bf17d-3f3b-4159-97c3-7c1c51147145" (UID: "676bf17d-3f3b-4159-97c3-7c1c51147145"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.210192 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/676bf17d-3f3b-4159-97c3-7c1c51147145-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "676bf17d-3f3b-4159-97c3-7c1c51147145" (UID: "676bf17d-3f3b-4159-97c3-7c1c51147145"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.229203 4874 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/676bf17d-3f3b-4159-97c3-7c1c51147145-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.229244 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/676bf17d-3f3b-4159-97c3-7c1c51147145-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.229266 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6x2f\" (UniqueName: \"kubernetes.io/projected/676bf17d-3f3b-4159-97c3-7c1c51147145-kube-api-access-d6x2f\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.229281 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/676bf17d-3f3b-4159-97c3-7c1c51147145-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.229293 4874 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/676bf17d-3f3b-4159-97c3-7c1c51147145-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.229305 4874 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/676bf17d-3f3b-4159-97c3-7c1c51147145-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.510619 4874 generic.go:334] "Generic (PLEG): container finished" podID="46bb9425-0e75-4b58-b0f7-f7ad6998255b" containerID="fed82e020d9b641e58a9873a2d5a5407cabee53064d987fa9cac6d8298d4b1da" exitCode=0 Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.510658 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-pfkph" event={"ID":"46bb9425-0e75-4b58-b0f7-f7ad6998255b","Type":"ContainerDied","Data":"fed82e020d9b641e58a9873a2d5a5407cabee53064d987fa9cac6d8298d4b1da"} Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.516612 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dhtc8" event={"ID":"a4a96348-a1c6-4470-ad3a-d87cc20c8d3c","Type":"ContainerStarted","Data":"77cdcf6bdc0227dfe7b19a34bfd72fddf68434979061a126395ac7d9c23d3534"} Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.519190 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0df3ad69-92a9-4a61-9178-619f75dc6f98","Type":"ContainerStarted","Data":"f738dd1a645578b91b2d975aebdda38d5394299b460c3c4ac58328b42ee202fb"} Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.524711 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6cx5g" 
event={"ID":"676bf17d-3f3b-4159-97c3-7c1c51147145","Type":"ContainerDied","Data":"c10ba799958bd0ff6f602af4e7fb098f610caa50150482a6b518f9bb3973eaab"} Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.524743 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c10ba799958bd0ff6f602af4e7fb098f610caa50150482a6b518f9bb3973eaab" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.524799 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6cx5g" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.528282 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-k5j4f" event={"ID":"96118c9a-6b15-48a8-b6d9-a2146dc0182c","Type":"ContainerStarted","Data":"77ef09ba26fdd2e92436f06fe8cd8993b60b4e40e13de49726732fd41ac660e4"} Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.578797 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-k5j4f" podStartSLOduration=2.124432573 podStartE2EDuration="37.578775337s" podCreationTimestamp="2026-02-17 16:24:09 +0000 UTC" firstStartedPulling="2026-02-17 16:24:10.680491799 +0000 UTC m=+1260.974880360" lastFinishedPulling="2026-02-17 16:24:46.134834563 +0000 UTC m=+1296.429223124" observedRunningTime="2026-02-17 16:24:46.554633717 +0000 UTC m=+1296.849022278" watchObservedRunningTime="2026-02-17 16:24:46.578775337 +0000 UTC m=+1296.873163918" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.604867 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-dhtc8" podStartSLOduration=2.7625817980000003 podStartE2EDuration="37.604849593s" podCreationTimestamp="2026-02-17 16:24:09 +0000 UTC" firstStartedPulling="2026-02-17 16:24:11.295170962 +0000 UTC m=+1261.589559523" lastFinishedPulling="2026-02-17 16:24:46.137438757 +0000 UTC m=+1296.431827318" observedRunningTime="2026-02-17 16:24:46.587719815 +0000 UTC 
m=+1296.882108386" watchObservedRunningTime="2026-02-17 16:24:46.604849593 +0000 UTC m=+1296.899238144" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.608087 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-567c8c9c6c-dn66l"] Feb 17 16:24:46 crc kubenswrapper[4874]: E0217 16:24:46.608507 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="676bf17d-3f3b-4159-97c3-7c1c51147145" containerName="keystone-bootstrap" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.608525 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="676bf17d-3f3b-4159-97c3-7c1c51147145" containerName="keystone-bootstrap" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.608738 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="676bf17d-3f3b-4159-97c3-7c1c51147145" containerName="keystone-bootstrap" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.609399 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-567c8c9c6c-dn66l" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.624179 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-567c8c9c6c-dn66l"] Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.636812 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.637212 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.637355 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.637513 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.637558 4874 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"cert-keystone-internal-svc" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.637828 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7tb22" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.747797 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a0c2f24-e449-460d-8bcd-269d5ee4994f-combined-ca-bundle\") pod \"keystone-567c8c9c6c-dn66l\" (UID: \"4a0c2f24-e449-460d-8bcd-269d5ee4994f\") " pod="openstack/keystone-567c8c9c6c-dn66l" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.747854 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a0c2f24-e449-460d-8bcd-269d5ee4994f-public-tls-certs\") pod \"keystone-567c8c9c6c-dn66l\" (UID: \"4a0c2f24-e449-460d-8bcd-269d5ee4994f\") " pod="openstack/keystone-567c8c9c6c-dn66l" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.747996 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a0c2f24-e449-460d-8bcd-269d5ee4994f-scripts\") pod \"keystone-567c8c9c6c-dn66l\" (UID: \"4a0c2f24-e449-460d-8bcd-269d5ee4994f\") " pod="openstack/keystone-567c8c9c6c-dn66l" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.748013 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4a0c2f24-e449-460d-8bcd-269d5ee4994f-fernet-keys\") pod \"keystone-567c8c9c6c-dn66l\" (UID: \"4a0c2f24-e449-460d-8bcd-269d5ee4994f\") " pod="openstack/keystone-567c8c9c6c-dn66l" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.748040 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/4a0c2f24-e449-460d-8bcd-269d5ee4994f-internal-tls-certs\") pod \"keystone-567c8c9c6c-dn66l\" (UID: \"4a0c2f24-e449-460d-8bcd-269d5ee4994f\") " pod="openstack/keystone-567c8c9c6c-dn66l" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.748101 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a0c2f24-e449-460d-8bcd-269d5ee4994f-config-data\") pod \"keystone-567c8c9c6c-dn66l\" (UID: \"4a0c2f24-e449-460d-8bcd-269d5ee4994f\") " pod="openstack/keystone-567c8c9c6c-dn66l" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.748122 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4a0c2f24-e449-460d-8bcd-269d5ee4994f-credential-keys\") pod \"keystone-567c8c9c6c-dn66l\" (UID: \"4a0c2f24-e449-460d-8bcd-269d5ee4994f\") " pod="openstack/keystone-567c8c9c6c-dn66l" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.748144 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xq5l\" (UniqueName: \"kubernetes.io/projected/4a0c2f24-e449-460d-8bcd-269d5ee4994f-kube-api-access-5xq5l\") pod \"keystone-567c8c9c6c-dn66l\" (UID: \"4a0c2f24-e449-460d-8bcd-269d5ee4994f\") " pod="openstack/keystone-567c8c9c6c-dn66l" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.850495 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a0c2f24-e449-460d-8bcd-269d5ee4994f-scripts\") pod \"keystone-567c8c9c6c-dn66l\" (UID: \"4a0c2f24-e449-460d-8bcd-269d5ee4994f\") " pod="openstack/keystone-567c8c9c6c-dn66l" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.850538 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/4a0c2f24-e449-460d-8bcd-269d5ee4994f-fernet-keys\") pod \"keystone-567c8c9c6c-dn66l\" (UID: \"4a0c2f24-e449-460d-8bcd-269d5ee4994f\") " pod="openstack/keystone-567c8c9c6c-dn66l" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.850567 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a0c2f24-e449-460d-8bcd-269d5ee4994f-internal-tls-certs\") pod \"keystone-567c8c9c6c-dn66l\" (UID: \"4a0c2f24-e449-460d-8bcd-269d5ee4994f\") " pod="openstack/keystone-567c8c9c6c-dn66l" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.850609 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a0c2f24-e449-460d-8bcd-269d5ee4994f-config-data\") pod \"keystone-567c8c9c6c-dn66l\" (UID: \"4a0c2f24-e449-460d-8bcd-269d5ee4994f\") " pod="openstack/keystone-567c8c9c6c-dn66l" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.850628 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4a0c2f24-e449-460d-8bcd-269d5ee4994f-credential-keys\") pod \"keystone-567c8c9c6c-dn66l\" (UID: \"4a0c2f24-e449-460d-8bcd-269d5ee4994f\") " pod="openstack/keystone-567c8c9c6c-dn66l" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.850649 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xq5l\" (UniqueName: \"kubernetes.io/projected/4a0c2f24-e449-460d-8bcd-269d5ee4994f-kube-api-access-5xq5l\") pod \"keystone-567c8c9c6c-dn66l\" (UID: \"4a0c2f24-e449-460d-8bcd-269d5ee4994f\") " pod="openstack/keystone-567c8c9c6c-dn66l" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.850709 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4a0c2f24-e449-460d-8bcd-269d5ee4994f-combined-ca-bundle\") pod \"keystone-567c8c9c6c-dn66l\" (UID: \"4a0c2f24-e449-460d-8bcd-269d5ee4994f\") " pod="openstack/keystone-567c8c9c6c-dn66l" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.850734 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a0c2f24-e449-460d-8bcd-269d5ee4994f-public-tls-certs\") pod \"keystone-567c8c9c6c-dn66l\" (UID: \"4a0c2f24-e449-460d-8bcd-269d5ee4994f\") " pod="openstack/keystone-567c8c9c6c-dn66l" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.872912 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a0c2f24-e449-460d-8bcd-269d5ee4994f-combined-ca-bundle\") pod \"keystone-567c8c9c6c-dn66l\" (UID: \"4a0c2f24-e449-460d-8bcd-269d5ee4994f\") " pod="openstack/keystone-567c8c9c6c-dn66l" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.874534 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a0c2f24-e449-460d-8bcd-269d5ee4994f-public-tls-certs\") pod \"keystone-567c8c9c6c-dn66l\" (UID: \"4a0c2f24-e449-460d-8bcd-269d5ee4994f\") " pod="openstack/keystone-567c8c9c6c-dn66l" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.884793 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a0c2f24-e449-460d-8bcd-269d5ee4994f-config-data\") pod \"keystone-567c8c9c6c-dn66l\" (UID: \"4a0c2f24-e449-460d-8bcd-269d5ee4994f\") " pod="openstack/keystone-567c8c9c6c-dn66l" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.885282 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4a0c2f24-e449-460d-8bcd-269d5ee4994f-fernet-keys\") pod \"keystone-567c8c9c6c-dn66l\" (UID: 
\"4a0c2f24-e449-460d-8bcd-269d5ee4994f\") " pod="openstack/keystone-567c8c9c6c-dn66l" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.885947 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a0c2f24-e449-460d-8bcd-269d5ee4994f-scripts\") pod \"keystone-567c8c9c6c-dn66l\" (UID: \"4a0c2f24-e449-460d-8bcd-269d5ee4994f\") " pod="openstack/keystone-567c8c9c6c-dn66l" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.886597 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4a0c2f24-e449-460d-8bcd-269d5ee4994f-credential-keys\") pod \"keystone-567c8c9c6c-dn66l\" (UID: \"4a0c2f24-e449-460d-8bcd-269d5ee4994f\") " pod="openstack/keystone-567c8c9c6c-dn66l" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.893189 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a0c2f24-e449-460d-8bcd-269d5ee4994f-internal-tls-certs\") pod \"keystone-567c8c9c6c-dn66l\" (UID: \"4a0c2f24-e449-460d-8bcd-269d5ee4994f\") " pod="openstack/keystone-567c8c9c6c-dn66l" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.929440 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xq5l\" (UniqueName: \"kubernetes.io/projected/4a0c2f24-e449-460d-8bcd-269d5ee4994f-kube-api-access-5xq5l\") pod \"keystone-567c8c9c6c-dn66l\" (UID: \"4a0c2f24-e449-460d-8bcd-269d5ee4994f\") " pod="openstack/keystone-567c8c9c6c-dn66l" Feb 17 16:24:46 crc kubenswrapper[4874]: I0217 16:24:46.950230 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-567c8c9c6c-dn66l" Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.193250 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-lw7kx" Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.260685 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nt647\" (UniqueName: \"kubernetes.io/projected/dcf0f49a-5960-41a8-b699-8fb05241ee31-kube-api-access-nt647\") pod \"dcf0f49a-5960-41a8-b699-8fb05241ee31\" (UID: \"dcf0f49a-5960-41a8-b699-8fb05241ee31\") " Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.260826 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcf0f49a-5960-41a8-b699-8fb05241ee31-config-data\") pod \"dcf0f49a-5960-41a8-b699-8fb05241ee31\" (UID: \"dcf0f49a-5960-41a8-b699-8fb05241ee31\") " Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.260910 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcf0f49a-5960-41a8-b699-8fb05241ee31-combined-ca-bundle\") pod \"dcf0f49a-5960-41a8-b699-8fb05241ee31\" (UID: \"dcf0f49a-5960-41a8-b699-8fb05241ee31\") " Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.260976 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcf0f49a-5960-41a8-b699-8fb05241ee31-logs\") pod \"dcf0f49a-5960-41a8-b699-8fb05241ee31\" (UID: \"dcf0f49a-5960-41a8-b699-8fb05241ee31\") " Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.261013 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcf0f49a-5960-41a8-b699-8fb05241ee31-scripts\") pod \"dcf0f49a-5960-41a8-b699-8fb05241ee31\" (UID: \"dcf0f49a-5960-41a8-b699-8fb05241ee31\") " Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.261833 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/dcf0f49a-5960-41a8-b699-8fb05241ee31-logs" (OuterVolumeSpecName: "logs") pod "dcf0f49a-5960-41a8-b699-8fb05241ee31" (UID: "dcf0f49a-5960-41a8-b699-8fb05241ee31"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.266562 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcf0f49a-5960-41a8-b699-8fb05241ee31-scripts" (OuterVolumeSpecName: "scripts") pod "dcf0f49a-5960-41a8-b699-8fb05241ee31" (UID: "dcf0f49a-5960-41a8-b699-8fb05241ee31"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.277305 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcf0f49a-5960-41a8-b699-8fb05241ee31-kube-api-access-nt647" (OuterVolumeSpecName: "kube-api-access-nt647") pod "dcf0f49a-5960-41a8-b699-8fb05241ee31" (UID: "dcf0f49a-5960-41a8-b699-8fb05241ee31"). InnerVolumeSpecName "kube-api-access-nt647". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.321535 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcf0f49a-5960-41a8-b699-8fb05241ee31-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dcf0f49a-5960-41a8-b699-8fb05241ee31" (UID: "dcf0f49a-5960-41a8-b699-8fb05241ee31"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.369394 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcf0f49a-5960-41a8-b699-8fb05241ee31-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.369441 4874 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcf0f49a-5960-41a8-b699-8fb05241ee31-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.369453 4874 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcf0f49a-5960-41a8-b699-8fb05241ee31-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.369463 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nt647\" (UniqueName: \"kubernetes.io/projected/dcf0f49a-5960-41a8-b699-8fb05241ee31-kube-api-access-nt647\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.397065 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcf0f49a-5960-41a8-b699-8fb05241ee31-config-data" (OuterVolumeSpecName: "config-data") pod "dcf0f49a-5960-41a8-b699-8fb05241ee31" (UID: "dcf0f49a-5960-41a8-b699-8fb05241ee31"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.415288 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-567c8c9c6c-dn66l"] Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.471667 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcf0f49a-5960-41a8-b699-8fb05241ee31-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.551007 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-lw7kx" Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.551008 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lw7kx" event={"ID":"dcf0f49a-5960-41a8-b699-8fb05241ee31","Type":"ContainerDied","Data":"f8214dc24906039872e33da6fe633ee1f17957f14496b1f0a7b35af58e6ddcb9"} Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.555730 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8214dc24906039872e33da6fe633ee1f17957f14496b1f0a7b35af58e6ddcb9" Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.555754 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-567c8c9c6c-dn66l" event={"ID":"4a0c2f24-e449-460d-8bcd-269d5ee4994f","Type":"ContainerStarted","Data":"0e2b617cad8c9ffb2172ca543993498c27432e98a1142075b4e7b4c1701a6d84"} Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.814207 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-ffffff886-rsf5g"] Feb 17 16:24:47 crc kubenswrapper[4874]: E0217 16:24:47.814850 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcf0f49a-5960-41a8-b699-8fb05241ee31" containerName="placement-db-sync" Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.814867 4874 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="dcf0f49a-5960-41a8-b699-8fb05241ee31" containerName="placement-db-sync" Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.815046 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcf0f49a-5960-41a8-b699-8fb05241ee31" containerName="placement-db-sync" Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.816224 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ffffff886-rsf5g" Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.821658 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-cmjcc" Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.821846 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.821970 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.822091 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.822284 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.857217 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ffffff886-rsf5g"] Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.892766 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9e74f73-675f-46bf-8a70-cd1101995839-public-tls-certs\") pod \"placement-ffffff886-rsf5g\" (UID: \"f9e74f73-675f-46bf-8a70-cd1101995839\") " pod="openstack/placement-ffffff886-rsf5g" Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.892832 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9e74f73-675f-46bf-8a70-cd1101995839-combined-ca-bundle\") pod \"placement-ffffff886-rsf5g\" (UID: \"f9e74f73-675f-46bf-8a70-cd1101995839\") " pod="openstack/placement-ffffff886-rsf5g" Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.892875 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9e74f73-675f-46bf-8a70-cd1101995839-logs\") pod \"placement-ffffff886-rsf5g\" (UID: \"f9e74f73-675f-46bf-8a70-cd1101995839\") " pod="openstack/placement-ffffff886-rsf5g" Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.892906 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9e74f73-675f-46bf-8a70-cd1101995839-scripts\") pod \"placement-ffffff886-rsf5g\" (UID: \"f9e74f73-675f-46bf-8a70-cd1101995839\") " pod="openstack/placement-ffffff886-rsf5g" Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.892931 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9e74f73-675f-46bf-8a70-cd1101995839-internal-tls-certs\") pod \"placement-ffffff886-rsf5g\" (UID: \"f9e74f73-675f-46bf-8a70-cd1101995839\") " pod="openstack/placement-ffffff886-rsf5g" Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.892968 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9e74f73-675f-46bf-8a70-cd1101995839-config-data\") pod \"placement-ffffff886-rsf5g\" (UID: \"f9e74f73-675f-46bf-8a70-cd1101995839\") " pod="openstack/placement-ffffff886-rsf5g" Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.893024 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmb4t\" (UniqueName: \"kubernetes.io/projected/f9e74f73-675f-46bf-8a70-cd1101995839-kube-api-access-fmb4t\") pod \"placement-ffffff886-rsf5g\" (UID: \"f9e74f73-675f-46bf-8a70-cd1101995839\") " pod="openstack/placement-ffffff886-rsf5g" Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.963213 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-pfkph" Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.995192 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmb4t\" (UniqueName: \"kubernetes.io/projected/f9e74f73-675f-46bf-8a70-cd1101995839-kube-api-access-fmb4t\") pod \"placement-ffffff886-rsf5g\" (UID: \"f9e74f73-675f-46bf-8a70-cd1101995839\") " pod="openstack/placement-ffffff886-rsf5g" Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.995554 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9e74f73-675f-46bf-8a70-cd1101995839-public-tls-certs\") pod \"placement-ffffff886-rsf5g\" (UID: \"f9e74f73-675f-46bf-8a70-cd1101995839\") " pod="openstack/placement-ffffff886-rsf5g" Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.995692 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9e74f73-675f-46bf-8a70-cd1101995839-combined-ca-bundle\") pod \"placement-ffffff886-rsf5g\" (UID: \"f9e74f73-675f-46bf-8a70-cd1101995839\") " pod="openstack/placement-ffffff886-rsf5g" Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.995790 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9e74f73-675f-46bf-8a70-cd1101995839-logs\") pod \"placement-ffffff886-rsf5g\" (UID: \"f9e74f73-675f-46bf-8a70-cd1101995839\") " 
pod="openstack/placement-ffffff886-rsf5g" Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.995882 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9e74f73-675f-46bf-8a70-cd1101995839-scripts\") pod \"placement-ffffff886-rsf5g\" (UID: \"f9e74f73-675f-46bf-8a70-cd1101995839\") " pod="openstack/placement-ffffff886-rsf5g" Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.995958 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9e74f73-675f-46bf-8a70-cd1101995839-internal-tls-certs\") pod \"placement-ffffff886-rsf5g\" (UID: \"f9e74f73-675f-46bf-8a70-cd1101995839\") " pod="openstack/placement-ffffff886-rsf5g" Feb 17 16:24:47 crc kubenswrapper[4874]: I0217 16:24:47.996054 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9e74f73-675f-46bf-8a70-cd1101995839-config-data\") pod \"placement-ffffff886-rsf5g\" (UID: \"f9e74f73-675f-46bf-8a70-cd1101995839\") " pod="openstack/placement-ffffff886-rsf5g" Feb 17 16:24:48 crc kubenswrapper[4874]: I0217 16:24:48.000584 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9e74f73-675f-46bf-8a70-cd1101995839-logs\") pod \"placement-ffffff886-rsf5g\" (UID: \"f9e74f73-675f-46bf-8a70-cd1101995839\") " pod="openstack/placement-ffffff886-rsf5g" Feb 17 16:24:48 crc kubenswrapper[4874]: I0217 16:24:48.001269 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9e74f73-675f-46bf-8a70-cd1101995839-scripts\") pod \"placement-ffffff886-rsf5g\" (UID: \"f9e74f73-675f-46bf-8a70-cd1101995839\") " pod="openstack/placement-ffffff886-rsf5g" Feb 17 16:24:48 crc kubenswrapper[4874]: I0217 16:24:48.006879 4874 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9e74f73-675f-46bf-8a70-cd1101995839-combined-ca-bundle\") pod \"placement-ffffff886-rsf5g\" (UID: \"f9e74f73-675f-46bf-8a70-cd1101995839\") " pod="openstack/placement-ffffff886-rsf5g" Feb 17 16:24:48 crc kubenswrapper[4874]: I0217 16:24:48.009747 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9e74f73-675f-46bf-8a70-cd1101995839-internal-tls-certs\") pod \"placement-ffffff886-rsf5g\" (UID: \"f9e74f73-675f-46bf-8a70-cd1101995839\") " pod="openstack/placement-ffffff886-rsf5g" Feb 17 16:24:48 crc kubenswrapper[4874]: I0217 16:24:48.015221 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9e74f73-675f-46bf-8a70-cd1101995839-public-tls-certs\") pod \"placement-ffffff886-rsf5g\" (UID: \"f9e74f73-675f-46bf-8a70-cd1101995839\") " pod="openstack/placement-ffffff886-rsf5g" Feb 17 16:24:48 crc kubenswrapper[4874]: I0217 16:24:48.017008 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9e74f73-675f-46bf-8a70-cd1101995839-config-data\") pod \"placement-ffffff886-rsf5g\" (UID: \"f9e74f73-675f-46bf-8a70-cd1101995839\") " pod="openstack/placement-ffffff886-rsf5g" Feb 17 16:24:48 crc kubenswrapper[4874]: I0217 16:24:48.051584 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmb4t\" (UniqueName: \"kubernetes.io/projected/f9e74f73-675f-46bf-8a70-cd1101995839-kube-api-access-fmb4t\") pod \"placement-ffffff886-rsf5g\" (UID: \"f9e74f73-675f-46bf-8a70-cd1101995839\") " pod="openstack/placement-ffffff886-rsf5g" Feb 17 16:24:48 crc kubenswrapper[4874]: I0217 16:24:48.097790 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/46bb9425-0e75-4b58-b0f7-f7ad6998255b-combined-ca-bundle\") pod \"46bb9425-0e75-4b58-b0f7-f7ad6998255b\" (UID: \"46bb9425-0e75-4b58-b0f7-f7ad6998255b\") " Feb 17 16:24:48 crc kubenswrapper[4874]: I0217 16:24:48.097846 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/46bb9425-0e75-4b58-b0f7-f7ad6998255b-config\") pod \"46bb9425-0e75-4b58-b0f7-f7ad6998255b\" (UID: \"46bb9425-0e75-4b58-b0f7-f7ad6998255b\") " Feb 17 16:24:48 crc kubenswrapper[4874]: I0217 16:24:48.098131 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjzbv\" (UniqueName: \"kubernetes.io/projected/46bb9425-0e75-4b58-b0f7-f7ad6998255b-kube-api-access-rjzbv\") pod \"46bb9425-0e75-4b58-b0f7-f7ad6998255b\" (UID: \"46bb9425-0e75-4b58-b0f7-f7ad6998255b\") " Feb 17 16:24:48 crc kubenswrapper[4874]: I0217 16:24:48.126562 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46bb9425-0e75-4b58-b0f7-f7ad6998255b-kube-api-access-rjzbv" (OuterVolumeSpecName: "kube-api-access-rjzbv") pod "46bb9425-0e75-4b58-b0f7-f7ad6998255b" (UID: "46bb9425-0e75-4b58-b0f7-f7ad6998255b"). InnerVolumeSpecName "kube-api-access-rjzbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:24:48 crc kubenswrapper[4874]: I0217 16:24:48.130933 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46bb9425-0e75-4b58-b0f7-f7ad6998255b-config" (OuterVolumeSpecName: "config") pod "46bb9425-0e75-4b58-b0f7-f7ad6998255b" (UID: "46bb9425-0e75-4b58-b0f7-f7ad6998255b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:48 crc kubenswrapper[4874]: I0217 16:24:48.140054 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46bb9425-0e75-4b58-b0f7-f7ad6998255b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46bb9425-0e75-4b58-b0f7-f7ad6998255b" (UID: "46bb9425-0e75-4b58-b0f7-f7ad6998255b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:48 crc kubenswrapper[4874]: I0217 16:24:48.156239 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ffffff886-rsf5g" Feb 17 16:24:48 crc kubenswrapper[4874]: I0217 16:24:48.200515 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjzbv\" (UniqueName: \"kubernetes.io/projected/46bb9425-0e75-4b58-b0f7-f7ad6998255b-kube-api-access-rjzbv\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:48 crc kubenswrapper[4874]: I0217 16:24:48.200548 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46bb9425-0e75-4b58-b0f7-f7ad6998255b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:48 crc kubenswrapper[4874]: I0217 16:24:48.200559 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/46bb9425-0e75-4b58-b0f7-f7ad6998255b-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:48 crc kubenswrapper[4874]: I0217 16:24:48.565467 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-pfkph" event={"ID":"46bb9425-0e75-4b58-b0f7-f7ad6998255b","Type":"ContainerDied","Data":"afa98c87f1b39c478bb13b85f48adc0eb064885dac4ea7505353fe55352385cf"} Feb 17 16:24:48 crc kubenswrapper[4874]: I0217 16:24:48.565859 4874 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="afa98c87f1b39c478bb13b85f48adc0eb064885dac4ea7505353fe55352385cf" Feb 17 16:24:48 crc kubenswrapper[4874]: I0217 16:24:48.565927 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-pfkph" Feb 17 16:24:48 crc kubenswrapper[4874]: I0217 16:24:48.662104 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ffffff886-rsf5g"] Feb 17 16:24:48 crc kubenswrapper[4874]: I0217 16:24:48.806404 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-5bm8j"] Feb 17 16:24:48 crc kubenswrapper[4874]: E0217 16:24:48.806839 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46bb9425-0e75-4b58-b0f7-f7ad6998255b" containerName="neutron-db-sync" Feb 17 16:24:48 crc kubenswrapper[4874]: I0217 16:24:48.806851 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="46bb9425-0e75-4b58-b0f7-f7ad6998255b" containerName="neutron-db-sync" Feb 17 16:24:48 crc kubenswrapper[4874]: I0217 16:24:48.807189 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="46bb9425-0e75-4b58-b0f7-f7ad6998255b" containerName="neutron-db-sync" Feb 17 16:24:48 crc kubenswrapper[4874]: I0217 16:24:48.808558 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-5bm8j" Feb 17 16:24:48 crc kubenswrapper[4874]: I0217 16:24:48.827740 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-5bm8j"] Feb 17 16:24:48 crc kubenswrapper[4874]: I0217 16:24:48.896663 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5bdc5b79b4-crwsk"] Feb 17 16:24:48 crc kubenswrapper[4874]: I0217 16:24:48.898478 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5bdc5b79b4-crwsk" Feb 17 16:24:48 crc kubenswrapper[4874]: I0217 16:24:48.903420 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 17 16:24:48 crc kubenswrapper[4874]: I0217 16:24:48.903692 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-lln7c" Feb 17 16:24:48 crc kubenswrapper[4874]: I0217 16:24:48.903806 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 17 16:24:48 crc kubenswrapper[4874]: I0217 16:24:48.903909 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 17 16:24:48 crc kubenswrapper[4874]: I0217 16:24:48.919359 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/31c188f2-5f85-4364-9a94-795e11aebf64-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-5bm8j\" (UID: \"31c188f2-5f85-4364-9a94-795e11aebf64\") " pod="openstack/dnsmasq-dns-6b7b667979-5bm8j" Feb 17 16:24:48 crc kubenswrapper[4874]: I0217 16:24:48.919411 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr9mm\" (UniqueName: \"kubernetes.io/projected/31c188f2-5f85-4364-9a94-795e11aebf64-kube-api-access-vr9mm\") pod \"dnsmasq-dns-6b7b667979-5bm8j\" (UID: \"31c188f2-5f85-4364-9a94-795e11aebf64\") " pod="openstack/dnsmasq-dns-6b7b667979-5bm8j" Feb 17 16:24:48 crc kubenswrapper[4874]: I0217 16:24:48.919451 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/31c188f2-5f85-4364-9a94-795e11aebf64-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-5bm8j\" (UID: \"31c188f2-5f85-4364-9a94-795e11aebf64\") " pod="openstack/dnsmasq-dns-6b7b667979-5bm8j" Feb 17 16:24:48 crc 
kubenswrapper[4874]: I0217 16:24:48.919473 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/31c188f2-5f85-4364-9a94-795e11aebf64-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-5bm8j\" (UID: \"31c188f2-5f85-4364-9a94-795e11aebf64\") " pod="openstack/dnsmasq-dns-6b7b667979-5bm8j" Feb 17 16:24:48 crc kubenswrapper[4874]: I0217 16:24:48.919497 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31c188f2-5f85-4364-9a94-795e11aebf64-dns-svc\") pod \"dnsmasq-dns-6b7b667979-5bm8j\" (UID: \"31c188f2-5f85-4364-9a94-795e11aebf64\") " pod="openstack/dnsmasq-dns-6b7b667979-5bm8j" Feb 17 16:24:48 crc kubenswrapper[4874]: I0217 16:24:48.919551 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31c188f2-5f85-4364-9a94-795e11aebf64-config\") pod \"dnsmasq-dns-6b7b667979-5bm8j\" (UID: \"31c188f2-5f85-4364-9a94-795e11aebf64\") " pod="openstack/dnsmasq-dns-6b7b667979-5bm8j" Feb 17 16:24:48 crc kubenswrapper[4874]: I0217 16:24:48.920573 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5bdc5b79b4-crwsk"] Feb 17 16:24:49 crc kubenswrapper[4874]: I0217 16:24:49.021824 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/31c188f2-5f85-4364-9a94-795e11aebf64-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-5bm8j\" (UID: \"31c188f2-5f85-4364-9a94-795e11aebf64\") " pod="openstack/dnsmasq-dns-6b7b667979-5bm8j" Feb 17 16:24:49 crc kubenswrapper[4874]: I0217 16:24:49.021903 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/020a97a8-7c87-4098-a559-0584c148fbef-httpd-config\") pod 
\"neutron-5bdc5b79b4-crwsk\" (UID: \"020a97a8-7c87-4098-a559-0584c148fbef\") " pod="openstack/neutron-5bdc5b79b4-crwsk" Feb 17 16:24:49 crc kubenswrapper[4874]: I0217 16:24:49.021956 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31c188f2-5f85-4364-9a94-795e11aebf64-dns-svc\") pod \"dnsmasq-dns-6b7b667979-5bm8j\" (UID: \"31c188f2-5f85-4364-9a94-795e11aebf64\") " pod="openstack/dnsmasq-dns-6b7b667979-5bm8j" Feb 17 16:24:49 crc kubenswrapper[4874]: I0217 16:24:49.022229 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31c188f2-5f85-4364-9a94-795e11aebf64-config\") pod \"dnsmasq-dns-6b7b667979-5bm8j\" (UID: \"31c188f2-5f85-4364-9a94-795e11aebf64\") " pod="openstack/dnsmasq-dns-6b7b667979-5bm8j" Feb 17 16:24:49 crc kubenswrapper[4874]: I0217 16:24:49.022568 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/020a97a8-7c87-4098-a559-0584c148fbef-config\") pod \"neutron-5bdc5b79b4-crwsk\" (UID: \"020a97a8-7c87-4098-a559-0584c148fbef\") " pod="openstack/neutron-5bdc5b79b4-crwsk" Feb 17 16:24:49 crc kubenswrapper[4874]: I0217 16:24:49.022721 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvltb\" (UniqueName: \"kubernetes.io/projected/020a97a8-7c87-4098-a559-0584c148fbef-kube-api-access-lvltb\") pod \"neutron-5bdc5b79b4-crwsk\" (UID: \"020a97a8-7c87-4098-a559-0584c148fbef\") " pod="openstack/neutron-5bdc5b79b4-crwsk" Feb 17 16:24:49 crc kubenswrapper[4874]: I0217 16:24:49.022931 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/31c188f2-5f85-4364-9a94-795e11aebf64-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-5bm8j\" (UID: 
\"31c188f2-5f85-4364-9a94-795e11aebf64\") " pod="openstack/dnsmasq-dns-6b7b667979-5bm8j" Feb 17 16:24:49 crc kubenswrapper[4874]: I0217 16:24:49.022973 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr9mm\" (UniqueName: \"kubernetes.io/projected/31c188f2-5f85-4364-9a94-795e11aebf64-kube-api-access-vr9mm\") pod \"dnsmasq-dns-6b7b667979-5bm8j\" (UID: \"31c188f2-5f85-4364-9a94-795e11aebf64\") " pod="openstack/dnsmasq-dns-6b7b667979-5bm8j" Feb 17 16:24:49 crc kubenswrapper[4874]: I0217 16:24:49.023021 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/020a97a8-7c87-4098-a559-0584c148fbef-combined-ca-bundle\") pod \"neutron-5bdc5b79b4-crwsk\" (UID: \"020a97a8-7c87-4098-a559-0584c148fbef\") " pod="openstack/neutron-5bdc5b79b4-crwsk" Feb 17 16:24:49 crc kubenswrapper[4874]: I0217 16:24:49.023029 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/31c188f2-5f85-4364-9a94-795e11aebf64-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-5bm8j\" (UID: \"31c188f2-5f85-4364-9a94-795e11aebf64\") " pod="openstack/dnsmasq-dns-6b7b667979-5bm8j" Feb 17 16:24:49 crc kubenswrapper[4874]: I0217 16:24:49.023054 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/020a97a8-7c87-4098-a559-0584c148fbef-ovndb-tls-certs\") pod \"neutron-5bdc5b79b4-crwsk\" (UID: \"020a97a8-7c87-4098-a559-0584c148fbef\") " pod="openstack/neutron-5bdc5b79b4-crwsk" Feb 17 16:24:49 crc kubenswrapper[4874]: I0217 16:24:49.023152 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/31c188f2-5f85-4364-9a94-795e11aebf64-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-5bm8j\" (UID: 
\"31c188f2-5f85-4364-9a94-795e11aebf64\") " pod="openstack/dnsmasq-dns-6b7b667979-5bm8j" Feb 17 16:24:49 crc kubenswrapper[4874]: I0217 16:24:49.026748 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31c188f2-5f85-4364-9a94-795e11aebf64-config\") pod \"dnsmasq-dns-6b7b667979-5bm8j\" (UID: \"31c188f2-5f85-4364-9a94-795e11aebf64\") " pod="openstack/dnsmasq-dns-6b7b667979-5bm8j" Feb 17 16:24:49 crc kubenswrapper[4874]: I0217 16:24:49.026862 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31c188f2-5f85-4364-9a94-795e11aebf64-dns-svc\") pod \"dnsmasq-dns-6b7b667979-5bm8j\" (UID: \"31c188f2-5f85-4364-9a94-795e11aebf64\") " pod="openstack/dnsmasq-dns-6b7b667979-5bm8j" Feb 17 16:24:49 crc kubenswrapper[4874]: I0217 16:24:49.027669 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/31c188f2-5f85-4364-9a94-795e11aebf64-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-5bm8j\" (UID: \"31c188f2-5f85-4364-9a94-795e11aebf64\") " pod="openstack/dnsmasq-dns-6b7b667979-5bm8j" Feb 17 16:24:49 crc kubenswrapper[4874]: I0217 16:24:49.028310 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/31c188f2-5f85-4364-9a94-795e11aebf64-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-5bm8j\" (UID: \"31c188f2-5f85-4364-9a94-795e11aebf64\") " pod="openstack/dnsmasq-dns-6b7b667979-5bm8j" Feb 17 16:24:49 crc kubenswrapper[4874]: I0217 16:24:49.049650 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr9mm\" (UniqueName: \"kubernetes.io/projected/31c188f2-5f85-4364-9a94-795e11aebf64-kube-api-access-vr9mm\") pod \"dnsmasq-dns-6b7b667979-5bm8j\" (UID: \"31c188f2-5f85-4364-9a94-795e11aebf64\") " pod="openstack/dnsmasq-dns-6b7b667979-5bm8j" Feb 17 
16:24:49 crc kubenswrapper[4874]: I0217 16:24:49.125219 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/020a97a8-7c87-4098-a559-0584c148fbef-config\") pod \"neutron-5bdc5b79b4-crwsk\" (UID: \"020a97a8-7c87-4098-a559-0584c148fbef\") " pod="openstack/neutron-5bdc5b79b4-crwsk" Feb 17 16:24:49 crc kubenswrapper[4874]: I0217 16:24:49.125524 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvltb\" (UniqueName: \"kubernetes.io/projected/020a97a8-7c87-4098-a559-0584c148fbef-kube-api-access-lvltb\") pod \"neutron-5bdc5b79b4-crwsk\" (UID: \"020a97a8-7c87-4098-a559-0584c148fbef\") " pod="openstack/neutron-5bdc5b79b4-crwsk" Feb 17 16:24:49 crc kubenswrapper[4874]: I0217 16:24:49.125677 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/020a97a8-7c87-4098-a559-0584c148fbef-combined-ca-bundle\") pod \"neutron-5bdc5b79b4-crwsk\" (UID: \"020a97a8-7c87-4098-a559-0584c148fbef\") " pod="openstack/neutron-5bdc5b79b4-crwsk" Feb 17 16:24:49 crc kubenswrapper[4874]: I0217 16:24:49.125766 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/020a97a8-7c87-4098-a559-0584c148fbef-ovndb-tls-certs\") pod \"neutron-5bdc5b79b4-crwsk\" (UID: \"020a97a8-7c87-4098-a559-0584c148fbef\") " pod="openstack/neutron-5bdc5b79b4-crwsk" Feb 17 16:24:49 crc kubenswrapper[4874]: I0217 16:24:49.125859 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/020a97a8-7c87-4098-a559-0584c148fbef-httpd-config\") pod \"neutron-5bdc5b79b4-crwsk\" (UID: \"020a97a8-7c87-4098-a559-0584c148fbef\") " pod="openstack/neutron-5bdc5b79b4-crwsk" Feb 17 16:24:49 crc kubenswrapper[4874]: I0217 16:24:49.129204 4874 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/020a97a8-7c87-4098-a559-0584c148fbef-ovndb-tls-certs\") pod \"neutron-5bdc5b79b4-crwsk\" (UID: \"020a97a8-7c87-4098-a559-0584c148fbef\") " pod="openstack/neutron-5bdc5b79b4-crwsk" Feb 17 16:24:49 crc kubenswrapper[4874]: I0217 16:24:49.129858 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/020a97a8-7c87-4098-a559-0584c148fbef-httpd-config\") pod \"neutron-5bdc5b79b4-crwsk\" (UID: \"020a97a8-7c87-4098-a559-0584c148fbef\") " pod="openstack/neutron-5bdc5b79b4-crwsk" Feb 17 16:24:49 crc kubenswrapper[4874]: I0217 16:24:49.136102 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/020a97a8-7c87-4098-a559-0584c148fbef-config\") pod \"neutron-5bdc5b79b4-crwsk\" (UID: \"020a97a8-7c87-4098-a559-0584c148fbef\") " pod="openstack/neutron-5bdc5b79b4-crwsk" Feb 17 16:24:49 crc kubenswrapper[4874]: I0217 16:24:49.139818 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/020a97a8-7c87-4098-a559-0584c148fbef-combined-ca-bundle\") pod \"neutron-5bdc5b79b4-crwsk\" (UID: \"020a97a8-7c87-4098-a559-0584c148fbef\") " pod="openstack/neutron-5bdc5b79b4-crwsk" Feb 17 16:24:49 crc kubenswrapper[4874]: I0217 16:24:49.169824 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvltb\" (UniqueName: \"kubernetes.io/projected/020a97a8-7c87-4098-a559-0584c148fbef-kube-api-access-lvltb\") pod \"neutron-5bdc5b79b4-crwsk\" (UID: \"020a97a8-7c87-4098-a559-0584c148fbef\") " pod="openstack/neutron-5bdc5b79b4-crwsk" Feb 17 16:24:49 crc kubenswrapper[4874]: I0217 16:24:49.170625 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-5bm8j" Feb 17 16:24:49 crc kubenswrapper[4874]: I0217 16:24:49.255549 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5bdc5b79b4-crwsk" Feb 17 16:24:49 crc kubenswrapper[4874]: I0217 16:24:49.649340 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-567c8c9c6c-dn66l" event={"ID":"4a0c2f24-e449-460d-8bcd-269d5ee4994f","Type":"ContainerStarted","Data":"bb7e6c094998fc1391579cbb7215c10447677c4563f0b8121f72e2e531d53ebf"} Feb 17 16:24:49 crc kubenswrapper[4874]: I0217 16:24:49.650194 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-567c8c9c6c-dn66l" Feb 17 16:24:49 crc kubenswrapper[4874]: I0217 16:24:49.670419 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ffffff886-rsf5g" event={"ID":"f9e74f73-675f-46bf-8a70-cd1101995839","Type":"ContainerStarted","Data":"c237db763ad36d420660662ea2b0f3a5c2d63145f4352f63f0c9e65c4c0c25b5"} Feb 17 16:24:49 crc kubenswrapper[4874]: I0217 16:24:49.670499 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ffffff886-rsf5g" event={"ID":"f9e74f73-675f-46bf-8a70-cd1101995839","Type":"ContainerStarted","Data":"4678f19f34c1d78af7c36b4523ff29187cbc24c289726c17542f8a24b26946ff"} Feb 17 16:24:49 crc kubenswrapper[4874]: I0217 16:24:49.693591 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-567c8c9c6c-dn66l" podStartSLOduration=3.6935724260000002 podStartE2EDuration="3.693572426s" podCreationTimestamp="2026-02-17 16:24:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:24:49.681440759 +0000 UTC m=+1299.975829320" watchObservedRunningTime="2026-02-17 16:24:49.693572426 +0000 UTC m=+1299.987960977" Feb 17 16:24:49 crc kubenswrapper[4874]: I0217 16:24:49.966881 
4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-5bm8j"] Feb 17 16:24:49 crc kubenswrapper[4874]: W0217 16:24:49.979317 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31c188f2_5f85_4364_9a94_795e11aebf64.slice/crio-f32d6516144d43bd91a41b6c8cbdf72207be51902be314b638e32b6f6807d3f9 WatchSource:0}: Error finding container f32d6516144d43bd91a41b6c8cbdf72207be51902be314b638e32b6f6807d3f9: Status 404 returned error can't find the container with id f32d6516144d43bd91a41b6c8cbdf72207be51902be314b638e32b6f6807d3f9 Feb 17 16:24:50 crc kubenswrapper[4874]: I0217 16:24:50.314741 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5bdc5b79b4-crwsk"] Feb 17 16:24:50 crc kubenswrapper[4874]: W0217 16:24:50.334836 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod020a97a8_7c87_4098_a559_0584c148fbef.slice/crio-aad0256f564138feae9524dddeab8f95f69bb8bae48b3763e50a6ec1cfe6aabc WatchSource:0}: Error finding container aad0256f564138feae9524dddeab8f95f69bb8bae48b3763e50a6ec1cfe6aabc: Status 404 returned error can't find the container with id aad0256f564138feae9524dddeab8f95f69bb8bae48b3763e50a6ec1cfe6aabc Feb 17 16:24:50 crc kubenswrapper[4874]: I0217 16:24:50.706715 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bdc5b79b4-crwsk" event={"ID":"020a97a8-7c87-4098-a559-0584c148fbef","Type":"ContainerStarted","Data":"aad0256f564138feae9524dddeab8f95f69bb8bae48b3763e50a6ec1cfe6aabc"} Feb 17 16:24:50 crc kubenswrapper[4874]: I0217 16:24:50.725662 4874 generic.go:334] "Generic (PLEG): container finished" podID="31c188f2-5f85-4364-9a94-795e11aebf64" containerID="7e4081a86b24641e8096d7e0703fe3acb49e4e4dfb43af91b26407f251e25dea" exitCode=0 Feb 17 16:24:50 crc kubenswrapper[4874]: I0217 16:24:50.725716 4874 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-5bm8j" event={"ID":"31c188f2-5f85-4364-9a94-795e11aebf64","Type":"ContainerDied","Data":"7e4081a86b24641e8096d7e0703fe3acb49e4e4dfb43af91b26407f251e25dea"} Feb 17 16:24:50 crc kubenswrapper[4874]: I0217 16:24:50.725739 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-5bm8j" event={"ID":"31c188f2-5f85-4364-9a94-795e11aebf64","Type":"ContainerStarted","Data":"f32d6516144d43bd91a41b6c8cbdf72207be51902be314b638e32b6f6807d3f9"} Feb 17 16:24:50 crc kubenswrapper[4874]: I0217 16:24:50.745637 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ffffff886-rsf5g" event={"ID":"f9e74f73-675f-46bf-8a70-cd1101995839","Type":"ContainerStarted","Data":"f2a6facb549a54cbf193fd3c7bb846e7a23d9bfba76a92931748018266ae2be9"} Feb 17 16:24:50 crc kubenswrapper[4874]: I0217 16:24:50.745689 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-ffffff886-rsf5g" Feb 17 16:24:50 crc kubenswrapper[4874]: I0217 16:24:50.745706 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-ffffff886-rsf5g" Feb 17 16:24:50 crc kubenswrapper[4874]: I0217 16:24:50.852471 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-ffffff886-rsf5g" podStartSLOduration=3.852452232 podStartE2EDuration="3.852452232s" podCreationTimestamp="2026-02-17 16:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:24:50.84174126 +0000 UTC m=+1301.136129821" watchObservedRunningTime="2026-02-17 16:24:50.852452232 +0000 UTC m=+1301.146840793" Feb 17 16:24:51 crc kubenswrapper[4874]: I0217 16:24:51.543897 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6fccd89f8f-mbtlk"] Feb 17 16:24:51 crc kubenswrapper[4874]: I0217 16:24:51.546166 4874 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6fccd89f8f-mbtlk" Feb 17 16:24:51 crc kubenswrapper[4874]: I0217 16:24:51.553787 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 17 16:24:51 crc kubenswrapper[4874]: I0217 16:24:51.554101 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 17 16:24:51 crc kubenswrapper[4874]: I0217 16:24:51.587937 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6fccd89f8f-mbtlk"] Feb 17 16:24:51 crc kubenswrapper[4874]: I0217 16:24:51.637476 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/185d59da-e2da-4eec-b721-03f1d211281b-httpd-config\") pod \"neutron-6fccd89f8f-mbtlk\" (UID: \"185d59da-e2da-4eec-b721-03f1d211281b\") " pod="openstack/neutron-6fccd89f8f-mbtlk" Feb 17 16:24:51 crc kubenswrapper[4874]: I0217 16:24:51.637533 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5v8r\" (UniqueName: \"kubernetes.io/projected/185d59da-e2da-4eec-b721-03f1d211281b-kube-api-access-z5v8r\") pod \"neutron-6fccd89f8f-mbtlk\" (UID: \"185d59da-e2da-4eec-b721-03f1d211281b\") " pod="openstack/neutron-6fccd89f8f-mbtlk" Feb 17 16:24:51 crc kubenswrapper[4874]: I0217 16:24:51.637569 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/185d59da-e2da-4eec-b721-03f1d211281b-combined-ca-bundle\") pod \"neutron-6fccd89f8f-mbtlk\" (UID: \"185d59da-e2da-4eec-b721-03f1d211281b\") " pod="openstack/neutron-6fccd89f8f-mbtlk" Feb 17 16:24:51 crc kubenswrapper[4874]: I0217 16:24:51.637592 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/185d59da-e2da-4eec-b721-03f1d211281b-internal-tls-certs\") pod \"neutron-6fccd89f8f-mbtlk\" (UID: \"185d59da-e2da-4eec-b721-03f1d211281b\") " pod="openstack/neutron-6fccd89f8f-mbtlk" Feb 17 16:24:51 crc kubenswrapper[4874]: I0217 16:24:51.637706 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/185d59da-e2da-4eec-b721-03f1d211281b-public-tls-certs\") pod \"neutron-6fccd89f8f-mbtlk\" (UID: \"185d59da-e2da-4eec-b721-03f1d211281b\") " pod="openstack/neutron-6fccd89f8f-mbtlk" Feb 17 16:24:51 crc kubenswrapper[4874]: I0217 16:24:51.637742 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/185d59da-e2da-4eec-b721-03f1d211281b-ovndb-tls-certs\") pod \"neutron-6fccd89f8f-mbtlk\" (UID: \"185d59da-e2da-4eec-b721-03f1d211281b\") " pod="openstack/neutron-6fccd89f8f-mbtlk" Feb 17 16:24:51 crc kubenswrapper[4874]: I0217 16:24:51.637786 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/185d59da-e2da-4eec-b721-03f1d211281b-config\") pod \"neutron-6fccd89f8f-mbtlk\" (UID: \"185d59da-e2da-4eec-b721-03f1d211281b\") " pod="openstack/neutron-6fccd89f8f-mbtlk" Feb 17 16:24:51 crc kubenswrapper[4874]: I0217 16:24:51.744465 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/185d59da-e2da-4eec-b721-03f1d211281b-httpd-config\") pod \"neutron-6fccd89f8f-mbtlk\" (UID: \"185d59da-e2da-4eec-b721-03f1d211281b\") " pod="openstack/neutron-6fccd89f8f-mbtlk" Feb 17 16:24:51 crc kubenswrapper[4874]: I0217 16:24:51.744553 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5v8r\" (UniqueName: 
\"kubernetes.io/projected/185d59da-e2da-4eec-b721-03f1d211281b-kube-api-access-z5v8r\") pod \"neutron-6fccd89f8f-mbtlk\" (UID: \"185d59da-e2da-4eec-b721-03f1d211281b\") " pod="openstack/neutron-6fccd89f8f-mbtlk" Feb 17 16:24:51 crc kubenswrapper[4874]: I0217 16:24:51.744585 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/185d59da-e2da-4eec-b721-03f1d211281b-combined-ca-bundle\") pod \"neutron-6fccd89f8f-mbtlk\" (UID: \"185d59da-e2da-4eec-b721-03f1d211281b\") " pod="openstack/neutron-6fccd89f8f-mbtlk" Feb 17 16:24:51 crc kubenswrapper[4874]: I0217 16:24:51.744617 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/185d59da-e2da-4eec-b721-03f1d211281b-internal-tls-certs\") pod \"neutron-6fccd89f8f-mbtlk\" (UID: \"185d59da-e2da-4eec-b721-03f1d211281b\") " pod="openstack/neutron-6fccd89f8f-mbtlk" Feb 17 16:24:51 crc kubenswrapper[4874]: I0217 16:24:51.744736 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/185d59da-e2da-4eec-b721-03f1d211281b-public-tls-certs\") pod \"neutron-6fccd89f8f-mbtlk\" (UID: \"185d59da-e2da-4eec-b721-03f1d211281b\") " pod="openstack/neutron-6fccd89f8f-mbtlk" Feb 17 16:24:51 crc kubenswrapper[4874]: I0217 16:24:51.744807 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/185d59da-e2da-4eec-b721-03f1d211281b-ovndb-tls-certs\") pod \"neutron-6fccd89f8f-mbtlk\" (UID: \"185d59da-e2da-4eec-b721-03f1d211281b\") " pod="openstack/neutron-6fccd89f8f-mbtlk" Feb 17 16:24:51 crc kubenswrapper[4874]: I0217 16:24:51.744871 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/185d59da-e2da-4eec-b721-03f1d211281b-config\") pod 
\"neutron-6fccd89f8f-mbtlk\" (UID: \"185d59da-e2da-4eec-b721-03f1d211281b\") " pod="openstack/neutron-6fccd89f8f-mbtlk" Feb 17 16:24:51 crc kubenswrapper[4874]: I0217 16:24:51.755532 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/185d59da-e2da-4eec-b721-03f1d211281b-combined-ca-bundle\") pod \"neutron-6fccd89f8f-mbtlk\" (UID: \"185d59da-e2da-4eec-b721-03f1d211281b\") " pod="openstack/neutron-6fccd89f8f-mbtlk" Feb 17 16:24:51 crc kubenswrapper[4874]: I0217 16:24:51.766318 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/185d59da-e2da-4eec-b721-03f1d211281b-config\") pod \"neutron-6fccd89f8f-mbtlk\" (UID: \"185d59da-e2da-4eec-b721-03f1d211281b\") " pod="openstack/neutron-6fccd89f8f-mbtlk" Feb 17 16:24:51 crc kubenswrapper[4874]: I0217 16:24:51.773972 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/185d59da-e2da-4eec-b721-03f1d211281b-internal-tls-certs\") pod \"neutron-6fccd89f8f-mbtlk\" (UID: \"185d59da-e2da-4eec-b721-03f1d211281b\") " pod="openstack/neutron-6fccd89f8f-mbtlk" Feb 17 16:24:51 crc kubenswrapper[4874]: I0217 16:24:51.774788 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5v8r\" (UniqueName: \"kubernetes.io/projected/185d59da-e2da-4eec-b721-03f1d211281b-kube-api-access-z5v8r\") pod \"neutron-6fccd89f8f-mbtlk\" (UID: \"185d59da-e2da-4eec-b721-03f1d211281b\") " pod="openstack/neutron-6fccd89f8f-mbtlk" Feb 17 16:24:51 crc kubenswrapper[4874]: I0217 16:24:51.775122 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bdc5b79b4-crwsk" event={"ID":"020a97a8-7c87-4098-a559-0584c148fbef","Type":"ContainerStarted","Data":"f2e33688b30c443d773430732d2e5c8308fe165bbe45e37c812a398e3815bcbc"} Feb 17 16:24:51 crc kubenswrapper[4874]: I0217 16:24:51.775206 4874 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bdc5b79b4-crwsk" event={"ID":"020a97a8-7c87-4098-a559-0584c148fbef","Type":"ContainerStarted","Data":"881abe776c2cd1d2fab8953a0b4b3a0b79ac042390adfc97d6a55128a6da4f1f"} Feb 17 16:24:51 crc kubenswrapper[4874]: I0217 16:24:51.776011 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5bdc5b79b4-crwsk" Feb 17 16:24:51 crc kubenswrapper[4874]: I0217 16:24:51.781532 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/185d59da-e2da-4eec-b721-03f1d211281b-ovndb-tls-certs\") pod \"neutron-6fccd89f8f-mbtlk\" (UID: \"185d59da-e2da-4eec-b721-03f1d211281b\") " pod="openstack/neutron-6fccd89f8f-mbtlk" Feb 17 16:24:51 crc kubenswrapper[4874]: I0217 16:24:51.788429 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-5bm8j" event={"ID":"31c188f2-5f85-4364-9a94-795e11aebf64","Type":"ContainerStarted","Data":"1bf4deee2095849e6b02d91891b3e225bb61d2b61b93ade2472b049c0d35eaa8"} Feb 17 16:24:51 crc kubenswrapper[4874]: I0217 16:24:51.788759 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-5bm8j" Feb 17 16:24:51 crc kubenswrapper[4874]: I0217 16:24:51.792041 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/185d59da-e2da-4eec-b721-03f1d211281b-httpd-config\") pod \"neutron-6fccd89f8f-mbtlk\" (UID: \"185d59da-e2da-4eec-b721-03f1d211281b\") " pod="openstack/neutron-6fccd89f8f-mbtlk" Feb 17 16:24:51 crc kubenswrapper[4874]: I0217 16:24:51.804351 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/185d59da-e2da-4eec-b721-03f1d211281b-public-tls-certs\") pod \"neutron-6fccd89f8f-mbtlk\" (UID: \"185d59da-e2da-4eec-b721-03f1d211281b\") " 
pod="openstack/neutron-6fccd89f8f-mbtlk" Feb 17 16:24:51 crc kubenswrapper[4874]: I0217 16:24:51.819541 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5bdc5b79b4-crwsk" podStartSLOduration=3.819518272 podStartE2EDuration="3.819518272s" podCreationTimestamp="2026-02-17 16:24:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:24:51.80467834 +0000 UTC m=+1302.099066921" watchObservedRunningTime="2026-02-17 16:24:51.819518272 +0000 UTC m=+1302.113906833" Feb 17 16:24:51 crc kubenswrapper[4874]: I0217 16:24:51.859131 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-5bm8j" podStartSLOduration=3.85911467 podStartE2EDuration="3.85911467s" podCreationTimestamp="2026-02-17 16:24:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:24:51.828987414 +0000 UTC m=+1302.123375995" watchObservedRunningTime="2026-02-17 16:24:51.85911467 +0000 UTC m=+1302.153503231" Feb 17 16:24:51 crc kubenswrapper[4874]: I0217 16:24:51.882776 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6fccd89f8f-mbtlk" Feb 17 16:24:51 crc kubenswrapper[4874]: I0217 16:24:51.946852 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 17 16:24:51 crc kubenswrapper[4874]: I0217 16:24:51.947298 4874 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 16:24:52 crc kubenswrapper[4874]: I0217 16:24:52.028493 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 17 16:24:52 crc kubenswrapper[4874]: I0217 16:24:52.028598 4874 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 16:24:52 crc kubenswrapper[4874]: I0217 16:24:52.341022 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 17 16:24:52 crc kubenswrapper[4874]: I0217 16:24:52.351373 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 17 16:24:52 crc kubenswrapper[4874]: I0217 16:24:52.680051 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6fccd89f8f-mbtlk"] Feb 17 16:24:52 crc kubenswrapper[4874]: I0217 16:24:52.804262 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fccd89f8f-mbtlk" event={"ID":"185d59da-e2da-4eec-b721-03f1d211281b","Type":"ContainerStarted","Data":"740ddb921d82037e547f012e14e36a92e718db905fca9ada788eb32c19b8ac60"} Feb 17 16:24:53 crc kubenswrapper[4874]: I0217 16:24:53.825096 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fccd89f8f-mbtlk" event={"ID":"185d59da-e2da-4eec-b721-03f1d211281b","Type":"ContainerStarted","Data":"1e3a4acf7ee74f30b87477f10b5e4ad1707f0b06eb1297d87b0cb7ed46599ebc"} Feb 17 16:24:53 crc kubenswrapper[4874]: I0217 16:24:53.825622 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-6fccd89f8f-mbtlk" event={"ID":"185d59da-e2da-4eec-b721-03f1d211281b","Type":"ContainerStarted","Data":"1237a36d069eb45711d3a33225b89370c52e20625cb032005dbf610ff10dafee"} Feb 17 16:24:53 crc kubenswrapper[4874]: I0217 16:24:53.825645 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6fccd89f8f-mbtlk" Feb 17 16:24:53 crc kubenswrapper[4874]: I0217 16:24:53.829006 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-jrg8w" event={"ID":"10d748cd-cbae-4113-bfed-39c4511a879f","Type":"ContainerStarted","Data":"a51a617b00329d7632af1289ae0608922aa9ce80851c1045e700819354462d77"} Feb 17 16:24:53 crc kubenswrapper[4874]: I0217 16:24:53.849316 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6fccd89f8f-mbtlk" podStartSLOduration=2.84929896 podStartE2EDuration="2.84929896s" podCreationTimestamp="2026-02-17 16:24:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:24:53.845754073 +0000 UTC m=+1304.140142654" watchObservedRunningTime="2026-02-17 16:24:53.84929896 +0000 UTC m=+1304.143687521" Feb 17 16:24:53 crc kubenswrapper[4874]: I0217 16:24:53.868427 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-jrg8w" podStartSLOduration=2.969068972 podStartE2EDuration="44.868406436s" podCreationTimestamp="2026-02-17 16:24:09 +0000 UTC" firstStartedPulling="2026-02-17 16:24:10.882827601 +0000 UTC m=+1261.177216162" lastFinishedPulling="2026-02-17 16:24:52.782165065 +0000 UTC m=+1303.076553626" observedRunningTime="2026-02-17 16:24:53.859117739 +0000 UTC m=+1304.153506310" watchObservedRunningTime="2026-02-17 16:24:53.868406436 +0000 UTC m=+1304.162794997" Feb 17 16:24:57 crc kubenswrapper[4874]: I0217 16:24:57.874247 4874 generic.go:334] "Generic (PLEG): container finished" 
podID="a4a96348-a1c6-4470-ad3a-d87cc20c8d3c" containerID="77cdcf6bdc0227dfe7b19a34bfd72fddf68434979061a126395ac7d9c23d3534" exitCode=0 Feb 17 16:24:57 crc kubenswrapper[4874]: I0217 16:24:57.874322 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dhtc8" event={"ID":"a4a96348-a1c6-4470-ad3a-d87cc20c8d3c","Type":"ContainerDied","Data":"77cdcf6bdc0227dfe7b19a34bfd72fddf68434979061a126395ac7d9c23d3534"} Feb 17 16:24:59 crc kubenswrapper[4874]: I0217 16:24:59.492281 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7b667979-5bm8j" Feb 17 16:24:59 crc kubenswrapper[4874]: I0217 16:24:59.618643 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-wdr7t"] Feb 17 16:24:59 crc kubenswrapper[4874]: I0217 16:24:59.618908 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-wdr7t" podUID="865b8bd3-b179-4e75-a32e-0df273eac5e4" containerName="dnsmasq-dns" containerID="cri-o://cbcb82ad49ece214cc4907d734212a1feed19f20ea36e6626aa160a259b2aaaa" gracePeriod=10 Feb 17 16:24:59 crc kubenswrapper[4874]: I0217 16:24:59.905698 4874 generic.go:334] "Generic (PLEG): container finished" podID="865b8bd3-b179-4e75-a32e-0df273eac5e4" containerID="cbcb82ad49ece214cc4907d734212a1feed19f20ea36e6626aa160a259b2aaaa" exitCode=0 Feb 17 16:24:59 crc kubenswrapper[4874]: I0217 16:24:59.905788 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-wdr7t" event={"ID":"865b8bd3-b179-4e75-a32e-0df273eac5e4","Type":"ContainerDied","Data":"cbcb82ad49ece214cc4907d734212a1feed19f20ea36e6626aa160a259b2aaaa"} Feb 17 16:24:59 crc kubenswrapper[4874]: I0217 16:24:59.911056 4874 generic.go:334] "Generic (PLEG): container finished" podID="96118c9a-6b15-48a8-b6d9-a2146dc0182c" containerID="77ef09ba26fdd2e92436f06fe8cd8993b60b4e40e13de49726732fd41ac660e4" exitCode=0 Feb 17 16:24:59 crc 
kubenswrapper[4874]: I0217 16:24:59.911130 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-k5j4f" event={"ID":"96118c9a-6b15-48a8-b6d9-a2146dc0182c","Type":"ContainerDied","Data":"77ef09ba26fdd2e92436f06fe8cd8993b60b4e40e13de49726732fd41ac660e4"} Feb 17 16:24:59 crc kubenswrapper[4874]: I0217 16:24:59.961385 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56df8fb6b7-wdr7t" podUID="865b8bd3-b179-4e75-a32e-0df273eac5e4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.186:5353: connect: connection refused" Feb 17 16:25:01 crc kubenswrapper[4874]: I0217 16:25:01.937601 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-dhtc8" Feb 17 16:25:01 crc kubenswrapper[4874]: I0217 16:25:01.947463 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dhtc8" event={"ID":"a4a96348-a1c6-4470-ad3a-d87cc20c8d3c","Type":"ContainerDied","Data":"25ee14ca0ad2658e6ae68fb44e0a6588b6d8863ee131f3bea69c9a6f7774365a"} Feb 17 16:25:01 crc kubenswrapper[4874]: I0217 16:25:01.947501 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25ee14ca0ad2658e6ae68fb44e0a6588b6d8863ee131f3bea69c9a6f7774365a" Feb 17 16:25:01 crc kubenswrapper[4874]: I0217 16:25:01.947556 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-dhtc8" Feb 17 16:25:02 crc kubenswrapper[4874]: I0217 16:25:02.044271 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzzsz\" (UniqueName: \"kubernetes.io/projected/a4a96348-a1c6-4470-ad3a-d87cc20c8d3c-kube-api-access-nzzsz\") pod \"a4a96348-a1c6-4470-ad3a-d87cc20c8d3c\" (UID: \"a4a96348-a1c6-4470-ad3a-d87cc20c8d3c\") " Feb 17 16:25:02 crc kubenswrapper[4874]: I0217 16:25:02.044537 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a4a96348-a1c6-4470-ad3a-d87cc20c8d3c-db-sync-config-data\") pod \"a4a96348-a1c6-4470-ad3a-d87cc20c8d3c\" (UID: \"a4a96348-a1c6-4470-ad3a-d87cc20c8d3c\") " Feb 17 16:25:02 crc kubenswrapper[4874]: I0217 16:25:02.044659 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4a96348-a1c6-4470-ad3a-d87cc20c8d3c-combined-ca-bundle\") pod \"a4a96348-a1c6-4470-ad3a-d87cc20c8d3c\" (UID: \"a4a96348-a1c6-4470-ad3a-d87cc20c8d3c\") " Feb 17 16:25:02 crc kubenswrapper[4874]: I0217 16:25:02.053487 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4a96348-a1c6-4470-ad3a-d87cc20c8d3c-kube-api-access-nzzsz" (OuterVolumeSpecName: "kube-api-access-nzzsz") pod "a4a96348-a1c6-4470-ad3a-d87cc20c8d3c" (UID: "a4a96348-a1c6-4470-ad3a-d87cc20c8d3c"). InnerVolumeSpecName "kube-api-access-nzzsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:25:02 crc kubenswrapper[4874]: I0217 16:25:02.053822 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4a96348-a1c6-4470-ad3a-d87cc20c8d3c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a4a96348-a1c6-4470-ad3a-d87cc20c8d3c" (UID: "a4a96348-a1c6-4470-ad3a-d87cc20c8d3c"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:02 crc kubenswrapper[4874]: I0217 16:25:02.103456 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4a96348-a1c6-4470-ad3a-d87cc20c8d3c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4a96348-a1c6-4470-ad3a-d87cc20c8d3c" (UID: "a4a96348-a1c6-4470-ad3a-d87cc20c8d3c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:02 crc kubenswrapper[4874]: I0217 16:25:02.146911 4874 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a4a96348-a1c6-4470-ad3a-d87cc20c8d3c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:02 crc kubenswrapper[4874]: I0217 16:25:02.146942 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4a96348-a1c6-4470-ad3a-d87cc20c8d3c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:02 crc kubenswrapper[4874]: I0217 16:25:02.146951 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzzsz\" (UniqueName: \"kubernetes.io/projected/a4a96348-a1c6-4470-ad3a-d87cc20c8d3c-kube-api-access-nzzsz\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:02 crc kubenswrapper[4874]: I0217 16:25:02.623434 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-k5j4f" Feb 17 16:25:02 crc kubenswrapper[4874]: I0217 16:25:02.629808 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-wdr7t" Feb 17 16:25:02 crc kubenswrapper[4874]: I0217 16:25:02.765751 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96118c9a-6b15-48a8-b6d9-a2146dc0182c-combined-ca-bundle\") pod \"96118c9a-6b15-48a8-b6d9-a2146dc0182c\" (UID: \"96118c9a-6b15-48a8-b6d9-a2146dc0182c\") " Feb 17 16:25:02 crc kubenswrapper[4874]: I0217 16:25:02.766165 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/865b8bd3-b179-4e75-a32e-0df273eac5e4-dns-svc\") pod \"865b8bd3-b179-4e75-a32e-0df273eac5e4\" (UID: \"865b8bd3-b179-4e75-a32e-0df273eac5e4\") " Feb 17 16:25:02 crc kubenswrapper[4874]: I0217 16:25:02.766292 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96118c9a-6b15-48a8-b6d9-a2146dc0182c-config-data\") pod \"96118c9a-6b15-48a8-b6d9-a2146dc0182c\" (UID: \"96118c9a-6b15-48a8-b6d9-a2146dc0182c\") " Feb 17 16:25:02 crc kubenswrapper[4874]: I0217 16:25:02.766375 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/865b8bd3-b179-4e75-a32e-0df273eac5e4-config\") pod \"865b8bd3-b179-4e75-a32e-0df273eac5e4\" (UID: \"865b8bd3-b179-4e75-a32e-0df273eac5e4\") " Feb 17 16:25:02 crc kubenswrapper[4874]: I0217 16:25:02.766435 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/865b8bd3-b179-4e75-a32e-0df273eac5e4-dns-swift-storage-0\") pod \"865b8bd3-b179-4e75-a32e-0df273eac5e4\" (UID: \"865b8bd3-b179-4e75-a32e-0df273eac5e4\") " Feb 17 16:25:02 crc kubenswrapper[4874]: I0217 16:25:02.766473 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/865b8bd3-b179-4e75-a32e-0df273eac5e4-ovsdbserver-sb\") pod \"865b8bd3-b179-4e75-a32e-0df273eac5e4\" (UID: \"865b8bd3-b179-4e75-a32e-0df273eac5e4\") " Feb 17 16:25:02 crc kubenswrapper[4874]: I0217 16:25:02.766518 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94p6b\" (UniqueName: \"kubernetes.io/projected/865b8bd3-b179-4e75-a32e-0df273eac5e4-kube-api-access-94p6b\") pod \"865b8bd3-b179-4e75-a32e-0df273eac5e4\" (UID: \"865b8bd3-b179-4e75-a32e-0df273eac5e4\") " Feb 17 16:25:02 crc kubenswrapper[4874]: I0217 16:25:02.766548 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9l9c\" (UniqueName: \"kubernetes.io/projected/96118c9a-6b15-48a8-b6d9-a2146dc0182c-kube-api-access-f9l9c\") pod \"96118c9a-6b15-48a8-b6d9-a2146dc0182c\" (UID: \"96118c9a-6b15-48a8-b6d9-a2146dc0182c\") " Feb 17 16:25:02 crc kubenswrapper[4874]: I0217 16:25:02.766655 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/865b8bd3-b179-4e75-a32e-0df273eac5e4-ovsdbserver-nb\") pod \"865b8bd3-b179-4e75-a32e-0df273eac5e4\" (UID: \"865b8bd3-b179-4e75-a32e-0df273eac5e4\") " Feb 17 16:25:02 crc kubenswrapper[4874]: I0217 16:25:02.770234 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96118c9a-6b15-48a8-b6d9-a2146dc0182c-kube-api-access-f9l9c" (OuterVolumeSpecName: "kube-api-access-f9l9c") pod "96118c9a-6b15-48a8-b6d9-a2146dc0182c" (UID: "96118c9a-6b15-48a8-b6d9-a2146dc0182c"). InnerVolumeSpecName "kube-api-access-f9l9c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:25:02 crc kubenswrapper[4874]: I0217 16:25:02.875325 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9l9c\" (UniqueName: \"kubernetes.io/projected/96118c9a-6b15-48a8-b6d9-a2146dc0182c-kube-api-access-f9l9c\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:02 crc kubenswrapper[4874]: I0217 16:25:02.949840 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96118c9a-6b15-48a8-b6d9-a2146dc0182c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96118c9a-6b15-48a8-b6d9-a2146dc0182c" (UID: "96118c9a-6b15-48a8-b6d9-a2146dc0182c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:02 crc kubenswrapper[4874]: I0217 16:25:02.950539 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96118c9a-6b15-48a8-b6d9-a2146dc0182c-config-data" (OuterVolumeSpecName: "config-data") pod "96118c9a-6b15-48a8-b6d9-a2146dc0182c" (UID: "96118c9a-6b15-48a8-b6d9-a2146dc0182c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:02 crc kubenswrapper[4874]: I0217 16:25:02.956656 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/865b8bd3-b179-4e75-a32e-0df273eac5e4-kube-api-access-94p6b" (OuterVolumeSpecName: "kube-api-access-94p6b") pod "865b8bd3-b179-4e75-a32e-0df273eac5e4" (UID: "865b8bd3-b179-4e75-a32e-0df273eac5e4"). InnerVolumeSpecName "kube-api-access-94p6b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:25:02 crc kubenswrapper[4874]: I0217 16:25:02.977146 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94p6b\" (UniqueName: \"kubernetes.io/projected/865b8bd3-b179-4e75-a32e-0df273eac5e4-kube-api-access-94p6b\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:02 crc kubenswrapper[4874]: I0217 16:25:02.977174 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96118c9a-6b15-48a8-b6d9-a2146dc0182c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:02 crc kubenswrapper[4874]: I0217 16:25:02.977184 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96118c9a-6b15-48a8-b6d9-a2146dc0182c-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:02 crc kubenswrapper[4874]: I0217 16:25:02.979866 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/865b8bd3-b179-4e75-a32e-0df273eac5e4-config" (OuterVolumeSpecName: "config") pod "865b8bd3-b179-4e75-a32e-0df273eac5e4" (UID: "865b8bd3-b179-4e75-a32e-0df273eac5e4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:25:02 crc kubenswrapper[4874]: I0217 16:25:02.982316 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-wdr7t" event={"ID":"865b8bd3-b179-4e75-a32e-0df273eac5e4","Type":"ContainerDied","Data":"023a1a4271634851f7dbf60447f7f4e36eec05b0b76bf49a588778c5c7b476e6"} Feb 17 16:25:02 crc kubenswrapper[4874]: I0217 16:25:02.982362 4874 scope.go:117] "RemoveContainer" containerID="cbcb82ad49ece214cc4907d734212a1feed19f20ea36e6626aa160a259b2aaaa" Feb 17 16:25:02 crc kubenswrapper[4874]: I0217 16:25:02.982467 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-wdr7t" Feb 17 16:25:02 crc kubenswrapper[4874]: I0217 16:25:02.986812 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-k5j4f" event={"ID":"96118c9a-6b15-48a8-b6d9-a2146dc0182c","Type":"ContainerDied","Data":"ddeb04687c203dbcf79ccda521dad1aa8f0eb575f81f570524eea4060b0e273e"} Feb 17 16:25:02 crc kubenswrapper[4874]: I0217 16:25:02.986854 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddeb04687c203dbcf79ccda521dad1aa8f0eb575f81f570524eea4060b0e273e" Feb 17 16:25:02 crc kubenswrapper[4874]: I0217 16:25:02.986973 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-k5j4f" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.001198 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/865b8bd3-b179-4e75-a32e-0df273eac5e4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "865b8bd3-b179-4e75-a32e-0df273eac5e4" (UID: "865b8bd3-b179-4e75-a32e-0df273eac5e4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.001209 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/865b8bd3-b179-4e75-a32e-0df273eac5e4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "865b8bd3-b179-4e75-a32e-0df273eac5e4" (UID: "865b8bd3-b179-4e75-a32e-0df273eac5e4"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.011039 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/865b8bd3-b179-4e75-a32e-0df273eac5e4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "865b8bd3-b179-4e75-a32e-0df273eac5e4" (UID: "865b8bd3-b179-4e75-a32e-0df273eac5e4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.076131 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/865b8bd3-b179-4e75-a32e-0df273eac5e4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "865b8bd3-b179-4e75-a32e-0df273eac5e4" (UID: "865b8bd3-b179-4e75-a32e-0df273eac5e4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.079160 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/865b8bd3-b179-4e75-a32e-0df273eac5e4-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.079183 4874 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/865b8bd3-b179-4e75-a32e-0df273eac5e4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.079196 4874 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/865b8bd3-b179-4e75-a32e-0df273eac5e4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.079205 4874 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/865b8bd3-b179-4e75-a32e-0df273eac5e4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" 
Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.079214 4874 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/865b8bd3-b179-4e75-a32e-0df273eac5e4-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.110407 4874 scope.go:117] "RemoveContainer" containerID="3d2c921ef76eeb6aa8b1054c759317b6353ff4b478bcb67ea7fc8aa591228e23" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.205636 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-956d89d4-jvtqm"] Feb 17 16:25:03 crc kubenswrapper[4874]: E0217 16:25:03.206132 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="865b8bd3-b179-4e75-a32e-0df273eac5e4" containerName="dnsmasq-dns" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.206148 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="865b8bd3-b179-4e75-a32e-0df273eac5e4" containerName="dnsmasq-dns" Feb 17 16:25:03 crc kubenswrapper[4874]: E0217 16:25:03.206165 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4a96348-a1c6-4470-ad3a-d87cc20c8d3c" containerName="barbican-db-sync" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.206193 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4a96348-a1c6-4470-ad3a-d87cc20c8d3c" containerName="barbican-db-sync" Feb 17 16:25:03 crc kubenswrapper[4874]: E0217 16:25:03.206206 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96118c9a-6b15-48a8-b6d9-a2146dc0182c" containerName="heat-db-sync" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.206213 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="96118c9a-6b15-48a8-b6d9-a2146dc0182c" containerName="heat-db-sync" Feb 17 16:25:03 crc kubenswrapper[4874]: E0217 16:25:03.206238 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="865b8bd3-b179-4e75-a32e-0df273eac5e4" containerName="init" Feb 17 16:25:03 crc 
kubenswrapper[4874]: I0217 16:25:03.206244 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="865b8bd3-b179-4e75-a32e-0df273eac5e4" containerName="init" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.206464 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="865b8bd3-b179-4e75-a32e-0df273eac5e4" containerName="dnsmasq-dns" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.206474 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4a96348-a1c6-4470-ad3a-d87cc20c8d3c" containerName="barbican-db-sync" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.206488 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="96118c9a-6b15-48a8-b6d9-a2146dc0182c" containerName="heat-db-sync" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.207614 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-956d89d4-jvtqm" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.211597 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.211763 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.213990 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-nnqww" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.221220 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-c7cb8b4bf-4w9ct"] Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.226659 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-c7cb8b4bf-4w9ct" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.230838 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-956d89d4-jvtqm"] Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.231094 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.267203 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-c7cb8b4bf-4w9ct"] Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.374994 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-pdzxn"] Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.377438 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-pdzxn" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.384814 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bf086f0-8328-440d-b607-66c3db544871-config-data\") pod \"barbican-keystone-listener-956d89d4-jvtqm\" (UID: \"9bf086f0-8328-440d-b607-66c3db544871\") " pod="openstack/barbican-keystone-listener-956d89d4-jvtqm" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.384884 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2gzs\" (UniqueName: \"kubernetes.io/projected/9bf086f0-8328-440d-b607-66c3db544871-kube-api-access-t2gzs\") pod \"barbican-keystone-listener-956d89d4-jvtqm\" (UID: \"9bf086f0-8328-440d-b607-66c3db544871\") " pod="openstack/barbican-keystone-listener-956d89d4-jvtqm" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.384956 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/9bf086f0-8328-440d-b607-66c3db544871-combined-ca-bundle\") pod \"barbican-keystone-listener-956d89d4-jvtqm\" (UID: \"9bf086f0-8328-440d-b607-66c3db544871\") " pod="openstack/barbican-keystone-listener-956d89d4-jvtqm" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.384991 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/955ecefb-40d6-42e2-acd6-133f1ecf251d-config-data-custom\") pod \"barbican-worker-c7cb8b4bf-4w9ct\" (UID: \"955ecefb-40d6-42e2-acd6-133f1ecf251d\") " pod="openstack/barbican-worker-c7cb8b4bf-4w9ct" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.385046 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9bf086f0-8328-440d-b607-66c3db544871-config-data-custom\") pod \"barbican-keystone-listener-956d89d4-jvtqm\" (UID: \"9bf086f0-8328-440d-b607-66c3db544871\") " pod="openstack/barbican-keystone-listener-956d89d4-jvtqm" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.385117 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5jxz\" (UniqueName: \"kubernetes.io/projected/955ecefb-40d6-42e2-acd6-133f1ecf251d-kube-api-access-z5jxz\") pod \"barbican-worker-c7cb8b4bf-4w9ct\" (UID: \"955ecefb-40d6-42e2-acd6-133f1ecf251d\") " pod="openstack/barbican-worker-c7cb8b4bf-4w9ct" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.385190 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/955ecefb-40d6-42e2-acd6-133f1ecf251d-combined-ca-bundle\") pod \"barbican-worker-c7cb8b4bf-4w9ct\" (UID: \"955ecefb-40d6-42e2-acd6-133f1ecf251d\") " pod="openstack/barbican-worker-c7cb8b4bf-4w9ct" Feb 17 16:25:03 crc kubenswrapper[4874]: 
I0217 16:25:03.385238 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/955ecefb-40d6-42e2-acd6-133f1ecf251d-config-data\") pod \"barbican-worker-c7cb8b4bf-4w9ct\" (UID: \"955ecefb-40d6-42e2-acd6-133f1ecf251d\") " pod="openstack/barbican-worker-c7cb8b4bf-4w9ct" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.385295 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bf086f0-8328-440d-b607-66c3db544871-logs\") pod \"barbican-keystone-listener-956d89d4-jvtqm\" (UID: \"9bf086f0-8328-440d-b607-66c3db544871\") " pod="openstack/barbican-keystone-listener-956d89d4-jvtqm" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.385394 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/955ecefb-40d6-42e2-acd6-133f1ecf251d-logs\") pod \"barbican-worker-c7cb8b4bf-4w9ct\" (UID: \"955ecefb-40d6-42e2-acd6-133f1ecf251d\") " pod="openstack/barbican-worker-c7cb8b4bf-4w9ct" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.402186 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-wdr7t"] Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.414212 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-wdr7t"] Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.423237 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-pdzxn"] Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.487192 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/955ecefb-40d6-42e2-acd6-133f1ecf251d-config-data\") pod \"barbican-worker-c7cb8b4bf-4w9ct\" (UID: \"955ecefb-40d6-42e2-acd6-133f1ecf251d\") 
" pod="openstack/barbican-worker-c7cb8b4bf-4w9ct" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.487296 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5d82637-3df7-4e39-bb29-e2fdcf4a7819-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-pdzxn\" (UID: \"c5d82637-3df7-4e39-bb29-e2fdcf4a7819\") " pod="openstack/dnsmasq-dns-848cf88cfc-pdzxn" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.487317 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrx9n\" (UniqueName: \"kubernetes.io/projected/c5d82637-3df7-4e39-bb29-e2fdcf4a7819-kube-api-access-lrx9n\") pod \"dnsmasq-dns-848cf88cfc-pdzxn\" (UID: \"c5d82637-3df7-4e39-bb29-e2fdcf4a7819\") " pod="openstack/dnsmasq-dns-848cf88cfc-pdzxn" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.487365 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bf086f0-8328-440d-b607-66c3db544871-logs\") pod \"barbican-keystone-listener-956d89d4-jvtqm\" (UID: \"9bf086f0-8328-440d-b607-66c3db544871\") " pod="openstack/barbican-keystone-listener-956d89d4-jvtqm" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.487407 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5d82637-3df7-4e39-bb29-e2fdcf4a7819-config\") pod \"dnsmasq-dns-848cf88cfc-pdzxn\" (UID: \"c5d82637-3df7-4e39-bb29-e2fdcf4a7819\") " pod="openstack/dnsmasq-dns-848cf88cfc-pdzxn" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.487430 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/955ecefb-40d6-42e2-acd6-133f1ecf251d-logs\") pod \"barbican-worker-c7cb8b4bf-4w9ct\" (UID: 
\"955ecefb-40d6-42e2-acd6-133f1ecf251d\") " pod="openstack/barbican-worker-c7cb8b4bf-4w9ct" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.487458 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5d82637-3df7-4e39-bb29-e2fdcf4a7819-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-pdzxn\" (UID: \"c5d82637-3df7-4e39-bb29-e2fdcf4a7819\") " pod="openstack/dnsmasq-dns-848cf88cfc-pdzxn" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.487530 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bf086f0-8328-440d-b607-66c3db544871-config-data\") pod \"barbican-keystone-listener-956d89d4-jvtqm\" (UID: \"9bf086f0-8328-440d-b607-66c3db544871\") " pod="openstack/barbican-keystone-listener-956d89d4-jvtqm" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.487547 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5d82637-3df7-4e39-bb29-e2fdcf4a7819-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-pdzxn\" (UID: \"c5d82637-3df7-4e39-bb29-e2fdcf4a7819\") " pod="openstack/dnsmasq-dns-848cf88cfc-pdzxn" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.487571 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2gzs\" (UniqueName: \"kubernetes.io/projected/9bf086f0-8328-440d-b607-66c3db544871-kube-api-access-t2gzs\") pod \"barbican-keystone-listener-956d89d4-jvtqm\" (UID: \"9bf086f0-8328-440d-b607-66c3db544871\") " pod="openstack/barbican-keystone-listener-956d89d4-jvtqm" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.487618 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9bf086f0-8328-440d-b607-66c3db544871-combined-ca-bundle\") pod \"barbican-keystone-listener-956d89d4-jvtqm\" (UID: \"9bf086f0-8328-440d-b607-66c3db544871\") " pod="openstack/barbican-keystone-listener-956d89d4-jvtqm" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.487657 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/955ecefb-40d6-42e2-acd6-133f1ecf251d-config-data-custom\") pod \"barbican-worker-c7cb8b4bf-4w9ct\" (UID: \"955ecefb-40d6-42e2-acd6-133f1ecf251d\") " pod="openstack/barbican-worker-c7cb8b4bf-4w9ct" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.487709 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9bf086f0-8328-440d-b607-66c3db544871-config-data-custom\") pod \"barbican-keystone-listener-956d89d4-jvtqm\" (UID: \"9bf086f0-8328-440d-b607-66c3db544871\") " pod="openstack/barbican-keystone-listener-956d89d4-jvtqm" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.487780 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5jxz\" (UniqueName: \"kubernetes.io/projected/955ecefb-40d6-42e2-acd6-133f1ecf251d-kube-api-access-z5jxz\") pod \"barbican-worker-c7cb8b4bf-4w9ct\" (UID: \"955ecefb-40d6-42e2-acd6-133f1ecf251d\") " pod="openstack/barbican-worker-c7cb8b4bf-4w9ct" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.487864 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5d82637-3df7-4e39-bb29-e2fdcf4a7819-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-pdzxn\" (UID: \"c5d82637-3df7-4e39-bb29-e2fdcf4a7819\") " pod="openstack/dnsmasq-dns-848cf88cfc-pdzxn" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.487893 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/955ecefb-40d6-42e2-acd6-133f1ecf251d-combined-ca-bundle\") pod \"barbican-worker-c7cb8b4bf-4w9ct\" (UID: \"955ecefb-40d6-42e2-acd6-133f1ecf251d\") " pod="openstack/barbican-worker-c7cb8b4bf-4w9ct" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.496236 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5d8dc74b58-r76lq"] Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.496984 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bf086f0-8328-440d-b607-66c3db544871-logs\") pod \"barbican-keystone-listener-956d89d4-jvtqm\" (UID: \"9bf086f0-8328-440d-b607-66c3db544871\") " pod="openstack/barbican-keystone-listener-956d89d4-jvtqm" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.497135 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/955ecefb-40d6-42e2-acd6-133f1ecf251d-logs\") pod \"barbican-worker-c7cb8b4bf-4w9ct\" (UID: \"955ecefb-40d6-42e2-acd6-133f1ecf251d\") " pod="openstack/barbican-worker-c7cb8b4bf-4w9ct" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.497562 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/955ecefb-40d6-42e2-acd6-133f1ecf251d-config-data\") pod \"barbican-worker-c7cb8b4bf-4w9ct\" (UID: \"955ecefb-40d6-42e2-acd6-133f1ecf251d\") " pod="openstack/barbican-worker-c7cb8b4bf-4w9ct" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.498464 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bf086f0-8328-440d-b607-66c3db544871-config-data\") pod \"barbican-keystone-listener-956d89d4-jvtqm\" (UID: \"9bf086f0-8328-440d-b607-66c3db544871\") " pod="openstack/barbican-keystone-listener-956d89d4-jvtqm" Feb 17 16:25:03 crc kubenswrapper[4874]: 
I0217 16:25:03.499137 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5d8dc74b58-r76lq" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.500348 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9bf086f0-8328-440d-b607-66c3db544871-config-data-custom\") pod \"barbican-keystone-listener-956d89d4-jvtqm\" (UID: \"9bf086f0-8328-440d-b607-66c3db544871\") " pod="openstack/barbican-keystone-listener-956d89d4-jvtqm" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.505734 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/955ecefb-40d6-42e2-acd6-133f1ecf251d-combined-ca-bundle\") pod \"barbican-worker-c7cb8b4bf-4w9ct\" (UID: \"955ecefb-40d6-42e2-acd6-133f1ecf251d\") " pod="openstack/barbican-worker-c7cb8b4bf-4w9ct" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.509481 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.520303 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/955ecefb-40d6-42e2-acd6-133f1ecf251d-config-data-custom\") pod \"barbican-worker-c7cb8b4bf-4w9ct\" (UID: \"955ecefb-40d6-42e2-acd6-133f1ecf251d\") " pod="openstack/barbican-worker-c7cb8b4bf-4w9ct" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.524131 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2gzs\" (UniqueName: \"kubernetes.io/projected/9bf086f0-8328-440d-b607-66c3db544871-kube-api-access-t2gzs\") pod \"barbican-keystone-listener-956d89d4-jvtqm\" (UID: \"9bf086f0-8328-440d-b607-66c3db544871\") " pod="openstack/barbican-keystone-listener-956d89d4-jvtqm" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.525876 4874 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bf086f0-8328-440d-b607-66c3db544871-combined-ca-bundle\") pod \"barbican-keystone-listener-956d89d4-jvtqm\" (UID: \"9bf086f0-8328-440d-b607-66c3db544871\") " pod="openstack/barbican-keystone-listener-956d89d4-jvtqm" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.537925 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5d8dc74b58-r76lq"] Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.544425 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5jxz\" (UniqueName: \"kubernetes.io/projected/955ecefb-40d6-42e2-acd6-133f1ecf251d-kube-api-access-z5jxz\") pod \"barbican-worker-c7cb8b4bf-4w9ct\" (UID: \"955ecefb-40d6-42e2-acd6-133f1ecf251d\") " pod="openstack/barbican-worker-c7cb8b4bf-4w9ct" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.550746 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-956d89d4-jvtqm" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.571725 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-c7cb8b4bf-4w9ct" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.591228 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5d82637-3df7-4e39-bb29-e2fdcf4a7819-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-pdzxn\" (UID: \"c5d82637-3df7-4e39-bb29-e2fdcf4a7819\") " pod="openstack/dnsmasq-dns-848cf88cfc-pdzxn" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.591264 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrx9n\" (UniqueName: \"kubernetes.io/projected/c5d82637-3df7-4e39-bb29-e2fdcf4a7819-kube-api-access-lrx9n\") pod \"dnsmasq-dns-848cf88cfc-pdzxn\" (UID: \"c5d82637-3df7-4e39-bb29-e2fdcf4a7819\") " pod="openstack/dnsmasq-dns-848cf88cfc-pdzxn" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.591315 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5d82637-3df7-4e39-bb29-e2fdcf4a7819-config\") pod \"dnsmasq-dns-848cf88cfc-pdzxn\" (UID: \"c5d82637-3df7-4e39-bb29-e2fdcf4a7819\") " pod="openstack/dnsmasq-dns-848cf88cfc-pdzxn" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.591363 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5d82637-3df7-4e39-bb29-e2fdcf4a7819-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-pdzxn\" (UID: \"c5d82637-3df7-4e39-bb29-e2fdcf4a7819\") " pod="openstack/dnsmasq-dns-848cf88cfc-pdzxn" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.591386 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04f7d26a-21c8-4a81-ac37-3c11e3d24ece-config-data-custom\") pod \"barbican-api-5d8dc74b58-r76lq\" (UID: 
\"04f7d26a-21c8-4a81-ac37-3c11e3d24ece\") " pod="openstack/barbican-api-5d8dc74b58-r76lq" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.591420 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04f7d26a-21c8-4a81-ac37-3c11e3d24ece-combined-ca-bundle\") pod \"barbican-api-5d8dc74b58-r76lq\" (UID: \"04f7d26a-21c8-4a81-ac37-3c11e3d24ece\") " pod="openstack/barbican-api-5d8dc74b58-r76lq" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.591446 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5d82637-3df7-4e39-bb29-e2fdcf4a7819-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-pdzxn\" (UID: \"c5d82637-3df7-4e39-bb29-e2fdcf4a7819\") " pod="openstack/dnsmasq-dns-848cf88cfc-pdzxn" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.591568 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd4jp\" (UniqueName: \"kubernetes.io/projected/04f7d26a-21c8-4a81-ac37-3c11e3d24ece-kube-api-access-fd4jp\") pod \"barbican-api-5d8dc74b58-r76lq\" (UID: \"04f7d26a-21c8-4a81-ac37-3c11e3d24ece\") " pod="openstack/barbican-api-5d8dc74b58-r76lq" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.591593 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04f7d26a-21c8-4a81-ac37-3c11e3d24ece-config-data\") pod \"barbican-api-5d8dc74b58-r76lq\" (UID: \"04f7d26a-21c8-4a81-ac37-3c11e3d24ece\") " pod="openstack/barbican-api-5d8dc74b58-r76lq" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.591657 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5d82637-3df7-4e39-bb29-e2fdcf4a7819-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-pdzxn\" 
(UID: \"c5d82637-3df7-4e39-bb29-e2fdcf4a7819\") " pod="openstack/dnsmasq-dns-848cf88cfc-pdzxn" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.591692 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04f7d26a-21c8-4a81-ac37-3c11e3d24ece-logs\") pod \"barbican-api-5d8dc74b58-r76lq\" (UID: \"04f7d26a-21c8-4a81-ac37-3c11e3d24ece\") " pod="openstack/barbican-api-5d8dc74b58-r76lq" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.592547 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5d82637-3df7-4e39-bb29-e2fdcf4a7819-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-pdzxn\" (UID: \"c5d82637-3df7-4e39-bb29-e2fdcf4a7819\") " pod="openstack/dnsmasq-dns-848cf88cfc-pdzxn" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.593912 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5d82637-3df7-4e39-bb29-e2fdcf4a7819-config\") pod \"dnsmasq-dns-848cf88cfc-pdzxn\" (UID: \"c5d82637-3df7-4e39-bb29-e2fdcf4a7819\") " pod="openstack/dnsmasq-dns-848cf88cfc-pdzxn" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.594056 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5d82637-3df7-4e39-bb29-e2fdcf4a7819-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-pdzxn\" (UID: \"c5d82637-3df7-4e39-bb29-e2fdcf4a7819\") " pod="openstack/dnsmasq-dns-848cf88cfc-pdzxn" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.594868 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5d82637-3df7-4e39-bb29-e2fdcf4a7819-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-pdzxn\" (UID: \"c5d82637-3df7-4e39-bb29-e2fdcf4a7819\") " 
pod="openstack/dnsmasq-dns-848cf88cfc-pdzxn" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.595270 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5d82637-3df7-4e39-bb29-e2fdcf4a7819-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-pdzxn\" (UID: \"c5d82637-3df7-4e39-bb29-e2fdcf4a7819\") " pod="openstack/dnsmasq-dns-848cf88cfc-pdzxn" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.628454 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrx9n\" (UniqueName: \"kubernetes.io/projected/c5d82637-3df7-4e39-bb29-e2fdcf4a7819-kube-api-access-lrx9n\") pod \"dnsmasq-dns-848cf88cfc-pdzxn\" (UID: \"c5d82637-3df7-4e39-bb29-e2fdcf4a7819\") " pod="openstack/dnsmasq-dns-848cf88cfc-pdzxn" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.693675 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd4jp\" (UniqueName: \"kubernetes.io/projected/04f7d26a-21c8-4a81-ac37-3c11e3d24ece-kube-api-access-fd4jp\") pod \"barbican-api-5d8dc74b58-r76lq\" (UID: \"04f7d26a-21c8-4a81-ac37-3c11e3d24ece\") " pod="openstack/barbican-api-5d8dc74b58-r76lq" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.693719 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04f7d26a-21c8-4a81-ac37-3c11e3d24ece-config-data\") pod \"barbican-api-5d8dc74b58-r76lq\" (UID: \"04f7d26a-21c8-4a81-ac37-3c11e3d24ece\") " pod="openstack/barbican-api-5d8dc74b58-r76lq" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.693813 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04f7d26a-21c8-4a81-ac37-3c11e3d24ece-logs\") pod \"barbican-api-5d8dc74b58-r76lq\" (UID: \"04f7d26a-21c8-4a81-ac37-3c11e3d24ece\") " pod="openstack/barbican-api-5d8dc74b58-r76lq" Feb 17 16:25:03 crc 
kubenswrapper[4874]: I0217 16:25:03.693877 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04f7d26a-21c8-4a81-ac37-3c11e3d24ece-config-data-custom\") pod \"barbican-api-5d8dc74b58-r76lq\" (UID: \"04f7d26a-21c8-4a81-ac37-3c11e3d24ece\") " pod="openstack/barbican-api-5d8dc74b58-r76lq" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.693911 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04f7d26a-21c8-4a81-ac37-3c11e3d24ece-combined-ca-bundle\") pod \"barbican-api-5d8dc74b58-r76lq\" (UID: \"04f7d26a-21c8-4a81-ac37-3c11e3d24ece\") " pod="openstack/barbican-api-5d8dc74b58-r76lq" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.697512 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04f7d26a-21c8-4a81-ac37-3c11e3d24ece-logs\") pod \"barbican-api-5d8dc74b58-r76lq\" (UID: \"04f7d26a-21c8-4a81-ac37-3c11e3d24ece\") " pod="openstack/barbican-api-5d8dc74b58-r76lq" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.697618 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04f7d26a-21c8-4a81-ac37-3c11e3d24ece-combined-ca-bundle\") pod \"barbican-api-5d8dc74b58-r76lq\" (UID: \"04f7d26a-21c8-4a81-ac37-3c11e3d24ece\") " pod="openstack/barbican-api-5d8dc74b58-r76lq" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.701049 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04f7d26a-21c8-4a81-ac37-3c11e3d24ece-config-data\") pod \"barbican-api-5d8dc74b58-r76lq\" (UID: \"04f7d26a-21c8-4a81-ac37-3c11e3d24ece\") " pod="openstack/barbican-api-5d8dc74b58-r76lq" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.706797 4874 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04f7d26a-21c8-4a81-ac37-3c11e3d24ece-config-data-custom\") pod \"barbican-api-5d8dc74b58-r76lq\" (UID: \"04f7d26a-21c8-4a81-ac37-3c11e3d24ece\") " pod="openstack/barbican-api-5d8dc74b58-r76lq" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.716964 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-pdzxn" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.731932 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd4jp\" (UniqueName: \"kubernetes.io/projected/04f7d26a-21c8-4a81-ac37-3c11e3d24ece-kube-api-access-fd4jp\") pod \"barbican-api-5d8dc74b58-r76lq\" (UID: \"04f7d26a-21c8-4a81-ac37-3c11e3d24ece\") " pod="openstack/barbican-api-5d8dc74b58-r76lq" Feb 17 16:25:03 crc kubenswrapper[4874]: I0217 16:25:03.823206 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5d8dc74b58-r76lq" Feb 17 16:25:04 crc kubenswrapper[4874]: I0217 16:25:04.554229 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="865b8bd3-b179-4e75-a32e-0df273eac5e4" path="/var/lib/kubelet/pods/865b8bd3-b179-4e75-a32e-0df273eac5e4/volumes" Feb 17 16:25:04 crc kubenswrapper[4874]: E0217 16:25:04.653836 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="0df3ad69-92a9-4a61-9178-619f75dc6f98" Feb 17 16:25:04 crc kubenswrapper[4874]: I0217 16:25:04.756761 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-pdzxn"] Feb 17 16:25:05 crc kubenswrapper[4874]: I0217 16:25:05.035664 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-c7cb8b4bf-4w9ct"] Feb 17 
16:25:05 crc kubenswrapper[4874]: W0217 16:25:05.057784 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod955ecefb_40d6_42e2_acd6_133f1ecf251d.slice/crio-81ff20f6a732066ba97e7e84af468db5f0618488505d47412c52b8e98cb2068e WatchSource:0}: Error finding container 81ff20f6a732066ba97e7e84af468db5f0618488505d47412c52b8e98cb2068e: Status 404 returned error can't find the container with id 81ff20f6a732066ba97e7e84af468db5f0618488505d47412c52b8e98cb2068e
Feb 17 16:25:05 crc kubenswrapper[4874]: I0217 16:25:05.072161 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0df3ad69-92a9-4a61-9178-619f75dc6f98","Type":"ContainerStarted","Data":"5c66aa078c360d6a0da0fe2eb8dbb71be9cae6a9bfda70e795068df048327f8d"}
Feb 17 16:25:05 crc kubenswrapper[4874]: I0217 16:25:05.072334 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0df3ad69-92a9-4a61-9178-619f75dc6f98" containerName="ceilometer-notification-agent" containerID="cri-o://1aa7845dee10a9aea6f399dfaacd9209c40fc209718703608e3b44ce3c454a13" gracePeriod=30
Feb 17 16:25:05 crc kubenswrapper[4874]: I0217 16:25:05.072417 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 17 16:25:05 crc kubenswrapper[4874]: I0217 16:25:05.072756 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0df3ad69-92a9-4a61-9178-619f75dc6f98" containerName="proxy-httpd" containerID="cri-o://5c66aa078c360d6a0da0fe2eb8dbb71be9cae6a9bfda70e795068df048327f8d" gracePeriod=30
Feb 17 16:25:05 crc kubenswrapper[4874]: I0217 16:25:05.072802 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0df3ad69-92a9-4a61-9178-619f75dc6f98" containerName="sg-core" containerID="cri-o://f738dd1a645578b91b2d975aebdda38d5394299b460c3c4ac58328b42ee202fb" gracePeriod=30
Feb 17 16:25:05 crc kubenswrapper[4874]: I0217 16:25:05.083305 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-pdzxn" event={"ID":"c5d82637-3df7-4e39-bb29-e2fdcf4a7819","Type":"ContainerStarted","Data":"59b8540670975d6c8c289dbddd9c5077127f6efda8074940fad9a8c950197d71"}
Feb 17 16:25:05 crc kubenswrapper[4874]: I0217 16:25:05.116349 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-956d89d4-jvtqm"]
Feb 17 16:25:05 crc kubenswrapper[4874]: W0217 16:25:05.137449 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bf086f0_8328_440d_b607_66c3db544871.slice/crio-f3ceb8fb3fa3c173173b5dd2644f6fabcb6c2a2262d8ca17224344d9f7467781 WatchSource:0}: Error finding container f3ceb8fb3fa3c173173b5dd2644f6fabcb6c2a2262d8ca17224344d9f7467781: Status 404 returned error can't find the container with id f3ceb8fb3fa3c173173b5dd2644f6fabcb6c2a2262d8ca17224344d9f7467781
Feb 17 16:25:05 crc kubenswrapper[4874]: I0217 16:25:05.158609 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5d8dc74b58-r76lq"]
Feb 17 16:25:06 crc kubenswrapper[4874]: I0217 16:25:06.096283 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-c7cb8b4bf-4w9ct" event={"ID":"955ecefb-40d6-42e2-acd6-133f1ecf251d","Type":"ContainerStarted","Data":"81ff20f6a732066ba97e7e84af468db5f0618488505d47412c52b8e98cb2068e"}
Feb 17 16:25:06 crc kubenswrapper[4874]: I0217 16:25:06.100160 4874 generic.go:334] "Generic (PLEG): container finished" podID="0df3ad69-92a9-4a61-9178-619f75dc6f98" containerID="5c66aa078c360d6a0da0fe2eb8dbb71be9cae6a9bfda70e795068df048327f8d" exitCode=0
Feb 17 16:25:06 crc kubenswrapper[4874]: I0217 16:25:06.100182 4874 generic.go:334] "Generic (PLEG): container finished" 
podID="0df3ad69-92a9-4a61-9178-619f75dc6f98" containerID="f738dd1a645578b91b2d975aebdda38d5394299b460c3c4ac58328b42ee202fb" exitCode=2
Feb 17 16:25:06 crc kubenswrapper[4874]: I0217 16:25:06.100214 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0df3ad69-92a9-4a61-9178-619f75dc6f98","Type":"ContainerDied","Data":"5c66aa078c360d6a0da0fe2eb8dbb71be9cae6a9bfda70e795068df048327f8d"}
Feb 17 16:25:06 crc kubenswrapper[4874]: I0217 16:25:06.100234 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0df3ad69-92a9-4a61-9178-619f75dc6f98","Type":"ContainerDied","Data":"f738dd1a645578b91b2d975aebdda38d5394299b460c3c4ac58328b42ee202fb"}
Feb 17 16:25:06 crc kubenswrapper[4874]: I0217 16:25:06.102098 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d8dc74b58-r76lq" event={"ID":"04f7d26a-21c8-4a81-ac37-3c11e3d24ece","Type":"ContainerStarted","Data":"f6905b1e8bbe14b693946a6e724f662d09a69baabb23fa1f8300291ba64f7db2"}
Feb 17 16:25:06 crc kubenswrapper[4874]: I0217 16:25:06.102119 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d8dc74b58-r76lq" event={"ID":"04f7d26a-21c8-4a81-ac37-3c11e3d24ece","Type":"ContainerStarted","Data":"f1845d17e4821069b1e33e56b2e8cff874de1a28377872942af9a8826f57f06e"}
Feb 17 16:25:06 crc kubenswrapper[4874]: I0217 16:25:06.104515 4874 generic.go:334] "Generic (PLEG): container finished" podID="c5d82637-3df7-4e39-bb29-e2fdcf4a7819" containerID="32a0d64a53b94c84fd75dd7bab6351e7679d3c7ca93b7105fdec6536ddfd9bfa" exitCode=0
Feb 17 16:25:06 crc kubenswrapper[4874]: I0217 16:25:06.104595 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-pdzxn" event={"ID":"c5d82637-3df7-4e39-bb29-e2fdcf4a7819","Type":"ContainerDied","Data":"32a0d64a53b94c84fd75dd7bab6351e7679d3c7ca93b7105fdec6536ddfd9bfa"}
Feb 17 16:25:06 crc kubenswrapper[4874]: I0217 16:25:06.111914 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-956d89d4-jvtqm" event={"ID":"9bf086f0-8328-440d-b607-66c3db544871","Type":"ContainerStarted","Data":"f3ceb8fb3fa3c173173b5dd2644f6fabcb6c2a2262d8ca17224344d9f7467781"}
Feb 17 16:25:06 crc kubenswrapper[4874]: I0217 16:25:06.114628 4874 generic.go:334] "Generic (PLEG): container finished" podID="10d748cd-cbae-4113-bfed-39c4511a879f" containerID="a51a617b00329d7632af1289ae0608922aa9ce80851c1045e700819354462d77" exitCode=0
Feb 17 16:25:06 crc kubenswrapper[4874]: I0217 16:25:06.114650 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-jrg8w" event={"ID":"10d748cd-cbae-4113-bfed-39c4511a879f","Type":"ContainerDied","Data":"a51a617b00329d7632af1289ae0608922aa9ce80851c1045e700819354462d77"}
Feb 17 16:25:06 crc kubenswrapper[4874]: I0217 16:25:06.380428 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-f6fdb9858-5k876"]
Feb 17 16:25:06 crc kubenswrapper[4874]: I0217 16:25:06.383631 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-f6fdb9858-5k876"
Feb 17 16:25:06 crc kubenswrapper[4874]: I0217 16:25:06.386681 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Feb 17 16:25:06 crc kubenswrapper[4874]: I0217 16:25:06.387127 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Feb 17 16:25:06 crc kubenswrapper[4874]: I0217 16:25:06.409733 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-f6fdb9858-5k876"]
Feb 17 16:25:06 crc kubenswrapper[4874]: I0217 16:25:06.476398 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5e09eec-baf3-4a8f-8d05-95ee094a6c18-combined-ca-bundle\") pod \"barbican-api-f6fdb9858-5k876\" (UID: \"d5e09eec-baf3-4a8f-8d05-95ee094a6c18\") " pod="openstack/barbican-api-f6fdb9858-5k876"
Feb 17 16:25:06 crc kubenswrapper[4874]: I0217 16:25:06.476684 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5e09eec-baf3-4a8f-8d05-95ee094a6c18-internal-tls-certs\") pod \"barbican-api-f6fdb9858-5k876\" (UID: \"d5e09eec-baf3-4a8f-8d05-95ee094a6c18\") " pod="openstack/barbican-api-f6fdb9858-5k876"
Feb 17 16:25:06 crc kubenswrapper[4874]: I0217 16:25:06.476748 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5e09eec-baf3-4a8f-8d05-95ee094a6c18-config-data-custom\") pod \"barbican-api-f6fdb9858-5k876\" (UID: \"d5e09eec-baf3-4a8f-8d05-95ee094a6c18\") " pod="openstack/barbican-api-f6fdb9858-5k876"
Feb 17 16:25:06 crc kubenswrapper[4874]: I0217 16:25:06.476779 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5e09eec-baf3-4a8f-8d05-95ee094a6c18-public-tls-certs\") pod \"barbican-api-f6fdb9858-5k876\" (UID: \"d5e09eec-baf3-4a8f-8d05-95ee094a6c18\") " pod="openstack/barbican-api-f6fdb9858-5k876"
Feb 17 16:25:06 crc kubenswrapper[4874]: I0217 16:25:06.476911 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65np5\" (UniqueName: \"kubernetes.io/projected/d5e09eec-baf3-4a8f-8d05-95ee094a6c18-kube-api-access-65np5\") pod \"barbican-api-f6fdb9858-5k876\" (UID: \"d5e09eec-baf3-4a8f-8d05-95ee094a6c18\") " pod="openstack/barbican-api-f6fdb9858-5k876"
Feb 17 16:25:06 crc kubenswrapper[4874]: I0217 16:25:06.476950 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5e09eec-baf3-4a8f-8d05-95ee094a6c18-logs\") pod \"barbican-api-f6fdb9858-5k876\" (UID: \"d5e09eec-baf3-4a8f-8d05-95ee094a6c18\") " pod="openstack/barbican-api-f6fdb9858-5k876"
Feb 17 16:25:06 crc kubenswrapper[4874]: I0217 16:25:06.477051 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5e09eec-baf3-4a8f-8d05-95ee094a6c18-config-data\") pod \"barbican-api-f6fdb9858-5k876\" (UID: \"d5e09eec-baf3-4a8f-8d05-95ee094a6c18\") " pod="openstack/barbican-api-f6fdb9858-5k876"
Feb 17 16:25:06 crc kubenswrapper[4874]: I0217 16:25:06.579621 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5e09eec-baf3-4a8f-8d05-95ee094a6c18-config-data-custom\") pod \"barbican-api-f6fdb9858-5k876\" (UID: \"d5e09eec-baf3-4a8f-8d05-95ee094a6c18\") " pod="openstack/barbican-api-f6fdb9858-5k876"
Feb 17 16:25:06 crc kubenswrapper[4874]: I0217 16:25:06.579717 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/d5e09eec-baf3-4a8f-8d05-95ee094a6c18-public-tls-certs\") pod \"barbican-api-f6fdb9858-5k876\" (UID: \"d5e09eec-baf3-4a8f-8d05-95ee094a6c18\") " pod="openstack/barbican-api-f6fdb9858-5k876"
Feb 17 16:25:06 crc kubenswrapper[4874]: I0217 16:25:06.579885 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65np5\" (UniqueName: \"kubernetes.io/projected/d5e09eec-baf3-4a8f-8d05-95ee094a6c18-kube-api-access-65np5\") pod \"barbican-api-f6fdb9858-5k876\" (UID: \"d5e09eec-baf3-4a8f-8d05-95ee094a6c18\") " pod="openstack/barbican-api-f6fdb9858-5k876"
Feb 17 16:25:06 crc kubenswrapper[4874]: I0217 16:25:06.579931 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5e09eec-baf3-4a8f-8d05-95ee094a6c18-logs\") pod \"barbican-api-f6fdb9858-5k876\" (UID: \"d5e09eec-baf3-4a8f-8d05-95ee094a6c18\") " pod="openstack/barbican-api-f6fdb9858-5k876"
Feb 17 16:25:06 crc kubenswrapper[4874]: I0217 16:25:06.580051 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5e09eec-baf3-4a8f-8d05-95ee094a6c18-config-data\") pod \"barbican-api-f6fdb9858-5k876\" (UID: \"d5e09eec-baf3-4a8f-8d05-95ee094a6c18\") " pod="openstack/barbican-api-f6fdb9858-5k876"
Feb 17 16:25:06 crc kubenswrapper[4874]: I0217 16:25:06.580157 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5e09eec-baf3-4a8f-8d05-95ee094a6c18-combined-ca-bundle\") pod \"barbican-api-f6fdb9858-5k876\" (UID: \"d5e09eec-baf3-4a8f-8d05-95ee094a6c18\") " pod="openstack/barbican-api-f6fdb9858-5k876"
Feb 17 16:25:06 crc kubenswrapper[4874]: I0217 16:25:06.580192 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5e09eec-baf3-4a8f-8d05-95ee094a6c18-internal-tls-certs\") pod \"barbican-api-f6fdb9858-5k876\" (UID: \"d5e09eec-baf3-4a8f-8d05-95ee094a6c18\") " pod="openstack/barbican-api-f6fdb9858-5k876"
Feb 17 16:25:06 crc kubenswrapper[4874]: I0217 16:25:06.581498 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5e09eec-baf3-4a8f-8d05-95ee094a6c18-logs\") pod \"barbican-api-f6fdb9858-5k876\" (UID: \"d5e09eec-baf3-4a8f-8d05-95ee094a6c18\") " pod="openstack/barbican-api-f6fdb9858-5k876"
Feb 17 16:25:06 crc kubenswrapper[4874]: I0217 16:25:06.586284 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5e09eec-baf3-4a8f-8d05-95ee094a6c18-config-data-custom\") pod \"barbican-api-f6fdb9858-5k876\" (UID: \"d5e09eec-baf3-4a8f-8d05-95ee094a6c18\") " pod="openstack/barbican-api-f6fdb9858-5k876"
Feb 17 16:25:06 crc kubenswrapper[4874]: I0217 16:25:06.586788 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5e09eec-baf3-4a8f-8d05-95ee094a6c18-internal-tls-certs\") pod \"barbican-api-f6fdb9858-5k876\" (UID: \"d5e09eec-baf3-4a8f-8d05-95ee094a6c18\") " pod="openstack/barbican-api-f6fdb9858-5k876"
Feb 17 16:25:06 crc kubenswrapper[4874]: I0217 16:25:06.587437 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5e09eec-baf3-4a8f-8d05-95ee094a6c18-config-data\") pod \"barbican-api-f6fdb9858-5k876\" (UID: \"d5e09eec-baf3-4a8f-8d05-95ee094a6c18\") " pod="openstack/barbican-api-f6fdb9858-5k876"
Feb 17 16:25:06 crc kubenswrapper[4874]: I0217 16:25:06.587528 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5e09eec-baf3-4a8f-8d05-95ee094a6c18-combined-ca-bundle\") pod 
\"barbican-api-f6fdb9858-5k876\" (UID: \"d5e09eec-baf3-4a8f-8d05-95ee094a6c18\") " pod="openstack/barbican-api-f6fdb9858-5k876"
Feb 17 16:25:06 crc kubenswrapper[4874]: I0217 16:25:06.587542 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5e09eec-baf3-4a8f-8d05-95ee094a6c18-public-tls-certs\") pod \"barbican-api-f6fdb9858-5k876\" (UID: \"d5e09eec-baf3-4a8f-8d05-95ee094a6c18\") " pod="openstack/barbican-api-f6fdb9858-5k876"
Feb 17 16:25:06 crc kubenswrapper[4874]: I0217 16:25:06.597002 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65np5\" (UniqueName: \"kubernetes.io/projected/d5e09eec-baf3-4a8f-8d05-95ee094a6c18-kube-api-access-65np5\") pod \"barbican-api-f6fdb9858-5k876\" (UID: \"d5e09eec-baf3-4a8f-8d05-95ee094a6c18\") " pod="openstack/barbican-api-f6fdb9858-5k876"
Feb 17 16:25:06 crc kubenswrapper[4874]: I0217 16:25:06.726080 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-f6fdb9858-5k876"
Feb 17 16:25:07 crc kubenswrapper[4874]: I0217 16:25:07.134620 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-pdzxn" event={"ID":"c5d82637-3df7-4e39-bb29-e2fdcf4a7819","Type":"ContainerStarted","Data":"7a92e9f65d8ead907dded31c74166a0fc1918cbda42efd1eb3e01b6df5576467"}
Feb 17 16:25:07 crc kubenswrapper[4874]: I0217 16:25:07.134909 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-848cf88cfc-pdzxn"
Feb 17 16:25:07 crc kubenswrapper[4874]: I0217 16:25:07.137063 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d8dc74b58-r76lq" event={"ID":"04f7d26a-21c8-4a81-ac37-3c11e3d24ece","Type":"ContainerStarted","Data":"9929f61397a4283c5fff5e6da228c2724d7e7101a0337081b794c787bc73ed02"}
Feb 17 16:25:07 crc kubenswrapper[4874]: I0217 16:25:07.137164 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5d8dc74b58-r76lq"
Feb 17 16:25:07 crc kubenswrapper[4874]: I0217 16:25:07.137177 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5d8dc74b58-r76lq"
Feb 17 16:25:07 crc kubenswrapper[4874]: I0217 16:25:07.163556 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-848cf88cfc-pdzxn" podStartSLOduration=4.163537111 podStartE2EDuration="4.163537111s" podCreationTimestamp="2026-02-17 16:25:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:25:07.158709413 +0000 UTC m=+1317.453097984" watchObservedRunningTime="2026-02-17 16:25:07.163537111 +0000 UTC m=+1317.457925672"
Feb 17 16:25:07 crc kubenswrapper[4874]: I0217 16:25:07.187645 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5d8dc74b58-r76lq" 
podStartSLOduration=4.187623329 podStartE2EDuration="4.187623329s" podCreationTimestamp="2026-02-17 16:25:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:25:07.177150734 +0000 UTC m=+1317.471539305" watchObservedRunningTime="2026-02-17 16:25:07.187623329 +0000 UTC m=+1317.482011890"
Feb 17 16:25:07 crc kubenswrapper[4874]: I0217 16:25:07.521583 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-jrg8w"
Feb 17 16:25:07 crc kubenswrapper[4874]: I0217 16:25:07.603484 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/10d748cd-cbae-4113-bfed-39c4511a879f-etc-machine-id\") pod \"10d748cd-cbae-4113-bfed-39c4511a879f\" (UID: \"10d748cd-cbae-4113-bfed-39c4511a879f\") "
Feb 17 16:25:07 crc kubenswrapper[4874]: I0217 16:25:07.603775 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/10d748cd-cbae-4113-bfed-39c4511a879f-db-sync-config-data\") pod \"10d748cd-cbae-4113-bfed-39c4511a879f\" (UID: \"10d748cd-cbae-4113-bfed-39c4511a879f\") "
Feb 17 16:25:07 crc kubenswrapper[4874]: I0217 16:25:07.603638 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10d748cd-cbae-4113-bfed-39c4511a879f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "10d748cd-cbae-4113-bfed-39c4511a879f" (UID: "10d748cd-cbae-4113-bfed-39c4511a879f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 16:25:07 crc kubenswrapper[4874]: I0217 16:25:07.603841 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10d748cd-cbae-4113-bfed-39c4511a879f-config-data\") pod \"10d748cd-cbae-4113-bfed-39c4511a879f\" (UID: \"10d748cd-cbae-4113-bfed-39c4511a879f\") "
Feb 17 16:25:07 crc kubenswrapper[4874]: I0217 16:25:07.603866 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf2d9\" (UniqueName: \"kubernetes.io/projected/10d748cd-cbae-4113-bfed-39c4511a879f-kube-api-access-nf2d9\") pod \"10d748cd-cbae-4113-bfed-39c4511a879f\" (UID: \"10d748cd-cbae-4113-bfed-39c4511a879f\") "
Feb 17 16:25:07 crc kubenswrapper[4874]: I0217 16:25:07.603933 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10d748cd-cbae-4113-bfed-39c4511a879f-combined-ca-bundle\") pod \"10d748cd-cbae-4113-bfed-39c4511a879f\" (UID: \"10d748cd-cbae-4113-bfed-39c4511a879f\") "
Feb 17 16:25:07 crc kubenswrapper[4874]: I0217 16:25:07.603987 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10d748cd-cbae-4113-bfed-39c4511a879f-scripts\") pod \"10d748cd-cbae-4113-bfed-39c4511a879f\" (UID: \"10d748cd-cbae-4113-bfed-39c4511a879f\") "
Feb 17 16:25:07 crc kubenswrapper[4874]: I0217 16:25:07.606550 4874 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/10d748cd-cbae-4113-bfed-39c4511a879f-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 17 16:25:07 crc kubenswrapper[4874]: I0217 16:25:07.611841 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10d748cd-cbae-4113-bfed-39c4511a879f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") 
pod "10d748cd-cbae-4113-bfed-39c4511a879f" (UID: "10d748cd-cbae-4113-bfed-39c4511a879f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:25:07 crc kubenswrapper[4874]: I0217 16:25:07.612158 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10d748cd-cbae-4113-bfed-39c4511a879f-kube-api-access-nf2d9" (OuterVolumeSpecName: "kube-api-access-nf2d9") pod "10d748cd-cbae-4113-bfed-39c4511a879f" (UID: "10d748cd-cbae-4113-bfed-39c4511a879f"). InnerVolumeSpecName "kube-api-access-nf2d9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:25:07 crc kubenswrapper[4874]: I0217 16:25:07.612745 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10d748cd-cbae-4113-bfed-39c4511a879f-scripts" (OuterVolumeSpecName: "scripts") pod "10d748cd-cbae-4113-bfed-39c4511a879f" (UID: "10d748cd-cbae-4113-bfed-39c4511a879f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:25:07 crc kubenswrapper[4874]: I0217 16:25:07.651949 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10d748cd-cbae-4113-bfed-39c4511a879f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "10d748cd-cbae-4113-bfed-39c4511a879f" (UID: "10d748cd-cbae-4113-bfed-39c4511a879f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:25:07 crc kubenswrapper[4874]: I0217 16:25:07.680043 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10d748cd-cbae-4113-bfed-39c4511a879f-config-data" (OuterVolumeSpecName: "config-data") pod "10d748cd-cbae-4113-bfed-39c4511a879f" (UID: "10d748cd-cbae-4113-bfed-39c4511a879f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:25:07 crc kubenswrapper[4874]: I0217 16:25:07.708572 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10d748cd-cbae-4113-bfed-39c4511a879f-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:25:07 crc kubenswrapper[4874]: I0217 16:25:07.708607 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nf2d9\" (UniqueName: \"kubernetes.io/projected/10d748cd-cbae-4113-bfed-39c4511a879f-kube-api-access-nf2d9\") on node \"crc\" DevicePath \"\""
Feb 17 16:25:07 crc kubenswrapper[4874]: I0217 16:25:07.708619 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10d748cd-cbae-4113-bfed-39c4511a879f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:25:07 crc kubenswrapper[4874]: I0217 16:25:07.708628 4874 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10d748cd-cbae-4113-bfed-39c4511a879f-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:25:07 crc kubenswrapper[4874]: I0217 16:25:07.708637 4874 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/10d748cd-cbae-4113-bfed-39c4511a879f-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:25:07 crc kubenswrapper[4874]: I0217 16:25:07.743918 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-f6fdb9858-5k876"]
Feb 17 16:25:07 crc kubenswrapper[4874]: W0217 16:25:07.761907 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5e09eec_baf3_4a8f_8d05_95ee094a6c18.slice/crio-b924392ac87e4561c266d122023d7d274ada0d919f1a3d830089d3811dccc6b8 WatchSource:0}: Error finding container b924392ac87e4561c266d122023d7d274ada0d919f1a3d830089d3811dccc6b8: Status 404 returned error 
can't find the container with id b924392ac87e4561c266d122023d7d274ada0d919f1a3d830089d3811dccc6b8
Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.149334 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-c7cb8b4bf-4w9ct" event={"ID":"955ecefb-40d6-42e2-acd6-133f1ecf251d","Type":"ContainerStarted","Data":"4dd0dd27136b9c48f88c26675a9c34638bca4c2ab9fd81ca79ac87541aa65e36"}
Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.149651 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-c7cb8b4bf-4w9ct" event={"ID":"955ecefb-40d6-42e2-acd6-133f1ecf251d","Type":"ContainerStarted","Data":"981679b634b6bef60d30281d3c055481b039811c0f8c55669e0729d95d4f9860"}
Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.152713 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-f6fdb9858-5k876" event={"ID":"d5e09eec-baf3-4a8f-8d05-95ee094a6c18","Type":"ContainerStarted","Data":"330527400d0f6317521fcb043cc9d8e816df82733cfdeba209570bcb6da62fa6"}
Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.153260 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-f6fdb9858-5k876" event={"ID":"d5e09eec-baf3-4a8f-8d05-95ee094a6c18","Type":"ContainerStarted","Data":"b924392ac87e4561c266d122023d7d274ada0d919f1a3d830089d3811dccc6b8"}
Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.155640 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-956d89d4-jvtqm" event={"ID":"9bf086f0-8328-440d-b607-66c3db544871","Type":"ContainerStarted","Data":"b141fa3195e7ce02c5f29b98063321d0c2ce0eee952341389c0c34558e723ee4"}
Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.155680 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-956d89d4-jvtqm" event={"ID":"9bf086f0-8328-440d-b607-66c3db544871","Type":"ContainerStarted","Data":"2ed801dc50b2e263993275bf0d035552b000c3bf9d9a88c4b1f48b4c6a0ce51b"}
Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.158544 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-jrg8w"
Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.161963 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-jrg8w" event={"ID":"10d748cd-cbae-4113-bfed-39c4511a879f","Type":"ContainerDied","Data":"6347e70fd421109527d55a2986912873ede499f6f7abb719b6da2bf8698b292e"}
Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.162047 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6347e70fd421109527d55a2986912873ede499f6f7abb719b6da2bf8698b292e"
Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.192924 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-c7cb8b4bf-4w9ct" podStartSLOduration=3.022192413 podStartE2EDuration="5.192904883s" podCreationTimestamp="2026-02-17 16:25:03 +0000 UTC" firstStartedPulling="2026-02-17 16:25:05.064316107 +0000 UTC m=+1315.358704668" lastFinishedPulling="2026-02-17 16:25:07.235028547 +0000 UTC m=+1317.529417138" observedRunningTime="2026-02-17 16:25:08.172846423 +0000 UTC m=+1318.467235004" watchObservedRunningTime="2026-02-17 16:25:08.192904883 +0000 UTC m=+1318.487293444"
Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.266307 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-956d89d4-jvtqm" podStartSLOduration=3.1872985050000002 podStartE2EDuration="5.266285365s" podCreationTimestamp="2026-02-17 16:25:03 +0000 UTC" firstStartedPulling="2026-02-17 16:25:05.161331127 +0000 UTC m=+1315.455719688" lastFinishedPulling="2026-02-17 16:25:07.240317987 +0000 UTC m=+1317.534706548" observedRunningTime="2026-02-17 
16:25:08.198343816 +0000 UTC m=+1318.492732377" watchObservedRunningTime="2026-02-17 16:25:08.266285365 +0000 UTC m=+1318.560673926"
Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.444504 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 17 16:25:08 crc kubenswrapper[4874]: E0217 16:25:08.445196 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10d748cd-cbae-4113-bfed-39c4511a879f" containerName="cinder-db-sync"
Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.445268 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="10d748cd-cbae-4113-bfed-39c4511a879f" containerName="cinder-db-sync"
Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.445548 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="10d748cd-cbae-4113-bfed-39c4511a879f" containerName="cinder-db-sync"
Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.447207 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.453453 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-mz588"
Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.454631 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.454943 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.455169 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.491681 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.527823 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cqd7\" (UniqueName: \"kubernetes.io/projected/3460c3ca-c89e-4476-a9d4-0a9809b47475-kube-api-access-9cqd7\") pod \"cinder-scheduler-0\" (UID: \"3460c3ca-c89e-4476-a9d4-0a9809b47475\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.527967 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3460c3ca-c89e-4476-a9d4-0a9809b47475-scripts\") pod \"cinder-scheduler-0\" (UID: \"3460c3ca-c89e-4476-a9d4-0a9809b47475\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.528021 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3460c3ca-c89e-4476-a9d4-0a9809b47475-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3460c3ca-c89e-4476-a9d4-0a9809b47475\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.528116 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3460c3ca-c89e-4476-a9d4-0a9809b47475-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3460c3ca-c89e-4476-a9d4-0a9809b47475\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.528168 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3460c3ca-c89e-4476-a9d4-0a9809b47475-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3460c3ca-c89e-4476-a9d4-0a9809b47475\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.528203 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3460c3ca-c89e-4476-a9d4-0a9809b47475-config-data\") pod \"cinder-scheduler-0\" (UID: \"3460c3ca-c89e-4476-a9d4-0a9809b47475\") " pod="openstack/cinder-scheduler-0" Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.581936 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-pdzxn"] Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.641487 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3460c3ca-c89e-4476-a9d4-0a9809b47475-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3460c3ca-c89e-4476-a9d4-0a9809b47475\") " pod="openstack/cinder-scheduler-0" Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.641608 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3460c3ca-c89e-4476-a9d4-0a9809b47475-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3460c3ca-c89e-4476-a9d4-0a9809b47475\") " pod="openstack/cinder-scheduler-0" Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.641668 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3460c3ca-c89e-4476-a9d4-0a9809b47475-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3460c3ca-c89e-4476-a9d4-0a9809b47475\") " pod="openstack/cinder-scheduler-0" Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.641712 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3460c3ca-c89e-4476-a9d4-0a9809b47475-config-data\") pod \"cinder-scheduler-0\" (UID: \"3460c3ca-c89e-4476-a9d4-0a9809b47475\") " pod="openstack/cinder-scheduler-0" Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.641761 4874 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9cqd7\" (UniqueName: \"kubernetes.io/projected/3460c3ca-c89e-4476-a9d4-0a9809b47475-kube-api-access-9cqd7\") pod \"cinder-scheduler-0\" (UID: \"3460c3ca-c89e-4476-a9d4-0a9809b47475\") " pod="openstack/cinder-scheduler-0" Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.641857 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3460c3ca-c89e-4476-a9d4-0a9809b47475-scripts\") pod \"cinder-scheduler-0\" (UID: \"3460c3ca-c89e-4476-a9d4-0a9809b47475\") " pod="openstack/cinder-scheduler-0" Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.645904 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3460c3ca-c89e-4476-a9d4-0a9809b47475-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3460c3ca-c89e-4476-a9d4-0a9809b47475\") " pod="openstack/cinder-scheduler-0" Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.649216 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-8f8gg"] Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.651451 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-8f8gg" Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.651843 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3460c3ca-c89e-4476-a9d4-0a9809b47475-config-data\") pod \"cinder-scheduler-0\" (UID: \"3460c3ca-c89e-4476-a9d4-0a9809b47475\") " pod="openstack/cinder-scheduler-0" Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.660513 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3460c3ca-c89e-4476-a9d4-0a9809b47475-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3460c3ca-c89e-4476-a9d4-0a9809b47475\") " pod="openstack/cinder-scheduler-0" Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.660731 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3460c3ca-c89e-4476-a9d4-0a9809b47475-scripts\") pod \"cinder-scheduler-0\" (UID: \"3460c3ca-c89e-4476-a9d4-0a9809b47475\") " pod="openstack/cinder-scheduler-0" Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.661390 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3460c3ca-c89e-4476-a9d4-0a9809b47475-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3460c3ca-c89e-4476-a9d4-0a9809b47475\") " pod="openstack/cinder-scheduler-0" Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.690672 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cqd7\" (UniqueName: \"kubernetes.io/projected/3460c3ca-c89e-4476-a9d4-0a9809b47475-kube-api-access-9cqd7\") pod \"cinder-scheduler-0\" (UID: \"3460c3ca-c89e-4476-a9d4-0a9809b47475\") " pod="openstack/cinder-scheduler-0" Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.706165 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-6578955fd5-8f8gg"] Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.745442 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ef65b51-8db2-4513-89dc-a6ec4c27c22d-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-8f8gg\" (UID: \"9ef65b51-8db2-4513-89dc-a6ec4c27c22d\") " pod="openstack/dnsmasq-dns-6578955fd5-8f8gg" Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.745501 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ef65b51-8db2-4513-89dc-a6ec4c27c22d-config\") pod \"dnsmasq-dns-6578955fd5-8f8gg\" (UID: \"9ef65b51-8db2-4513-89dc-a6ec4c27c22d\") " pod="openstack/dnsmasq-dns-6578955fd5-8f8gg" Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.745531 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ef65b51-8db2-4513-89dc-a6ec4c27c22d-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-8f8gg\" (UID: \"9ef65b51-8db2-4513-89dc-a6ec4c27c22d\") " pod="openstack/dnsmasq-dns-6578955fd5-8f8gg" Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.745605 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ef65b51-8db2-4513-89dc-a6ec4c27c22d-dns-svc\") pod \"dnsmasq-dns-6578955fd5-8f8gg\" (UID: \"9ef65b51-8db2-4513-89dc-a6ec4c27c22d\") " pod="openstack/dnsmasq-dns-6578955fd5-8f8gg" Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.745642 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ef65b51-8db2-4513-89dc-a6ec4c27c22d-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-8f8gg\" (UID: 
\"9ef65b51-8db2-4513-89dc-a6ec4c27c22d\") " pod="openstack/dnsmasq-dns-6578955fd5-8f8gg" Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.745704 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c859r\" (UniqueName: \"kubernetes.io/projected/9ef65b51-8db2-4513-89dc-a6ec4c27c22d-kube-api-access-c859r\") pod \"dnsmasq-dns-6578955fd5-8f8gg\" (UID: \"9ef65b51-8db2-4513-89dc-a6ec4c27c22d\") " pod="openstack/dnsmasq-dns-6578955fd5-8f8gg" Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.789172 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.857435 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ef65b51-8db2-4513-89dc-a6ec4c27c22d-dns-svc\") pod \"dnsmasq-dns-6578955fd5-8f8gg\" (UID: \"9ef65b51-8db2-4513-89dc-a6ec4c27c22d\") " pod="openstack/dnsmasq-dns-6578955fd5-8f8gg" Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.857490 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ef65b51-8db2-4513-89dc-a6ec4c27c22d-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-8f8gg\" (UID: \"9ef65b51-8db2-4513-89dc-a6ec4c27c22d\") " pod="openstack/dnsmasq-dns-6578955fd5-8f8gg" Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.857584 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c859r\" (UniqueName: \"kubernetes.io/projected/9ef65b51-8db2-4513-89dc-a6ec4c27c22d-kube-api-access-c859r\") pod \"dnsmasq-dns-6578955fd5-8f8gg\" (UID: \"9ef65b51-8db2-4513-89dc-a6ec4c27c22d\") " pod="openstack/dnsmasq-dns-6578955fd5-8f8gg" Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.857704 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ef65b51-8db2-4513-89dc-a6ec4c27c22d-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-8f8gg\" (UID: \"9ef65b51-8db2-4513-89dc-a6ec4c27c22d\") " pod="openstack/dnsmasq-dns-6578955fd5-8f8gg" Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.857744 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ef65b51-8db2-4513-89dc-a6ec4c27c22d-config\") pod \"dnsmasq-dns-6578955fd5-8f8gg\" (UID: \"9ef65b51-8db2-4513-89dc-a6ec4c27c22d\") " pod="openstack/dnsmasq-dns-6578955fd5-8f8gg" Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.857771 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ef65b51-8db2-4513-89dc-a6ec4c27c22d-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-8f8gg\" (UID: \"9ef65b51-8db2-4513-89dc-a6ec4c27c22d\") " pod="openstack/dnsmasq-dns-6578955fd5-8f8gg" Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.858694 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ef65b51-8db2-4513-89dc-a6ec4c27c22d-dns-svc\") pod \"dnsmasq-dns-6578955fd5-8f8gg\" (UID: \"9ef65b51-8db2-4513-89dc-a6ec4c27c22d\") " pod="openstack/dnsmasq-dns-6578955fd5-8f8gg" Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.858694 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ef65b51-8db2-4513-89dc-a6ec4c27c22d-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-8f8gg\" (UID: \"9ef65b51-8db2-4513-89dc-a6ec4c27c22d\") " pod="openstack/dnsmasq-dns-6578955fd5-8f8gg" Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.859266 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/9ef65b51-8db2-4513-89dc-a6ec4c27c22d-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-8f8gg\" (UID: \"9ef65b51-8db2-4513-89dc-a6ec4c27c22d\") " pod="openstack/dnsmasq-dns-6578955fd5-8f8gg" Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.859419 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ef65b51-8db2-4513-89dc-a6ec4c27c22d-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-8f8gg\" (UID: \"9ef65b51-8db2-4513-89dc-a6ec4c27c22d\") " pod="openstack/dnsmasq-dns-6578955fd5-8f8gg" Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.859796 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ef65b51-8db2-4513-89dc-a6ec4c27c22d-config\") pod \"dnsmasq-dns-6578955fd5-8f8gg\" (UID: \"9ef65b51-8db2-4513-89dc-a6ec4c27c22d\") " pod="openstack/dnsmasq-dns-6578955fd5-8f8gg" Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.893805 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c859r\" (UniqueName: \"kubernetes.io/projected/9ef65b51-8db2-4513-89dc-a6ec4c27c22d-kube-api-access-c859r\") pod \"dnsmasq-dns-6578955fd5-8f8gg\" (UID: \"9ef65b51-8db2-4513-89dc-a6ec4c27c22d\") " pod="openstack/dnsmasq-dns-6578955fd5-8f8gg" Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.901644 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.906856 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.910447 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 17 16:25:08 crc kubenswrapper[4874]: I0217 16:25:08.990156 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 17 16:25:09 crc kubenswrapper[4874]: I0217 16:25:09.016896 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-8f8gg" Feb 17 16:25:09 crc kubenswrapper[4874]: I0217 16:25:09.066374 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wbbx\" (UniqueName: \"kubernetes.io/projected/e9873c7b-f77d-4b3c-a97c-92eec7382335-kube-api-access-5wbbx\") pod \"cinder-api-0\" (UID: \"e9873c7b-f77d-4b3c-a97c-92eec7382335\") " pod="openstack/cinder-api-0" Feb 17 16:25:09 crc kubenswrapper[4874]: I0217 16:25:09.066446 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9873c7b-f77d-4b3c-a97c-92eec7382335-config-data\") pod \"cinder-api-0\" (UID: \"e9873c7b-f77d-4b3c-a97c-92eec7382335\") " pod="openstack/cinder-api-0" Feb 17 16:25:09 crc kubenswrapper[4874]: I0217 16:25:09.066474 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9873c7b-f77d-4b3c-a97c-92eec7382335-scripts\") pod \"cinder-api-0\" (UID: \"e9873c7b-f77d-4b3c-a97c-92eec7382335\") " pod="openstack/cinder-api-0" Feb 17 16:25:09 crc kubenswrapper[4874]: I0217 16:25:09.066503 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9873c7b-f77d-4b3c-a97c-92eec7382335-combined-ca-bundle\") pod \"cinder-api-0\" (UID: 
\"e9873c7b-f77d-4b3c-a97c-92eec7382335\") " pod="openstack/cinder-api-0" Feb 17 16:25:09 crc kubenswrapper[4874]: I0217 16:25:09.066562 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e9873c7b-f77d-4b3c-a97c-92eec7382335-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e9873c7b-f77d-4b3c-a97c-92eec7382335\") " pod="openstack/cinder-api-0" Feb 17 16:25:09 crc kubenswrapper[4874]: I0217 16:25:09.066644 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9873c7b-f77d-4b3c-a97c-92eec7382335-logs\") pod \"cinder-api-0\" (UID: \"e9873c7b-f77d-4b3c-a97c-92eec7382335\") " pod="openstack/cinder-api-0" Feb 17 16:25:09 crc kubenswrapper[4874]: I0217 16:25:09.066734 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9873c7b-f77d-4b3c-a97c-92eec7382335-config-data-custom\") pod \"cinder-api-0\" (UID: \"e9873c7b-f77d-4b3c-a97c-92eec7382335\") " pod="openstack/cinder-api-0" Feb 17 16:25:09 crc kubenswrapper[4874]: I0217 16:25:09.168408 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9873c7b-f77d-4b3c-a97c-92eec7382335-logs\") pod \"cinder-api-0\" (UID: \"e9873c7b-f77d-4b3c-a97c-92eec7382335\") " pod="openstack/cinder-api-0" Feb 17 16:25:09 crc kubenswrapper[4874]: I0217 16:25:09.168713 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9873c7b-f77d-4b3c-a97c-92eec7382335-config-data-custom\") pod \"cinder-api-0\" (UID: \"e9873c7b-f77d-4b3c-a97c-92eec7382335\") " pod="openstack/cinder-api-0" Feb 17 16:25:09 crc kubenswrapper[4874]: I0217 16:25:09.168764 4874 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5wbbx\" (UniqueName: \"kubernetes.io/projected/e9873c7b-f77d-4b3c-a97c-92eec7382335-kube-api-access-5wbbx\") pod \"cinder-api-0\" (UID: \"e9873c7b-f77d-4b3c-a97c-92eec7382335\") " pod="openstack/cinder-api-0" Feb 17 16:25:09 crc kubenswrapper[4874]: I0217 16:25:09.168806 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9873c7b-f77d-4b3c-a97c-92eec7382335-config-data\") pod \"cinder-api-0\" (UID: \"e9873c7b-f77d-4b3c-a97c-92eec7382335\") " pod="openstack/cinder-api-0" Feb 17 16:25:09 crc kubenswrapper[4874]: I0217 16:25:09.168827 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9873c7b-f77d-4b3c-a97c-92eec7382335-scripts\") pod \"cinder-api-0\" (UID: \"e9873c7b-f77d-4b3c-a97c-92eec7382335\") " pod="openstack/cinder-api-0" Feb 17 16:25:09 crc kubenswrapper[4874]: I0217 16:25:09.168856 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9873c7b-f77d-4b3c-a97c-92eec7382335-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e9873c7b-f77d-4b3c-a97c-92eec7382335\") " pod="openstack/cinder-api-0" Feb 17 16:25:09 crc kubenswrapper[4874]: I0217 16:25:09.168901 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e9873c7b-f77d-4b3c-a97c-92eec7382335-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e9873c7b-f77d-4b3c-a97c-92eec7382335\") " pod="openstack/cinder-api-0" Feb 17 16:25:09 crc kubenswrapper[4874]: I0217 16:25:09.169008 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e9873c7b-f77d-4b3c-a97c-92eec7382335-etc-machine-id\") pod \"cinder-api-0\" (UID: 
\"e9873c7b-f77d-4b3c-a97c-92eec7382335\") " pod="openstack/cinder-api-0" Feb 17 16:25:09 crc kubenswrapper[4874]: I0217 16:25:09.170919 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9873c7b-f77d-4b3c-a97c-92eec7382335-logs\") pod \"cinder-api-0\" (UID: \"e9873c7b-f77d-4b3c-a97c-92eec7382335\") " pod="openstack/cinder-api-0" Feb 17 16:25:09 crc kubenswrapper[4874]: I0217 16:25:09.174412 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9873c7b-f77d-4b3c-a97c-92eec7382335-config-data-custom\") pod \"cinder-api-0\" (UID: \"e9873c7b-f77d-4b3c-a97c-92eec7382335\") " pod="openstack/cinder-api-0" Feb 17 16:25:09 crc kubenswrapper[4874]: I0217 16:25:09.178293 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9873c7b-f77d-4b3c-a97c-92eec7382335-config-data\") pod \"cinder-api-0\" (UID: \"e9873c7b-f77d-4b3c-a97c-92eec7382335\") " pod="openstack/cinder-api-0" Feb 17 16:25:09 crc kubenswrapper[4874]: I0217 16:25:09.182421 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9873c7b-f77d-4b3c-a97c-92eec7382335-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e9873c7b-f77d-4b3c-a97c-92eec7382335\") " pod="openstack/cinder-api-0" Feb 17 16:25:09 crc kubenswrapper[4874]: I0217 16:25:09.184633 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-848cf88cfc-pdzxn" podUID="c5d82637-3df7-4e39-bb29-e2fdcf4a7819" containerName="dnsmasq-dns" containerID="cri-o://7a92e9f65d8ead907dded31c74166a0fc1918cbda42efd1eb3e01b6df5576467" gracePeriod=10 Feb 17 16:25:09 crc kubenswrapper[4874]: I0217 16:25:09.185492 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-f6fdb9858-5k876" 
event={"ID":"d5e09eec-baf3-4a8f-8d05-95ee094a6c18","Type":"ContainerStarted","Data":"e70e061f9d63e223005e161f6cadfbe7ad514e9af80fa0c5e83b1d353441c00f"} Feb 17 16:25:09 crc kubenswrapper[4874]: I0217 16:25:09.185614 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9873c7b-f77d-4b3c-a97c-92eec7382335-scripts\") pod \"cinder-api-0\" (UID: \"e9873c7b-f77d-4b3c-a97c-92eec7382335\") " pod="openstack/cinder-api-0" Feb 17 16:25:09 crc kubenswrapper[4874]: I0217 16:25:09.186145 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-f6fdb9858-5k876" Feb 17 16:25:09 crc kubenswrapper[4874]: I0217 16:25:09.186401 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-f6fdb9858-5k876" Feb 17 16:25:09 crc kubenswrapper[4874]: I0217 16:25:09.193158 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wbbx\" (UniqueName: \"kubernetes.io/projected/e9873c7b-f77d-4b3c-a97c-92eec7382335-kube-api-access-5wbbx\") pod \"cinder-api-0\" (UID: \"e9873c7b-f77d-4b3c-a97c-92eec7382335\") " pod="openstack/cinder-api-0" Feb 17 16:25:09 crc kubenswrapper[4874]: I0217 16:25:09.238041 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-f6fdb9858-5k876" podStartSLOduration=3.23802202 podStartE2EDuration="3.23802202s" podCreationTimestamp="2026-02-17 16:25:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:25:09.228931598 +0000 UTC m=+1319.523320169" watchObservedRunningTime="2026-02-17 16:25:09.23802202 +0000 UTC m=+1319.532410581" Feb 17 16:25:09 crc kubenswrapper[4874]: I0217 16:25:09.339676 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 17 16:25:09 crc kubenswrapper[4874]: I0217 16:25:09.580786 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 16:25:09 crc kubenswrapper[4874]: I0217 16:25:09.704772 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-8f8gg"] Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.088753 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 17 16:25:10 crc kubenswrapper[4874]: W0217 16:25:10.092581 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9873c7b_f77d_4b3c_a97c_92eec7382335.slice/crio-fbcbcbd522825ab54a52f92f16d4b19794663a9b59888b53f32bd32f3b4577c6 WatchSource:0}: Error finding container fbcbcbd522825ab54a52f92f16d4b19794663a9b59888b53f32bd32f3b4577c6: Status 404 returned error can't find the container with id fbcbcbd522825ab54a52f92f16d4b19794663a9b59888b53f32bd32f3b4577c6 Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.175446 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-pdzxn" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.213454 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.213778 4874 generic.go:334] "Generic (PLEG): container finished" podID="c5d82637-3df7-4e39-bb29-e2fdcf4a7819" containerID="7a92e9f65d8ead907dded31c74166a0fc1918cbda42efd1eb3e01b6df5576467" exitCode=0 Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.213841 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-pdzxn" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.213841 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-pdzxn" event={"ID":"c5d82637-3df7-4e39-bb29-e2fdcf4a7819","Type":"ContainerDied","Data":"7a92e9f65d8ead907dded31c74166a0fc1918cbda42efd1eb3e01b6df5576467"} Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.213889 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-pdzxn" event={"ID":"c5d82637-3df7-4e39-bb29-e2fdcf4a7819","Type":"ContainerDied","Data":"59b8540670975d6c8c289dbddd9c5077127f6efda8074940fad9a8c950197d71"} Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.213912 4874 scope.go:117] "RemoveContainer" containerID="7a92e9f65d8ead907dded31c74166a0fc1918cbda42efd1eb3e01b6df5576467" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.215661 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3460c3ca-c89e-4476-a9d4-0a9809b47475","Type":"ContainerStarted","Data":"685cdad5557658232d2565c470b1539711c9507fd6a5d9b0d95034873793556e"} Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.219056 4874 generic.go:334] "Generic (PLEG): container finished" podID="0df3ad69-92a9-4a61-9178-619f75dc6f98" containerID="1aa7845dee10a9aea6f399dfaacd9209c40fc209718703608e3b44ce3c454a13" exitCode=0 Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.219128 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0df3ad69-92a9-4a61-9178-619f75dc6f98","Type":"ContainerDied","Data":"1aa7845dee10a9aea6f399dfaacd9209c40fc209718703608e3b44ce3c454a13"} Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.219152 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"0df3ad69-92a9-4a61-9178-619f75dc6f98","Type":"ContainerDied","Data":"e09d0e19ffbf990de6f62028148e62198653aaf8fe68fa32daeba09e0210ebf5"} Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.219210 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.223492 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-8f8gg" event={"ID":"9ef65b51-8db2-4513-89dc-a6ec4c27c22d","Type":"ContainerStarted","Data":"728217a63911f6c67459b3090143d7e51e0ef253238bdf0f569a65b69100150a"} Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.226910 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e9873c7b-f77d-4b3c-a97c-92eec7382335","Type":"ContainerStarted","Data":"fbcbcbd522825ab54a52f92f16d4b19794663a9b59888b53f32bd32f3b4577c6"} Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.301753 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0df3ad69-92a9-4a61-9178-619f75dc6f98-log-httpd\") pod \"0df3ad69-92a9-4a61-9178-619f75dc6f98\" (UID: \"0df3ad69-92a9-4a61-9178-619f75dc6f98\") " Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.301830 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0df3ad69-92a9-4a61-9178-619f75dc6f98-scripts\") pod \"0df3ad69-92a9-4a61-9178-619f75dc6f98\" (UID: \"0df3ad69-92a9-4a61-9178-619f75dc6f98\") " Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.301865 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrx9n\" (UniqueName: \"kubernetes.io/projected/c5d82637-3df7-4e39-bb29-e2fdcf4a7819-kube-api-access-lrx9n\") pod \"c5d82637-3df7-4e39-bb29-e2fdcf4a7819\" (UID: \"c5d82637-3df7-4e39-bb29-e2fdcf4a7819\") " Feb 17 
16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.301983 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5d82637-3df7-4e39-bb29-e2fdcf4a7819-config\") pod \"c5d82637-3df7-4e39-bb29-e2fdcf4a7819\" (UID: \"c5d82637-3df7-4e39-bb29-e2fdcf4a7819\") " Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.302017 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5d82637-3df7-4e39-bb29-e2fdcf4a7819-dns-svc\") pod \"c5d82637-3df7-4e39-bb29-e2fdcf4a7819\" (UID: \"c5d82637-3df7-4e39-bb29-e2fdcf4a7819\") " Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.302039 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5d82637-3df7-4e39-bb29-e2fdcf4a7819-ovsdbserver-sb\") pod \"c5d82637-3df7-4e39-bb29-e2fdcf4a7819\" (UID: \"c5d82637-3df7-4e39-bb29-e2fdcf4a7819\") " Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.302127 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5d82637-3df7-4e39-bb29-e2fdcf4a7819-dns-swift-storage-0\") pod \"c5d82637-3df7-4e39-bb29-e2fdcf4a7819\" (UID: \"c5d82637-3df7-4e39-bb29-e2fdcf4a7819\") " Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.302163 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0df3ad69-92a9-4a61-9178-619f75dc6f98-combined-ca-bundle\") pod \"0df3ad69-92a9-4a61-9178-619f75dc6f98\" (UID: \"0df3ad69-92a9-4a61-9178-619f75dc6f98\") " Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.302191 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/0df3ad69-92a9-4a61-9178-619f75dc6f98-sg-core-conf-yaml\") pod \"0df3ad69-92a9-4a61-9178-619f75dc6f98\" (UID: \"0df3ad69-92a9-4a61-9178-619f75dc6f98\") " Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.302226 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0df3ad69-92a9-4a61-9178-619f75dc6f98-run-httpd\") pod \"0df3ad69-92a9-4a61-9178-619f75dc6f98\" (UID: \"0df3ad69-92a9-4a61-9178-619f75dc6f98\") " Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.302246 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0df3ad69-92a9-4a61-9178-619f75dc6f98-config-data\") pod \"0df3ad69-92a9-4a61-9178-619f75dc6f98\" (UID: \"0df3ad69-92a9-4a61-9178-619f75dc6f98\") " Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.302348 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5d82637-3df7-4e39-bb29-e2fdcf4a7819-ovsdbserver-nb\") pod \"c5d82637-3df7-4e39-bb29-e2fdcf4a7819\" (UID: \"c5d82637-3df7-4e39-bb29-e2fdcf4a7819\") " Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.302370 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgg4m\" (UniqueName: \"kubernetes.io/projected/0df3ad69-92a9-4a61-9178-619f75dc6f98-kube-api-access-lgg4m\") pod \"0df3ad69-92a9-4a61-9178-619f75dc6f98\" (UID: \"0df3ad69-92a9-4a61-9178-619f75dc6f98\") " Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.303000 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0df3ad69-92a9-4a61-9178-619f75dc6f98-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0df3ad69-92a9-4a61-9178-619f75dc6f98" (UID: "0df3ad69-92a9-4a61-9178-619f75dc6f98"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.304782 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0df3ad69-92a9-4a61-9178-619f75dc6f98-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0df3ad69-92a9-4a61-9178-619f75dc6f98" (UID: "0df3ad69-92a9-4a61-9178-619f75dc6f98"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.337979 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0df3ad69-92a9-4a61-9178-619f75dc6f98-kube-api-access-lgg4m" (OuterVolumeSpecName: "kube-api-access-lgg4m") pod "0df3ad69-92a9-4a61-9178-619f75dc6f98" (UID: "0df3ad69-92a9-4a61-9178-619f75dc6f98"). InnerVolumeSpecName "kube-api-access-lgg4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.338058 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0df3ad69-92a9-4a61-9178-619f75dc6f98-scripts" (OuterVolumeSpecName: "scripts") pod "0df3ad69-92a9-4a61-9178-619f75dc6f98" (UID: "0df3ad69-92a9-4a61-9178-619f75dc6f98"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.338134 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5d82637-3df7-4e39-bb29-e2fdcf4a7819-kube-api-access-lrx9n" (OuterVolumeSpecName: "kube-api-access-lrx9n") pod "c5d82637-3df7-4e39-bb29-e2fdcf4a7819" (UID: "c5d82637-3df7-4e39-bb29-e2fdcf4a7819"). InnerVolumeSpecName "kube-api-access-lrx9n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.378968 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5d82637-3df7-4e39-bb29-e2fdcf4a7819-config" (OuterVolumeSpecName: "config") pod "c5d82637-3df7-4e39-bb29-e2fdcf4a7819" (UID: "c5d82637-3df7-4e39-bb29-e2fdcf4a7819"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.382241 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0df3ad69-92a9-4a61-9178-619f75dc6f98-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0df3ad69-92a9-4a61-9178-619f75dc6f98" (UID: "0df3ad69-92a9-4a61-9178-619f75dc6f98"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.396435 4874 scope.go:117] "RemoveContainer" containerID="32a0d64a53b94c84fd75dd7bab6351e7679d3c7ca93b7105fdec6536ddfd9bfa" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.413050 4874 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0df3ad69-92a9-4a61-9178-619f75dc6f98-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.413133 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrx9n\" (UniqueName: \"kubernetes.io/projected/c5d82637-3df7-4e39-bb29-e2fdcf4a7819-kube-api-access-lrx9n\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.413153 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5d82637-3df7-4e39-bb29-e2fdcf4a7819-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.413163 4874 reconciler_common.go:293] "Volume detached for 
volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0df3ad69-92a9-4a61-9178-619f75dc6f98-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.413176 4874 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0df3ad69-92a9-4a61-9178-619f75dc6f98-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.413192 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgg4m\" (UniqueName: \"kubernetes.io/projected/0df3ad69-92a9-4a61-9178-619f75dc6f98-kube-api-access-lgg4m\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.413208 4874 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0df3ad69-92a9-4a61-9178-619f75dc6f98-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.426595 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5d82637-3df7-4e39-bb29-e2fdcf4a7819-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c5d82637-3df7-4e39-bb29-e2fdcf4a7819" (UID: "c5d82637-3df7-4e39-bb29-e2fdcf4a7819"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.443831 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5d82637-3df7-4e39-bb29-e2fdcf4a7819-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c5d82637-3df7-4e39-bb29-e2fdcf4a7819" (UID: "c5d82637-3df7-4e39-bb29-e2fdcf4a7819"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.447297 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0df3ad69-92a9-4a61-9178-619f75dc6f98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0df3ad69-92a9-4a61-9178-619f75dc6f98" (UID: "0df3ad69-92a9-4a61-9178-619f75dc6f98"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.449691 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5d82637-3df7-4e39-bb29-e2fdcf4a7819-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c5d82637-3df7-4e39-bb29-e2fdcf4a7819" (UID: "c5d82637-3df7-4e39-bb29-e2fdcf4a7819"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.455842 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0df3ad69-92a9-4a61-9178-619f75dc6f98-config-data" (OuterVolumeSpecName: "config-data") pod "0df3ad69-92a9-4a61-9178-619f75dc6f98" (UID: "0df3ad69-92a9-4a61-9178-619f75dc6f98"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.471188 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5d82637-3df7-4e39-bb29-e2fdcf4a7819-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c5d82637-3df7-4e39-bb29-e2fdcf4a7819" (UID: "c5d82637-3df7-4e39-bb29-e2fdcf4a7819"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.500494 4874 scope.go:117] "RemoveContainer" containerID="7a92e9f65d8ead907dded31c74166a0fc1918cbda42efd1eb3e01b6df5576467" Feb 17 16:25:10 crc kubenswrapper[4874]: E0217 16:25:10.504604 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a92e9f65d8ead907dded31c74166a0fc1918cbda42efd1eb3e01b6df5576467\": container with ID starting with 7a92e9f65d8ead907dded31c74166a0fc1918cbda42efd1eb3e01b6df5576467 not found: ID does not exist" containerID="7a92e9f65d8ead907dded31c74166a0fc1918cbda42efd1eb3e01b6df5576467" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.504649 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a92e9f65d8ead907dded31c74166a0fc1918cbda42efd1eb3e01b6df5576467"} err="failed to get container status \"7a92e9f65d8ead907dded31c74166a0fc1918cbda42efd1eb3e01b6df5576467\": rpc error: code = NotFound desc = could not find container \"7a92e9f65d8ead907dded31c74166a0fc1918cbda42efd1eb3e01b6df5576467\": container with ID starting with 7a92e9f65d8ead907dded31c74166a0fc1918cbda42efd1eb3e01b6df5576467 not found: ID does not exist" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.504672 4874 scope.go:117] "RemoveContainer" containerID="32a0d64a53b94c84fd75dd7bab6351e7679d3c7ca93b7105fdec6536ddfd9bfa" Feb 17 16:25:10 crc kubenswrapper[4874]: E0217 16:25:10.504999 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32a0d64a53b94c84fd75dd7bab6351e7679d3c7ca93b7105fdec6536ddfd9bfa\": container with ID starting with 32a0d64a53b94c84fd75dd7bab6351e7679d3c7ca93b7105fdec6536ddfd9bfa not found: ID does not exist" containerID="32a0d64a53b94c84fd75dd7bab6351e7679d3c7ca93b7105fdec6536ddfd9bfa" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.505029 
4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32a0d64a53b94c84fd75dd7bab6351e7679d3c7ca93b7105fdec6536ddfd9bfa"} err="failed to get container status \"32a0d64a53b94c84fd75dd7bab6351e7679d3c7ca93b7105fdec6536ddfd9bfa\": rpc error: code = NotFound desc = could not find container \"32a0d64a53b94c84fd75dd7bab6351e7679d3c7ca93b7105fdec6536ddfd9bfa\": container with ID starting with 32a0d64a53b94c84fd75dd7bab6351e7679d3c7ca93b7105fdec6536ddfd9bfa not found: ID does not exist" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.505048 4874 scope.go:117] "RemoveContainer" containerID="5c66aa078c360d6a0da0fe2eb8dbb71be9cae6a9bfda70e795068df048327f8d" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.515998 4874 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5d82637-3df7-4e39-bb29-e2fdcf4a7819-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.516036 4874 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5d82637-3df7-4e39-bb29-e2fdcf4a7819-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.516048 4874 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5d82637-3df7-4e39-bb29-e2fdcf4a7819-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.516060 4874 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5d82637-3df7-4e39-bb29-e2fdcf4a7819-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.516185 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0df3ad69-92a9-4a61-9178-619f75dc6f98-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.516201 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0df3ad69-92a9-4a61-9178-619f75dc6f98-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.536352 4874 scope.go:117] "RemoveContainer" containerID="f738dd1a645578b91b2d975aebdda38d5394299b460c3c4ac58328b42ee202fb" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.615288 4874 scope.go:117] "RemoveContainer" containerID="1aa7845dee10a9aea6f399dfaacd9209c40fc209718703608e3b44ce3c454a13" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.618543 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-pdzxn"] Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.655541 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-pdzxn"] Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.661029 4874 scope.go:117] "RemoveContainer" containerID="5c66aa078c360d6a0da0fe2eb8dbb71be9cae6a9bfda70e795068df048327f8d" Feb 17 16:25:10 crc kubenswrapper[4874]: E0217 16:25:10.661554 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c66aa078c360d6a0da0fe2eb8dbb71be9cae6a9bfda70e795068df048327f8d\": container with ID starting with 5c66aa078c360d6a0da0fe2eb8dbb71be9cae6a9bfda70e795068df048327f8d not found: ID does not exist" containerID="5c66aa078c360d6a0da0fe2eb8dbb71be9cae6a9bfda70e795068df048327f8d" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.661580 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c66aa078c360d6a0da0fe2eb8dbb71be9cae6a9bfda70e795068df048327f8d"} err="failed to get container status 
\"5c66aa078c360d6a0da0fe2eb8dbb71be9cae6a9bfda70e795068df048327f8d\": rpc error: code = NotFound desc = could not find container \"5c66aa078c360d6a0da0fe2eb8dbb71be9cae6a9bfda70e795068df048327f8d\": container with ID starting with 5c66aa078c360d6a0da0fe2eb8dbb71be9cae6a9bfda70e795068df048327f8d not found: ID does not exist" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.661599 4874 scope.go:117] "RemoveContainer" containerID="f738dd1a645578b91b2d975aebdda38d5394299b460c3c4ac58328b42ee202fb" Feb 17 16:25:10 crc kubenswrapper[4874]: E0217 16:25:10.661962 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f738dd1a645578b91b2d975aebdda38d5394299b460c3c4ac58328b42ee202fb\": container with ID starting with f738dd1a645578b91b2d975aebdda38d5394299b460c3c4ac58328b42ee202fb not found: ID does not exist" containerID="f738dd1a645578b91b2d975aebdda38d5394299b460c3c4ac58328b42ee202fb" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.661981 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f738dd1a645578b91b2d975aebdda38d5394299b460c3c4ac58328b42ee202fb"} err="failed to get container status \"f738dd1a645578b91b2d975aebdda38d5394299b460c3c4ac58328b42ee202fb\": rpc error: code = NotFound desc = could not find container \"f738dd1a645578b91b2d975aebdda38d5394299b460c3c4ac58328b42ee202fb\": container with ID starting with f738dd1a645578b91b2d975aebdda38d5394299b460c3c4ac58328b42ee202fb not found: ID does not exist" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.661994 4874 scope.go:117] "RemoveContainer" containerID="1aa7845dee10a9aea6f399dfaacd9209c40fc209718703608e3b44ce3c454a13" Feb 17 16:25:10 crc kubenswrapper[4874]: E0217 16:25:10.662896 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"1aa7845dee10a9aea6f399dfaacd9209c40fc209718703608e3b44ce3c454a13\": container with ID starting with 1aa7845dee10a9aea6f399dfaacd9209c40fc209718703608e3b44ce3c454a13 not found: ID does not exist" containerID="1aa7845dee10a9aea6f399dfaacd9209c40fc209718703608e3b44ce3c454a13" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.662917 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aa7845dee10a9aea6f399dfaacd9209c40fc209718703608e3b44ce3c454a13"} err="failed to get container status \"1aa7845dee10a9aea6f399dfaacd9209c40fc209718703608e3b44ce3c454a13\": rpc error: code = NotFound desc = could not find container \"1aa7845dee10a9aea6f399dfaacd9209c40fc209718703608e3b44ce3c454a13\": container with ID starting with 1aa7845dee10a9aea6f399dfaacd9209c40fc209718703608e3b44ce3c454a13 not found: ID does not exist" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.690004 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.715656 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.727132 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:25:10 crc kubenswrapper[4874]: E0217 16:25:10.727545 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0df3ad69-92a9-4a61-9178-619f75dc6f98" containerName="ceilometer-notification-agent" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.727564 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="0df3ad69-92a9-4a61-9178-619f75dc6f98" containerName="ceilometer-notification-agent" Feb 17 16:25:10 crc kubenswrapper[4874]: E0217 16:25:10.727589 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5d82637-3df7-4e39-bb29-e2fdcf4a7819" containerName="init" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.727595 4874 
state_mem.go:107] "Deleted CPUSet assignment" podUID="c5d82637-3df7-4e39-bb29-e2fdcf4a7819" containerName="init" Feb 17 16:25:10 crc kubenswrapper[4874]: E0217 16:25:10.727611 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5d82637-3df7-4e39-bb29-e2fdcf4a7819" containerName="dnsmasq-dns" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.727617 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5d82637-3df7-4e39-bb29-e2fdcf4a7819" containerName="dnsmasq-dns" Feb 17 16:25:10 crc kubenswrapper[4874]: E0217 16:25:10.727630 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0df3ad69-92a9-4a61-9178-619f75dc6f98" containerName="sg-core" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.727636 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="0df3ad69-92a9-4a61-9178-619f75dc6f98" containerName="sg-core" Feb 17 16:25:10 crc kubenswrapper[4874]: E0217 16:25:10.727665 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0df3ad69-92a9-4a61-9178-619f75dc6f98" containerName="proxy-httpd" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.727671 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="0df3ad69-92a9-4a61-9178-619f75dc6f98" containerName="proxy-httpd" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.727855 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="0df3ad69-92a9-4a61-9178-619f75dc6f98" containerName="ceilometer-notification-agent" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.727877 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5d82637-3df7-4e39-bb29-e2fdcf4a7819" containerName="dnsmasq-dns" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.727884 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="0df3ad69-92a9-4a61-9178-619f75dc6f98" containerName="proxy-httpd" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.727904 4874 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="0df3ad69-92a9-4a61-9178-619f75dc6f98" containerName="sg-core" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.729804 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.732030 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.734625 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.750415 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.841126 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-scripts\") pod \"ceilometer-0\" (UID: \"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0\") " pod="openstack/ceilometer-0" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.841192 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0\") " pod="openstack/ceilometer-0" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.841266 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lbdv\" (UniqueName: \"kubernetes.io/projected/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-kube-api-access-9lbdv\") pod \"ceilometer-0\" (UID: \"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0\") " pod="openstack/ceilometer-0" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.841333 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0\") " pod="openstack/ceilometer-0" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.841367 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-log-httpd\") pod \"ceilometer-0\" (UID: \"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0\") " pod="openstack/ceilometer-0" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.841429 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-run-httpd\") pod \"ceilometer-0\" (UID: \"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0\") " pod="openstack/ceilometer-0" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.841520 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-config-data\") pod \"ceilometer-0\" (UID: \"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0\") " pod="openstack/ceilometer-0" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.943518 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0\") " pod="openstack/ceilometer-0" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.943574 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-log-httpd\") pod 
\"ceilometer-0\" (UID: \"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0\") " pod="openstack/ceilometer-0" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.943626 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-run-httpd\") pod \"ceilometer-0\" (UID: \"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0\") " pod="openstack/ceilometer-0" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.943691 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-config-data\") pod \"ceilometer-0\" (UID: \"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0\") " pod="openstack/ceilometer-0" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.943748 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-scripts\") pod \"ceilometer-0\" (UID: \"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0\") " pod="openstack/ceilometer-0" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.943787 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0\") " pod="openstack/ceilometer-0" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.943829 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lbdv\" (UniqueName: \"kubernetes.io/projected/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-kube-api-access-9lbdv\") pod \"ceilometer-0\" (UID: \"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0\") " pod="openstack/ceilometer-0" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.944586 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-log-httpd\") pod \"ceilometer-0\" (UID: \"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0\") " pod="openstack/ceilometer-0" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.944946 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-run-httpd\") pod \"ceilometer-0\" (UID: \"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0\") " pod="openstack/ceilometer-0" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.949668 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0\") " pod="openstack/ceilometer-0" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.950233 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-scripts\") pod \"ceilometer-0\" (UID: \"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0\") " pod="openstack/ceilometer-0" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.950967 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0\") " pod="openstack/ceilometer-0" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.951804 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-config-data\") pod \"ceilometer-0\" (UID: \"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0\") " pod="openstack/ceilometer-0" Feb 17 16:25:10 crc kubenswrapper[4874]: I0217 16:25:10.965153 4874 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lbdv\" (UniqueName: \"kubernetes.io/projected/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-kube-api-access-9lbdv\") pod \"ceilometer-0\" (UID: \"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0\") " pod="openstack/ceilometer-0" Feb 17 16:25:11 crc kubenswrapper[4874]: I0217 16:25:11.056585 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:25:11 crc kubenswrapper[4874]: I0217 16:25:11.095408 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 17 16:25:11 crc kubenswrapper[4874]: I0217 16:25:11.281597 4874 generic.go:334] "Generic (PLEG): container finished" podID="9ef65b51-8db2-4513-89dc-a6ec4c27c22d" containerID="77e8cb5312ce901dbec6e06966d68c1d04ee2e5b7c505d96db0c64d610fe62e5" exitCode=0 Feb 17 16:25:11 crc kubenswrapper[4874]: I0217 16:25:11.281856 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-8f8gg" event={"ID":"9ef65b51-8db2-4513-89dc-a6ec4c27c22d","Type":"ContainerDied","Data":"77e8cb5312ce901dbec6e06966d68c1d04ee2e5b7c505d96db0c64d610fe62e5"} Feb 17 16:25:11 crc kubenswrapper[4874]: I0217 16:25:11.292847 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e9873c7b-f77d-4b3c-a97c-92eec7382335","Type":"ContainerStarted","Data":"29f10eef2bc5ed640272d3cde361304462de5b54b4b7e2d604b7b542c49891b5"} Feb 17 16:25:11 crc kubenswrapper[4874]: W0217 16:25:11.569333 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7f8675b_8a6e_41dc_8368_a5ad3ff38fd0.slice/crio-a9f7302b0be43452c4f111347cd00bc8490acc641758c68aed6e3f05c533d0a0 WatchSource:0}: Error finding container a9f7302b0be43452c4f111347cd00bc8490acc641758c68aed6e3f05c533d0a0: Status 404 returned error can't find the container with id 
a9f7302b0be43452c4f111347cd00bc8490acc641758c68aed6e3f05c533d0a0 Feb 17 16:25:11 crc kubenswrapper[4874]: I0217 16:25:11.569523 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:25:12 crc kubenswrapper[4874]: I0217 16:25:12.320550 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3460c3ca-c89e-4476-a9d4-0a9809b47475","Type":"ContainerStarted","Data":"7dcbcfbbe5c4651b24326462a324bfe638582c960cca55265f4c5d537b0f7f6f"} Feb 17 16:25:12 crc kubenswrapper[4874]: I0217 16:25:12.320899 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3460c3ca-c89e-4476-a9d4-0a9809b47475","Type":"ContainerStarted","Data":"c8e58b4c4f4851891f187358edc20ef36f780c9dc8fb8d4f488b4caac6a99bc9"} Feb 17 16:25:12 crc kubenswrapper[4874]: I0217 16:25:12.324133 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0","Type":"ContainerStarted","Data":"a9f7302b0be43452c4f111347cd00bc8490acc641758c68aed6e3f05c533d0a0"} Feb 17 16:25:12 crc kubenswrapper[4874]: I0217 16:25:12.337026 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-8f8gg" event={"ID":"9ef65b51-8db2-4513-89dc-a6ec4c27c22d","Type":"ContainerStarted","Data":"a73e1006f82695e15769264553bcea390be8051c12c8c1b4d49dcb59c250ddac"} Feb 17 16:25:12 crc kubenswrapper[4874]: I0217 16:25:12.337066 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-8f8gg" Feb 17 16:25:12 crc kubenswrapper[4874]: I0217 16:25:12.344890 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e9873c7b-f77d-4b3c-a97c-92eec7382335","Type":"ContainerStarted","Data":"e879b25209dd125c99e9efadd707051d89cc3e7ebd2e7ac42a9319e844f25c08"} Feb 17 16:25:12 crc kubenswrapper[4874]: I0217 16:25:12.345124 4874 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e9873c7b-f77d-4b3c-a97c-92eec7382335" containerName="cinder-api-log" containerID="cri-o://29f10eef2bc5ed640272d3cde361304462de5b54b4b7e2d604b7b542c49891b5" gracePeriod=30 Feb 17 16:25:12 crc kubenswrapper[4874]: I0217 16:25:12.345246 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 17 16:25:12 crc kubenswrapper[4874]: I0217 16:25:12.345292 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e9873c7b-f77d-4b3c-a97c-92eec7382335" containerName="cinder-api" containerID="cri-o://e879b25209dd125c99e9efadd707051d89cc3e7ebd2e7ac42a9319e844f25c08" gracePeriod=30 Feb 17 16:25:12 crc kubenswrapper[4874]: I0217 16:25:12.345442 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.439073841 podStartE2EDuration="4.345422549s" podCreationTimestamp="2026-02-17 16:25:08 +0000 UTC" firstStartedPulling="2026-02-17 16:25:09.598137546 +0000 UTC m=+1319.892526107" lastFinishedPulling="2026-02-17 16:25:10.504486254 +0000 UTC m=+1320.798874815" observedRunningTime="2026-02-17 16:25:12.342561209 +0000 UTC m=+1322.636949770" watchObservedRunningTime="2026-02-17 16:25:12.345422549 +0000 UTC m=+1322.639811110" Feb 17 16:25:12 crc kubenswrapper[4874]: I0217 16:25:12.381602 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-8f8gg" podStartSLOduration=4.381583462 podStartE2EDuration="4.381583462s" podCreationTimestamp="2026-02-17 16:25:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:25:12.372742046 +0000 UTC m=+1322.667130617" watchObservedRunningTime="2026-02-17 16:25:12.381583462 +0000 UTC m=+1322.675972033" Feb 17 16:25:12 crc 
kubenswrapper[4874]: I0217 16:25:12.410566 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.410545979 podStartE2EDuration="4.410545979s" podCreationTimestamp="2026-02-17 16:25:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:25:12.400491744 +0000 UTC m=+1322.694880305" watchObservedRunningTime="2026-02-17 16:25:12.410545979 +0000 UTC m=+1322.704934560" Feb 17 16:25:12 crc kubenswrapper[4874]: I0217 16:25:12.478510 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0df3ad69-92a9-4a61-9178-619f75dc6f98" path="/var/lib/kubelet/pods/0df3ad69-92a9-4a61-9178-619f75dc6f98/volumes" Feb 17 16:25:12 crc kubenswrapper[4874]: I0217 16:25:12.483510 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5d82637-3df7-4e39-bb29-e2fdcf4a7819" path="/var/lib/kubelet/pods/c5d82637-3df7-4e39-bb29-e2fdcf4a7819/volumes" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.211907 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.296523 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9873c7b-f77d-4b3c-a97c-92eec7382335-scripts\") pod \"e9873c7b-f77d-4b3c-a97c-92eec7382335\" (UID: \"e9873c7b-f77d-4b3c-a97c-92eec7382335\") " Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.296613 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9873c7b-f77d-4b3c-a97c-92eec7382335-config-data\") pod \"e9873c7b-f77d-4b3c-a97c-92eec7382335\" (UID: \"e9873c7b-f77d-4b3c-a97c-92eec7382335\") " Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.296676 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wbbx\" (UniqueName: \"kubernetes.io/projected/e9873c7b-f77d-4b3c-a97c-92eec7382335-kube-api-access-5wbbx\") pod \"e9873c7b-f77d-4b3c-a97c-92eec7382335\" (UID: \"e9873c7b-f77d-4b3c-a97c-92eec7382335\") " Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.296713 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9873c7b-f77d-4b3c-a97c-92eec7382335-config-data-custom\") pod \"e9873c7b-f77d-4b3c-a97c-92eec7382335\" (UID: \"e9873c7b-f77d-4b3c-a97c-92eec7382335\") " Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.296729 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e9873c7b-f77d-4b3c-a97c-92eec7382335-etc-machine-id\") pod \"e9873c7b-f77d-4b3c-a97c-92eec7382335\" (UID: \"e9873c7b-f77d-4b3c-a97c-92eec7382335\") " Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.296791 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e9873c7b-f77d-4b3c-a97c-92eec7382335-combined-ca-bundle\") pod \"e9873c7b-f77d-4b3c-a97c-92eec7382335\" (UID: \"e9873c7b-f77d-4b3c-a97c-92eec7382335\") " Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.296857 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9873c7b-f77d-4b3c-a97c-92eec7382335-logs\") pod \"e9873c7b-f77d-4b3c-a97c-92eec7382335\" (UID: \"e9873c7b-f77d-4b3c-a97c-92eec7382335\") " Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.296906 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9873c7b-f77d-4b3c-a97c-92eec7382335-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e9873c7b-f77d-4b3c-a97c-92eec7382335" (UID: "e9873c7b-f77d-4b3c-a97c-92eec7382335"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.297215 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9873c7b-f77d-4b3c-a97c-92eec7382335-logs" (OuterVolumeSpecName: "logs") pod "e9873c7b-f77d-4b3c-a97c-92eec7382335" (UID: "e9873c7b-f77d-4b3c-a97c-92eec7382335"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.297585 4874 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e9873c7b-f77d-4b3c-a97c-92eec7382335-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.297605 4874 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9873c7b-f77d-4b3c-a97c-92eec7382335-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.306291 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9873c7b-f77d-4b3c-a97c-92eec7382335-kube-api-access-5wbbx" (OuterVolumeSpecName: "kube-api-access-5wbbx") pod "e9873c7b-f77d-4b3c-a97c-92eec7382335" (UID: "e9873c7b-f77d-4b3c-a97c-92eec7382335"). InnerVolumeSpecName "kube-api-access-5wbbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.317411 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9873c7b-f77d-4b3c-a97c-92eec7382335-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e9873c7b-f77d-4b3c-a97c-92eec7382335" (UID: "e9873c7b-f77d-4b3c-a97c-92eec7382335"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.319301 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9873c7b-f77d-4b3c-a97c-92eec7382335-scripts" (OuterVolumeSpecName: "scripts") pod "e9873c7b-f77d-4b3c-a97c-92eec7382335" (UID: "e9873c7b-f77d-4b3c-a97c-92eec7382335"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.332944 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9873c7b-f77d-4b3c-a97c-92eec7382335-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9873c7b-f77d-4b3c-a97c-92eec7382335" (UID: "e9873c7b-f77d-4b3c-a97c-92eec7382335"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.358104 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9873c7b-f77d-4b3c-a97c-92eec7382335-config-data" (OuterVolumeSpecName: "config-data") pod "e9873c7b-f77d-4b3c-a97c-92eec7382335" (UID: "e9873c7b-f77d-4b3c-a97c-92eec7382335"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.369208 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0","Type":"ContainerStarted","Data":"9572c539cf08f40bfa92a72a4e59089ca390c0f8fdcb5665c32e36b88637e7a5"} Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.372396 4874 generic.go:334] "Generic (PLEG): container finished" podID="e9873c7b-f77d-4b3c-a97c-92eec7382335" containerID="e879b25209dd125c99e9efadd707051d89cc3e7ebd2e7ac42a9319e844f25c08" exitCode=0 Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.372437 4874 generic.go:334] "Generic (PLEG): container finished" podID="e9873c7b-f77d-4b3c-a97c-92eec7382335" containerID="29f10eef2bc5ed640272d3cde361304462de5b54b4b7e2d604b7b542c49891b5" exitCode=143 Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.373650 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.381429 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e9873c7b-f77d-4b3c-a97c-92eec7382335","Type":"ContainerDied","Data":"e879b25209dd125c99e9efadd707051d89cc3e7ebd2e7ac42a9319e844f25c08"} Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.381492 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e9873c7b-f77d-4b3c-a97c-92eec7382335","Type":"ContainerDied","Data":"29f10eef2bc5ed640272d3cde361304462de5b54b4b7e2d604b7b542c49891b5"} Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.381506 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e9873c7b-f77d-4b3c-a97c-92eec7382335","Type":"ContainerDied","Data":"fbcbcbd522825ab54a52f92f16d4b19794663a9b59888b53f32bd32f3b4577c6"} Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.381531 4874 scope.go:117] "RemoveContainer" containerID="e879b25209dd125c99e9efadd707051d89cc3e7ebd2e7ac42a9319e844f25c08" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.399839 4874 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9873c7b-f77d-4b3c-a97c-92eec7382335-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.399871 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9873c7b-f77d-4b3c-a97c-92eec7382335-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.399881 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wbbx\" (UniqueName: \"kubernetes.io/projected/e9873c7b-f77d-4b3c-a97c-92eec7382335-kube-api-access-5wbbx\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.399890 4874 
reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9873c7b-f77d-4b3c-a97c-92eec7382335-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.399898 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9873c7b-f77d-4b3c-a97c-92eec7382335-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.493950 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.516873 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.557050 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 17 16:25:13 crc kubenswrapper[4874]: E0217 16:25:13.557572 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9873c7b-f77d-4b3c-a97c-92eec7382335" containerName="cinder-api-log" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.557590 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9873c7b-f77d-4b3c-a97c-92eec7382335" containerName="cinder-api-log" Feb 17 16:25:13 crc kubenswrapper[4874]: E0217 16:25:13.557621 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9873c7b-f77d-4b3c-a97c-92eec7382335" containerName="cinder-api" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.557627 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9873c7b-f77d-4b3c-a97c-92eec7382335" containerName="cinder-api" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.557839 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9873c7b-f77d-4b3c-a97c-92eec7382335" containerName="cinder-api" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.557853 4874 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="e9873c7b-f77d-4b3c-a97c-92eec7382335" containerName="cinder-api-log" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.559049 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.561136 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.561337 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.561342 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.571685 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.706006 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zfzx\" (UniqueName: \"kubernetes.io/projected/57c836de-513c-4aca-956a-73dc02dafce8-kube-api-access-2zfzx\") pod \"cinder-api-0\" (UID: \"57c836de-513c-4aca-956a-73dc02dafce8\") " pod="openstack/cinder-api-0" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.706168 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/57c836de-513c-4aca-956a-73dc02dafce8-public-tls-certs\") pod \"cinder-api-0\" (UID: \"57c836de-513c-4aca-956a-73dc02dafce8\") " pod="openstack/cinder-api-0" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.706193 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57c836de-513c-4aca-956a-73dc02dafce8-logs\") pod \"cinder-api-0\" (UID: 
\"57c836de-513c-4aca-956a-73dc02dafce8\") " pod="openstack/cinder-api-0" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.706214 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57c836de-513c-4aca-956a-73dc02dafce8-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"57c836de-513c-4aca-956a-73dc02dafce8\") " pod="openstack/cinder-api-0" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.706237 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57c836de-513c-4aca-956a-73dc02dafce8-config-data\") pod \"cinder-api-0\" (UID: \"57c836de-513c-4aca-956a-73dc02dafce8\") " pod="openstack/cinder-api-0" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.706260 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57c836de-513c-4aca-956a-73dc02dafce8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"57c836de-513c-4aca-956a-73dc02dafce8\") " pod="openstack/cinder-api-0" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.706302 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/57c836de-513c-4aca-956a-73dc02dafce8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"57c836de-513c-4aca-956a-73dc02dafce8\") " pod="openstack/cinder-api-0" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.706484 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57c836de-513c-4aca-956a-73dc02dafce8-scripts\") pod \"cinder-api-0\" (UID: \"57c836de-513c-4aca-956a-73dc02dafce8\") " pod="openstack/cinder-api-0" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.706511 
4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57c836de-513c-4aca-956a-73dc02dafce8-config-data-custom\") pod \"cinder-api-0\" (UID: \"57c836de-513c-4aca-956a-73dc02dafce8\") " pod="openstack/cinder-api-0" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.774432 4874 scope.go:117] "RemoveContainer" containerID="29f10eef2bc5ed640272d3cde361304462de5b54b4b7e2d604b7b542c49891b5" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.789805 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.808264 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57c836de-513c-4aca-956a-73dc02dafce8-scripts\") pod \"cinder-api-0\" (UID: \"57c836de-513c-4aca-956a-73dc02dafce8\") " pod="openstack/cinder-api-0" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.808407 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57c836de-513c-4aca-956a-73dc02dafce8-config-data-custom\") pod \"cinder-api-0\" (UID: \"57c836de-513c-4aca-956a-73dc02dafce8\") " pod="openstack/cinder-api-0" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.809421 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zfzx\" (UniqueName: \"kubernetes.io/projected/57c836de-513c-4aca-956a-73dc02dafce8-kube-api-access-2zfzx\") pod \"cinder-api-0\" (UID: \"57c836de-513c-4aca-956a-73dc02dafce8\") " pod="openstack/cinder-api-0" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.809608 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/57c836de-513c-4aca-956a-73dc02dafce8-public-tls-certs\") pod \"cinder-api-0\" (UID: \"57c836de-513c-4aca-956a-73dc02dafce8\") " pod="openstack/cinder-api-0" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.810209 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57c836de-513c-4aca-956a-73dc02dafce8-logs\") pod \"cinder-api-0\" (UID: \"57c836de-513c-4aca-956a-73dc02dafce8\") " pod="openstack/cinder-api-0" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.810259 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57c836de-513c-4aca-956a-73dc02dafce8-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"57c836de-513c-4aca-956a-73dc02dafce8\") " pod="openstack/cinder-api-0" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.810302 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57c836de-513c-4aca-956a-73dc02dafce8-config-data\") pod \"cinder-api-0\" (UID: \"57c836de-513c-4aca-956a-73dc02dafce8\") " pod="openstack/cinder-api-0" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.810414 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57c836de-513c-4aca-956a-73dc02dafce8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"57c836de-513c-4aca-956a-73dc02dafce8\") " pod="openstack/cinder-api-0" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.810472 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/57c836de-513c-4aca-956a-73dc02dafce8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"57c836de-513c-4aca-956a-73dc02dafce8\") " pod="openstack/cinder-api-0" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 
16:25:13.810711 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/57c836de-513c-4aca-956a-73dc02dafce8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"57c836de-513c-4aca-956a-73dc02dafce8\") " pod="openstack/cinder-api-0" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.811334 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57c836de-513c-4aca-956a-73dc02dafce8-logs\") pod \"cinder-api-0\" (UID: \"57c836de-513c-4aca-956a-73dc02dafce8\") " pod="openstack/cinder-api-0" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.817796 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57c836de-513c-4aca-956a-73dc02dafce8-config-data-custom\") pod \"cinder-api-0\" (UID: \"57c836de-513c-4aca-956a-73dc02dafce8\") " pod="openstack/cinder-api-0" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.817820 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/57c836de-513c-4aca-956a-73dc02dafce8-public-tls-certs\") pod \"cinder-api-0\" (UID: \"57c836de-513c-4aca-956a-73dc02dafce8\") " pod="openstack/cinder-api-0" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.818155 4874 scope.go:117] "RemoveContainer" containerID="e879b25209dd125c99e9efadd707051d89cc3e7ebd2e7ac42a9319e844f25c08" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.818716 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57c836de-513c-4aca-956a-73dc02dafce8-scripts\") pod \"cinder-api-0\" (UID: \"57c836de-513c-4aca-956a-73dc02dafce8\") " pod="openstack/cinder-api-0" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.819720 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57c836de-513c-4aca-956a-73dc02dafce8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"57c836de-513c-4aca-956a-73dc02dafce8\") " pod="openstack/cinder-api-0" Feb 17 16:25:13 crc kubenswrapper[4874]: E0217 16:25:13.820642 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e879b25209dd125c99e9efadd707051d89cc3e7ebd2e7ac42a9319e844f25c08\": container with ID starting with e879b25209dd125c99e9efadd707051d89cc3e7ebd2e7ac42a9319e844f25c08 not found: ID does not exist" containerID="e879b25209dd125c99e9efadd707051d89cc3e7ebd2e7ac42a9319e844f25c08" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.820678 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e879b25209dd125c99e9efadd707051d89cc3e7ebd2e7ac42a9319e844f25c08"} err="failed to get container status \"e879b25209dd125c99e9efadd707051d89cc3e7ebd2e7ac42a9319e844f25c08\": rpc error: code = NotFound desc = could not find container \"e879b25209dd125c99e9efadd707051d89cc3e7ebd2e7ac42a9319e844f25c08\": container with ID starting with e879b25209dd125c99e9efadd707051d89cc3e7ebd2e7ac42a9319e844f25c08 not found: ID does not exist" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.820698 4874 scope.go:117] "RemoveContainer" containerID="29f10eef2bc5ed640272d3cde361304462de5b54b4b7e2d604b7b542c49891b5" Feb 17 16:25:13 crc kubenswrapper[4874]: E0217 16:25:13.821470 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29f10eef2bc5ed640272d3cde361304462de5b54b4b7e2d604b7b542c49891b5\": container with ID starting with 29f10eef2bc5ed640272d3cde361304462de5b54b4b7e2d604b7b542c49891b5 not found: ID does not exist" containerID="29f10eef2bc5ed640272d3cde361304462de5b54b4b7e2d604b7b542c49891b5" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.821610 4874 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29f10eef2bc5ed640272d3cde361304462de5b54b4b7e2d604b7b542c49891b5"} err="failed to get container status \"29f10eef2bc5ed640272d3cde361304462de5b54b4b7e2d604b7b542c49891b5\": rpc error: code = NotFound desc = could not find container \"29f10eef2bc5ed640272d3cde361304462de5b54b4b7e2d604b7b542c49891b5\": container with ID starting with 29f10eef2bc5ed640272d3cde361304462de5b54b4b7e2d604b7b542c49891b5 not found: ID does not exist" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.821711 4874 scope.go:117] "RemoveContainer" containerID="e879b25209dd125c99e9efadd707051d89cc3e7ebd2e7ac42a9319e844f25c08" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.821995 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57c836de-513c-4aca-956a-73dc02dafce8-config-data\") pod \"cinder-api-0\" (UID: \"57c836de-513c-4aca-956a-73dc02dafce8\") " pod="openstack/cinder-api-0" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.822039 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57c836de-513c-4aca-956a-73dc02dafce8-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"57c836de-513c-4aca-956a-73dc02dafce8\") " pod="openstack/cinder-api-0" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.822281 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e879b25209dd125c99e9efadd707051d89cc3e7ebd2e7ac42a9319e844f25c08"} err="failed to get container status \"e879b25209dd125c99e9efadd707051d89cc3e7ebd2e7ac42a9319e844f25c08\": rpc error: code = NotFound desc = could not find container \"e879b25209dd125c99e9efadd707051d89cc3e7ebd2e7ac42a9319e844f25c08\": container with ID starting with e879b25209dd125c99e9efadd707051d89cc3e7ebd2e7ac42a9319e844f25c08 not found: ID does not exist" Feb 17 
16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.822307 4874 scope.go:117] "RemoveContainer" containerID="29f10eef2bc5ed640272d3cde361304462de5b54b4b7e2d604b7b542c49891b5" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.823369 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29f10eef2bc5ed640272d3cde361304462de5b54b4b7e2d604b7b542c49891b5"} err="failed to get container status \"29f10eef2bc5ed640272d3cde361304462de5b54b4b7e2d604b7b542c49891b5\": rpc error: code = NotFound desc = could not find container \"29f10eef2bc5ed640272d3cde361304462de5b54b4b7e2d604b7b542c49891b5\": container with ID starting with 29f10eef2bc5ed640272d3cde361304462de5b54b4b7e2d604b7b542c49891b5 not found: ID does not exist" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.830752 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zfzx\" (UniqueName: \"kubernetes.io/projected/57c836de-513c-4aca-956a-73dc02dafce8-kube-api-access-2zfzx\") pod \"cinder-api-0\" (UID: \"57c836de-513c-4aca-956a-73dc02dafce8\") " pod="openstack/cinder-api-0" Feb 17 16:25:13 crc kubenswrapper[4874]: I0217 16:25:13.884683 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 17 16:25:14 crc kubenswrapper[4874]: I0217 16:25:14.394164 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0","Type":"ContainerStarted","Data":"da61174cd4463e433a554d53eb5b40f39e0c6bbdd8faccbf4b623f076f538f9b"} Feb 17 16:25:14 crc kubenswrapper[4874]: I0217 16:25:14.444198 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 17 16:25:14 crc kubenswrapper[4874]: W0217 16:25:14.447880 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57c836de_513c_4aca_956a_73dc02dafce8.slice/crio-e9bc0c37dec581907b255a7727c0c95f5c867653b1783c6744959fd78cb74e3f WatchSource:0}: Error finding container e9bc0c37dec581907b255a7727c0c95f5c867653b1783c6744959fd78cb74e3f: Status 404 returned error can't find the container with id e9bc0c37dec581907b255a7727c0c95f5c867653b1783c6744959fd78cb74e3f Feb 17 16:25:14 crc kubenswrapper[4874]: I0217 16:25:14.500037 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9873c7b-f77d-4b3c-a97c-92eec7382335" path="/var/lib/kubelet/pods/e9873c7b-f77d-4b3c-a97c-92eec7382335/volumes" Feb 17 16:25:15 crc kubenswrapper[4874]: I0217 16:25:15.292993 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5d8dc74b58-r76lq" Feb 17 16:25:15 crc kubenswrapper[4874]: I0217 16:25:15.304494 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5d8dc74b58-r76lq" Feb 17 16:25:15 crc kubenswrapper[4874]: I0217 16:25:15.441612 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"57c836de-513c-4aca-956a-73dc02dafce8","Type":"ContainerStarted","Data":"f0e2761b9075ad30b45f9e4986de974129c5ba2f7eeba6f6daf1ebc8ec631485"} Feb 17 16:25:15 crc kubenswrapper[4874]: 
I0217 16:25:15.441828 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"57c836de-513c-4aca-956a-73dc02dafce8","Type":"ContainerStarted","Data":"e9bc0c37dec581907b255a7727c0c95f5c867653b1783c6744959fd78cb74e3f"}
Feb 17 16:25:15 crc kubenswrapper[4874]: I0217 16:25:15.447014 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0","Type":"ContainerStarted","Data":"09239b10e4cf2ec51f9128a9d479ad47dadb97f1f9f7017e324303f6f3528e9f"}
Feb 17 16:25:16 crc kubenswrapper[4874]: I0217 16:25:16.489692 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Feb 17 16:25:16 crc kubenswrapper[4874]: I0217 16:25:16.490714 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"57c836de-513c-4aca-956a-73dc02dafce8","Type":"ContainerStarted","Data":"05f357ef1fe4f3eed24a4e0926f9fe3f11c205fb4bcda254f074bed00841bcc2"}
Feb 17 16:25:16 crc kubenswrapper[4874]: I0217 16:25:16.504209 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.504184977 podStartE2EDuration="3.504184977s" podCreationTimestamp="2026-02-17 16:25:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:25:16.487503799 +0000 UTC m=+1326.781892360" watchObservedRunningTime="2026-02-17 16:25:16.504184977 +0000 UTC m=+1326.798573538"
Feb 17 16:25:17 crc kubenswrapper[4874]: I0217 16:25:17.502600 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0","Type":"ContainerStarted","Data":"1dbbca4bbde89258084365d048d7ef0c9ea36ff9ab7f1a3b163bd87b0a130e03"}
Feb 17 16:25:17 crc kubenswrapper[4874]: I0217 16:25:17.502988 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 17 16:25:17 crc kubenswrapper[4874]: I0217 16:25:17.532802 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.274609 podStartE2EDuration="7.532781371s" podCreationTimestamp="2026-02-17 16:25:10 +0000 UTC" firstStartedPulling="2026-02-17 16:25:11.571963047 +0000 UTC m=+1321.866351608" lastFinishedPulling="2026-02-17 16:25:16.830135418 +0000 UTC m=+1327.124523979" observedRunningTime="2026-02-17 16:25:17.524251812 +0000 UTC m=+1327.818640373" watchObservedRunningTime="2026-02-17 16:25:17.532781371 +0000 UTC m=+1327.827169932"
Feb 17 16:25:18 crc kubenswrapper[4874]: I0217 16:25:18.354689 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-f6fdb9858-5k876"
Feb 17 16:25:18 crc kubenswrapper[4874]: I0217 16:25:18.363246 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-f6fdb9858-5k876"
Feb 17 16:25:18 crc kubenswrapper[4874]: I0217 16:25:18.477556 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5d8dc74b58-r76lq"]
Feb 17 16:25:18 crc kubenswrapper[4874]: I0217 16:25:18.477815 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5d8dc74b58-r76lq" podUID="04f7d26a-21c8-4a81-ac37-3c11e3d24ece" containerName="barbican-api-log" containerID="cri-o://f6905b1e8bbe14b693946a6e724f662d09a69baabb23fa1f8300291ba64f7db2" gracePeriod=30
Feb 17 16:25:18 crc kubenswrapper[4874]: I0217 16:25:18.477916 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5d8dc74b58-r76lq" podUID="04f7d26a-21c8-4a81-ac37-3c11e3d24ece" containerName="barbican-api" containerID="cri-o://9929f61397a4283c5fff5e6da228c2724d7e7101a0337081b794c787bc73ed02" gracePeriod=30
Feb 17 16:25:19 crc kubenswrapper[4874]: I0217 16:25:19.019523 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-8f8gg"
Feb 17 16:25:19 crc kubenswrapper[4874]: I0217 16:25:19.067317 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Feb 17 16:25:19 crc kubenswrapper[4874]: I0217 16:25:19.087627 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-5bm8j"]
Feb 17 16:25:19 crc kubenswrapper[4874]: I0217 16:25:19.088092 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-5bm8j" podUID="31c188f2-5f85-4364-9a94-795e11aebf64" containerName="dnsmasq-dns" containerID="cri-o://1bf4deee2095849e6b02d91891b3e225bb61d2b61b93ade2472b049c0d35eaa8" gracePeriod=10
Feb 17 16:25:19 crc kubenswrapper[4874]: I0217 16:25:19.166276 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 17 16:25:19 crc kubenswrapper[4874]: I0217 16:25:19.173876 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6b7b667979-5bm8j" podUID="31c188f2-5f85-4364-9a94-795e11aebf64" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.195:5353: connect: connection refused"
Feb 17 16:25:19 crc kubenswrapper[4874]: I0217 16:25:19.327564 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5bdc5b79b4-crwsk"
Feb 17 16:25:19 crc kubenswrapper[4874]: I0217 16:25:19.528991 4874 generic.go:334] "Generic (PLEG): container finished" podID="31c188f2-5f85-4364-9a94-795e11aebf64" containerID="1bf4deee2095849e6b02d91891b3e225bb61d2b61b93ade2472b049c0d35eaa8" exitCode=0
Feb 17 16:25:19 crc kubenswrapper[4874]: I0217 16:25:19.529429 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-5bm8j" event={"ID":"31c188f2-5f85-4364-9a94-795e11aebf64","Type":"ContainerDied","Data":"1bf4deee2095849e6b02d91891b3e225bb61d2b61b93ade2472b049c0d35eaa8"}
Feb 17 16:25:19 crc kubenswrapper[4874]: I0217 16:25:19.534990 4874 generic.go:334] "Generic (PLEG): container finished" podID="04f7d26a-21c8-4a81-ac37-3c11e3d24ece" containerID="f6905b1e8bbe14b693946a6e724f662d09a69baabb23fa1f8300291ba64f7db2" exitCode=143
Feb 17 16:25:19 crc kubenswrapper[4874]: I0217 16:25:19.535499 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="3460c3ca-c89e-4476-a9d4-0a9809b47475" containerName="cinder-scheduler" containerID="cri-o://c8e58b4c4f4851891f187358edc20ef36f780c9dc8fb8d4f488b4caac6a99bc9" gracePeriod=30
Feb 17 16:25:19 crc kubenswrapper[4874]: I0217 16:25:19.535910 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d8dc74b58-r76lq" event={"ID":"04f7d26a-21c8-4a81-ac37-3c11e3d24ece","Type":"ContainerDied","Data":"f6905b1e8bbe14b693946a6e724f662d09a69baabb23fa1f8300291ba64f7db2"}
Feb 17 16:25:19 crc kubenswrapper[4874]: I0217 16:25:19.536183 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="3460c3ca-c89e-4476-a9d4-0a9809b47475" containerName="probe" containerID="cri-o://7dcbcfbbe5c4651b24326462a324bfe638582c960cca55265f4c5d537b0f7f6f" gracePeriod=30
Feb 17 16:25:19 crc kubenswrapper[4874]: I0217 16:25:19.968607 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-5bm8j"
Feb 17 16:25:20 crc kubenswrapper[4874]: I0217 16:25:20.096999 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/31c188f2-5f85-4364-9a94-795e11aebf64-ovsdbserver-sb\") pod \"31c188f2-5f85-4364-9a94-795e11aebf64\" (UID: \"31c188f2-5f85-4364-9a94-795e11aebf64\") "
Feb 17 16:25:20 crc kubenswrapper[4874]: I0217 16:25:20.097157 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/31c188f2-5f85-4364-9a94-795e11aebf64-dns-swift-storage-0\") pod \"31c188f2-5f85-4364-9a94-795e11aebf64\" (UID: \"31c188f2-5f85-4364-9a94-795e11aebf64\") "
Feb 17 16:25:20 crc kubenswrapper[4874]: I0217 16:25:20.097271 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31c188f2-5f85-4364-9a94-795e11aebf64-config\") pod \"31c188f2-5f85-4364-9a94-795e11aebf64\" (UID: \"31c188f2-5f85-4364-9a94-795e11aebf64\") "
Feb 17 16:25:20 crc kubenswrapper[4874]: I0217 16:25:20.097307 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31c188f2-5f85-4364-9a94-795e11aebf64-dns-svc\") pod \"31c188f2-5f85-4364-9a94-795e11aebf64\" (UID: \"31c188f2-5f85-4364-9a94-795e11aebf64\") "
Feb 17 16:25:20 crc kubenswrapper[4874]: I0217 16:25:20.097362 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr9mm\" (UniqueName: \"kubernetes.io/projected/31c188f2-5f85-4364-9a94-795e11aebf64-kube-api-access-vr9mm\") pod \"31c188f2-5f85-4364-9a94-795e11aebf64\" (UID: \"31c188f2-5f85-4364-9a94-795e11aebf64\") "
Feb 17 16:25:20 crc kubenswrapper[4874]: I0217 16:25:20.097418 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/31c188f2-5f85-4364-9a94-795e11aebf64-ovsdbserver-nb\") pod \"31c188f2-5f85-4364-9a94-795e11aebf64\" (UID: \"31c188f2-5f85-4364-9a94-795e11aebf64\") "
Feb 17 16:25:20 crc kubenswrapper[4874]: I0217 16:25:20.128387 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31c188f2-5f85-4364-9a94-795e11aebf64-kube-api-access-vr9mm" (OuterVolumeSpecName: "kube-api-access-vr9mm") pod "31c188f2-5f85-4364-9a94-795e11aebf64" (UID: "31c188f2-5f85-4364-9a94-795e11aebf64"). InnerVolumeSpecName "kube-api-access-vr9mm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:25:20 crc kubenswrapper[4874]: I0217 16:25:20.185532 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31c188f2-5f85-4364-9a94-795e11aebf64-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "31c188f2-5f85-4364-9a94-795e11aebf64" (UID: "31c188f2-5f85-4364-9a94-795e11aebf64"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:25:20 crc kubenswrapper[4874]: I0217 16:25:20.190341 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-567c8c9c6c-dn66l"
Feb 17 16:25:20 crc kubenswrapper[4874]: I0217 16:25:20.200766 4874 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31c188f2-5f85-4364-9a94-795e11aebf64-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 17 16:25:20 crc kubenswrapper[4874]: I0217 16:25:20.200796 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vr9mm\" (UniqueName: \"kubernetes.io/projected/31c188f2-5f85-4364-9a94-795e11aebf64-kube-api-access-vr9mm\") on node \"crc\" DevicePath \"\""
Feb 17 16:25:20 crc kubenswrapper[4874]: I0217 16:25:20.215472 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31c188f2-5f85-4364-9a94-795e11aebf64-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "31c188f2-5f85-4364-9a94-795e11aebf64" (UID: "31c188f2-5f85-4364-9a94-795e11aebf64"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:25:20 crc kubenswrapper[4874]: I0217 16:25:20.217770 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31c188f2-5f85-4364-9a94-795e11aebf64-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "31c188f2-5f85-4364-9a94-795e11aebf64" (UID: "31c188f2-5f85-4364-9a94-795e11aebf64"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:25:20 crc kubenswrapper[4874]: I0217 16:25:20.272693 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31c188f2-5f85-4364-9a94-795e11aebf64-config" (OuterVolumeSpecName: "config") pod "31c188f2-5f85-4364-9a94-795e11aebf64" (UID: "31c188f2-5f85-4364-9a94-795e11aebf64"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:25:20 crc kubenswrapper[4874]: I0217 16:25:20.273409 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31c188f2-5f85-4364-9a94-795e11aebf64-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "31c188f2-5f85-4364-9a94-795e11aebf64" (UID: "31c188f2-5f85-4364-9a94-795e11aebf64"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:25:20 crc kubenswrapper[4874]: I0217 16:25:20.307653 4874 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/31c188f2-5f85-4364-9a94-795e11aebf64-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 17 16:25:20 crc kubenswrapper[4874]: I0217 16:25:20.307701 4874 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/31c188f2-5f85-4364-9a94-795e11aebf64-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 17 16:25:20 crc kubenswrapper[4874]: I0217 16:25:20.307712 4874 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/31c188f2-5f85-4364-9a94-795e11aebf64-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 17 16:25:20 crc kubenswrapper[4874]: I0217 16:25:20.307720 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31c188f2-5f85-4364-9a94-795e11aebf64-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:25:20 crc kubenswrapper[4874]: I0217 16:25:20.552212 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-5bm8j" event={"ID":"31c188f2-5f85-4364-9a94-795e11aebf64","Type":"ContainerDied","Data":"f32d6516144d43bd91a41b6c8cbdf72207be51902be314b638e32b6f6807d3f9"}
Feb 17 16:25:20 crc kubenswrapper[4874]: I0217 16:25:20.552430 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-5bm8j"
Feb 17 16:25:20 crc kubenswrapper[4874]: I0217 16:25:20.552466 4874 scope.go:117] "RemoveContainer" containerID="1bf4deee2095849e6b02d91891b3e225bb61d2b61b93ade2472b049c0d35eaa8"
Feb 17 16:25:20 crc kubenswrapper[4874]: I0217 16:25:20.585931 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-5bm8j"]
Feb 17 16:25:20 crc kubenswrapper[4874]: I0217 16:25:20.600867 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-5bm8j"]
Feb 17 16:25:20 crc kubenswrapper[4874]: I0217 16:25:20.633979 4874 scope.go:117] "RemoveContainer" containerID="7e4081a86b24641e8096d7e0703fe3acb49e4e4dfb43af91b26407f251e25dea"
Feb 17 16:25:21 crc kubenswrapper[4874]: I0217 16:25:21.564318 4874 generic.go:334] "Generic (PLEG): container finished" podID="3460c3ca-c89e-4476-a9d4-0a9809b47475" containerID="7dcbcfbbe5c4651b24326462a324bfe638582c960cca55265f4c5d537b0f7f6f" exitCode=0
Feb 17 16:25:21 crc kubenswrapper[4874]: I0217 16:25:21.564535 4874 generic.go:334] "Generic (PLEG): container finished" podID="3460c3ca-c89e-4476-a9d4-0a9809b47475" containerID="c8e58b4c4f4851891f187358edc20ef36f780c9dc8fb8d4f488b4caac6a99bc9" exitCode=0
Feb 17 16:25:21 crc kubenswrapper[4874]: I0217 16:25:21.564341 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3460c3ca-c89e-4476-a9d4-0a9809b47475","Type":"ContainerDied","Data":"7dcbcfbbe5c4651b24326462a324bfe638582c960cca55265f4c5d537b0f7f6f"}
Feb 17 16:25:21 crc kubenswrapper[4874]: I0217 16:25:21.564591 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3460c3ca-c89e-4476-a9d4-0a9809b47475","Type":"ContainerDied","Data":"c8e58b4c4f4851891f187358edc20ef36f780c9dc8fb8d4f488b4caac6a99bc9"}
Feb 17 16:25:21 crc kubenswrapper[4874]: I0217 16:25:21.662339 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5d8dc74b58-r76lq" podUID="04f7d26a-21c8-4a81-ac37-3c11e3d24ece" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.201:9311/healthcheck\": read tcp 10.217.0.2:59010->10.217.0.201:9311: read: connection reset by peer"
Feb 17 16:25:21 crc kubenswrapper[4874]: I0217 16:25:21.662667 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5d8dc74b58-r76lq" podUID="04f7d26a-21c8-4a81-ac37-3c11e3d24ece" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.201:9311/healthcheck\": read tcp 10.217.0.2:59026->10.217.0.201:9311: read: connection reset by peer"
Feb 17 16:25:21 crc kubenswrapper[4874]: I0217 16:25:21.697519 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-ffffff886-rsf5g"
Feb 17 16:25:21 crc kubenswrapper[4874]: I0217 16:25:21.753056 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-ffffff886-rsf5g"
Feb 17 16:25:21 crc kubenswrapper[4874]: I0217 16:25:21.873389 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 17 16:25:21 crc kubenswrapper[4874]: I0217 16:25:21.924379 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6fccd89f8f-mbtlk"
Feb 17 16:25:21 crc kubenswrapper[4874]: I0217 16:25:21.989788 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5bdc5b79b4-crwsk"]
Feb 17 16:25:21 crc kubenswrapper[4874]: I0217 16:25:21.990020 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5bdc5b79b4-crwsk" podUID="020a97a8-7c87-4098-a559-0584c148fbef" containerName="neutron-api" containerID="cri-o://881abe776c2cd1d2fab8953a0b4b3a0b79ac042390adfc97d6a55128a6da4f1f" gracePeriod=30
Feb 17 16:25:21 crc kubenswrapper[4874]: I0217 16:25:21.990428 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5bdc5b79b4-crwsk" podUID="020a97a8-7c87-4098-a559-0584c148fbef" containerName="neutron-httpd" containerID="cri-o://f2e33688b30c443d773430732d2e5c8308fe165bbe45e37c812a398e3815bcbc" gracePeriod=30
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.052929 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3460c3ca-c89e-4476-a9d4-0a9809b47475-etc-machine-id\") pod \"3460c3ca-c89e-4476-a9d4-0a9809b47475\" (UID: \"3460c3ca-c89e-4476-a9d4-0a9809b47475\") "
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.053047 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3460c3ca-c89e-4476-a9d4-0a9809b47475-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3460c3ca-c89e-4476-a9d4-0a9809b47475" (UID: "3460c3ca-c89e-4476-a9d4-0a9809b47475"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.053367 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3460c3ca-c89e-4476-a9d4-0a9809b47475-config-data\") pod \"3460c3ca-c89e-4476-a9d4-0a9809b47475\" (UID: \"3460c3ca-c89e-4476-a9d4-0a9809b47475\") "
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.053517 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3460c3ca-c89e-4476-a9d4-0a9809b47475-scripts\") pod \"3460c3ca-c89e-4476-a9d4-0a9809b47475\" (UID: \"3460c3ca-c89e-4476-a9d4-0a9809b47475\") "
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.053652 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3460c3ca-c89e-4476-a9d4-0a9809b47475-config-data-custom\") pod \"3460c3ca-c89e-4476-a9d4-0a9809b47475\" (UID: \"3460c3ca-c89e-4476-a9d4-0a9809b47475\") "
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.053809 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cqd7\" (UniqueName: \"kubernetes.io/projected/3460c3ca-c89e-4476-a9d4-0a9809b47475-kube-api-access-9cqd7\") pod \"3460c3ca-c89e-4476-a9d4-0a9809b47475\" (UID: \"3460c3ca-c89e-4476-a9d4-0a9809b47475\") "
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.054013 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3460c3ca-c89e-4476-a9d4-0a9809b47475-combined-ca-bundle\") pod \"3460c3ca-c89e-4476-a9d4-0a9809b47475\" (UID: \"3460c3ca-c89e-4476-a9d4-0a9809b47475\") "
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.054738 4874 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3460c3ca-c89e-4476-a9d4-0a9809b47475-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.067330 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3460c3ca-c89e-4476-a9d4-0a9809b47475-scripts" (OuterVolumeSpecName: "scripts") pod "3460c3ca-c89e-4476-a9d4-0a9809b47475" (UID: "3460c3ca-c89e-4476-a9d4-0a9809b47475"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.068558 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3460c3ca-c89e-4476-a9d4-0a9809b47475-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3460c3ca-c89e-4476-a9d4-0a9809b47475" (UID: "3460c3ca-c89e-4476-a9d4-0a9809b47475"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.096379 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3460c3ca-c89e-4476-a9d4-0a9809b47475-kube-api-access-9cqd7" (OuterVolumeSpecName: "kube-api-access-9cqd7") pod "3460c3ca-c89e-4476-a9d4-0a9809b47475" (UID: "3460c3ca-c89e-4476-a9d4-0a9809b47475"). InnerVolumeSpecName "kube-api-access-9cqd7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.121538 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3460c3ca-c89e-4476-a9d4-0a9809b47475-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3460c3ca-c89e-4476-a9d4-0a9809b47475" (UID: "3460c3ca-c89e-4476-a9d4-0a9809b47475"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.157295 4874 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3460c3ca-c89e-4476-a9d4-0a9809b47475-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.157332 4874 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3460c3ca-c89e-4476-a9d4-0a9809b47475-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.157345 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cqd7\" (UniqueName: \"kubernetes.io/projected/3460c3ca-c89e-4476-a9d4-0a9809b47475-kube-api-access-9cqd7\") on node \"crc\" DevicePath \"\""
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.157356 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3460c3ca-c89e-4476-a9d4-0a9809b47475-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.199558 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3460c3ca-c89e-4476-a9d4-0a9809b47475-config-data" (OuterVolumeSpecName: "config-data") pod "3460c3ca-c89e-4476-a9d4-0a9809b47475" (UID: "3460c3ca-c89e-4476-a9d4-0a9809b47475"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.264053 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3460c3ca-c89e-4476-a9d4-0a9809b47475-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.396583 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5d8dc74b58-r76lq"
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.500168 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31c188f2-5f85-4364-9a94-795e11aebf64" path="/var/lib/kubelet/pods/31c188f2-5f85-4364-9a94-795e11aebf64/volumes"
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.573315 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04f7d26a-21c8-4a81-ac37-3c11e3d24ece-combined-ca-bundle\") pod \"04f7d26a-21c8-4a81-ac37-3c11e3d24ece\" (UID: \"04f7d26a-21c8-4a81-ac37-3c11e3d24ece\") "
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.573461 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fd4jp\" (UniqueName: \"kubernetes.io/projected/04f7d26a-21c8-4a81-ac37-3c11e3d24ece-kube-api-access-fd4jp\") pod \"04f7d26a-21c8-4a81-ac37-3c11e3d24ece\" (UID: \"04f7d26a-21c8-4a81-ac37-3c11e3d24ece\") "
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.573531 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04f7d26a-21c8-4a81-ac37-3c11e3d24ece-logs\") pod \"04f7d26a-21c8-4a81-ac37-3c11e3d24ece\" (UID: \"04f7d26a-21c8-4a81-ac37-3c11e3d24ece\") "
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.573571 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04f7d26a-21c8-4a81-ac37-3c11e3d24ece-config-data-custom\") pod \"04f7d26a-21c8-4a81-ac37-3c11e3d24ece\" (UID: \"04f7d26a-21c8-4a81-ac37-3c11e3d24ece\") "
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.573691 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04f7d26a-21c8-4a81-ac37-3c11e3d24ece-config-data\") pod \"04f7d26a-21c8-4a81-ac37-3c11e3d24ece\" (UID: \"04f7d26a-21c8-4a81-ac37-3c11e3d24ece\") "
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.577197 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04f7d26a-21c8-4a81-ac37-3c11e3d24ece-logs" (OuterVolumeSpecName: "logs") pod "04f7d26a-21c8-4a81-ac37-3c11e3d24ece" (UID: "04f7d26a-21c8-4a81-ac37-3c11e3d24ece"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.579726 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04f7d26a-21c8-4a81-ac37-3c11e3d24ece-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "04f7d26a-21c8-4a81-ac37-3c11e3d24ece" (UID: "04f7d26a-21c8-4a81-ac37-3c11e3d24ece"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.582001 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04f7d26a-21c8-4a81-ac37-3c11e3d24ece-kube-api-access-fd4jp" (OuterVolumeSpecName: "kube-api-access-fd4jp") pod "04f7d26a-21c8-4a81-ac37-3c11e3d24ece" (UID: "04f7d26a-21c8-4a81-ac37-3c11e3d24ece"). InnerVolumeSpecName "kube-api-access-fd4jp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.618422 4874 generic.go:334] "Generic (PLEG): container finished" podID="04f7d26a-21c8-4a81-ac37-3c11e3d24ece" containerID="9929f61397a4283c5fff5e6da228c2724d7e7101a0337081b794c787bc73ed02" exitCode=0
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.618490 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5d8dc74b58-r76lq"
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.618536 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d8dc74b58-r76lq" event={"ID":"04f7d26a-21c8-4a81-ac37-3c11e3d24ece","Type":"ContainerDied","Data":"9929f61397a4283c5fff5e6da228c2724d7e7101a0337081b794c787bc73ed02"}
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.618568 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d8dc74b58-r76lq" event={"ID":"04f7d26a-21c8-4a81-ac37-3c11e3d24ece","Type":"ContainerDied","Data":"f1845d17e4821069b1e33e56b2e8cff874de1a28377872942af9a8826f57f06e"}
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.618586 4874 scope.go:117] "RemoveContainer" containerID="9929f61397a4283c5fff5e6da228c2724d7e7101a0337081b794c787bc73ed02"
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.622690 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04f7d26a-21c8-4a81-ac37-3c11e3d24ece-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04f7d26a-21c8-4a81-ac37-3c11e3d24ece" (UID: "04f7d26a-21c8-4a81-ac37-3c11e3d24ece"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.630368 4874 generic.go:334] "Generic (PLEG): container finished" podID="020a97a8-7c87-4098-a559-0584c148fbef" containerID="f2e33688b30c443d773430732d2e5c8308fe165bbe45e37c812a398e3815bcbc" exitCode=0
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.630439 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bdc5b79b4-crwsk" event={"ID":"020a97a8-7c87-4098-a559-0584c148fbef","Type":"ContainerDied","Data":"f2e33688b30c443d773430732d2e5c8308fe165bbe45e37c812a398e3815bcbc"}
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.642312 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.642804 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3460c3ca-c89e-4476-a9d4-0a9809b47475","Type":"ContainerDied","Data":"685cdad5557658232d2565c470b1539711c9507fd6a5d9b0d95034873793556e"}
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.655064 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04f7d26a-21c8-4a81-ac37-3c11e3d24ece-config-data" (OuterVolumeSpecName: "config-data") pod "04f7d26a-21c8-4a81-ac37-3c11e3d24ece" (UID: "04f7d26a-21c8-4a81-ac37-3c11e3d24ece"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.676277 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04f7d26a-21c8-4a81-ac37-3c11e3d24ece-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.676311 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fd4jp\" (UniqueName: \"kubernetes.io/projected/04f7d26a-21c8-4a81-ac37-3c11e3d24ece-kube-api-access-fd4jp\") on node \"crc\" DevicePath \"\""
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.676325 4874 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04f7d26a-21c8-4a81-ac37-3c11e3d24ece-logs\") on node \"crc\" DevicePath \"\""
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.676337 4874 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04f7d26a-21c8-4a81-ac37-3c11e3d24ece-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.676345 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04f7d26a-21c8-4a81-ac37-3c11e3d24ece-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.677533 4874 scope.go:117] "RemoveContainer" containerID="f6905b1e8bbe14b693946a6e724f662d09a69baabb23fa1f8300291ba64f7db2"
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.682731 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.702102 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.730131 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 17 16:25:22 crc kubenswrapper[4874]: E0217 16:25:22.730841 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31c188f2-5f85-4364-9a94-795e11aebf64" containerName="init"
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.730857 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="31c188f2-5f85-4364-9a94-795e11aebf64" containerName="init"
Feb 17 16:25:22 crc kubenswrapper[4874]: E0217 16:25:22.730866 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04f7d26a-21c8-4a81-ac37-3c11e3d24ece" containerName="barbican-api-log"
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.730872 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="04f7d26a-21c8-4a81-ac37-3c11e3d24ece" containerName="barbican-api-log"
Feb 17 16:25:22 crc kubenswrapper[4874]: E0217 16:25:22.730887 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04f7d26a-21c8-4a81-ac37-3c11e3d24ece" containerName="barbican-api"
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.730893 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="04f7d26a-21c8-4a81-ac37-3c11e3d24ece" containerName="barbican-api"
Feb 17 16:25:22 crc kubenswrapper[4874]: E0217 16:25:22.730912 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3460c3ca-c89e-4476-a9d4-0a9809b47475" containerName="probe"
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.730918 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="3460c3ca-c89e-4476-a9d4-0a9809b47475" containerName="probe"
Feb 17 16:25:22 crc kubenswrapper[4874]: E0217 16:25:22.730936 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31c188f2-5f85-4364-9a94-795e11aebf64" containerName="dnsmasq-dns"
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.730942 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="31c188f2-5f85-4364-9a94-795e11aebf64" containerName="dnsmasq-dns"
Feb 17 16:25:22 crc kubenswrapper[4874]: E0217 16:25:22.730971 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3460c3ca-c89e-4476-a9d4-0a9809b47475" containerName="cinder-scheduler"
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.730977 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="3460c3ca-c89e-4476-a9d4-0a9809b47475" containerName="cinder-scheduler"
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.731194 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="04f7d26a-21c8-4a81-ac37-3c11e3d24ece" containerName="barbican-api"
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.731212 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="3460c3ca-c89e-4476-a9d4-0a9809b47475" containerName="probe"
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.731222 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="3460c3ca-c89e-4476-a9d4-0a9809b47475" containerName="cinder-scheduler"
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.731232 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="04f7d26a-21c8-4a81-ac37-3c11e3d24ece" containerName="barbican-api-log"
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.731245 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="31c188f2-5f85-4364-9a94-795e11aebf64" containerName="dnsmasq-dns"
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.732333 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.735376 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.736739 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.737957 4874 scope.go:117] "RemoveContainer" containerID="9929f61397a4283c5fff5e6da228c2724d7e7101a0337081b794c787bc73ed02"
Feb 17 16:25:22 crc kubenswrapper[4874]: E0217 16:25:22.738509 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9929f61397a4283c5fff5e6da228c2724d7e7101a0337081b794c787bc73ed02\": container with ID starting with 9929f61397a4283c5fff5e6da228c2724d7e7101a0337081b794c787bc73ed02 not found: ID does not exist" containerID="9929f61397a4283c5fff5e6da228c2724d7e7101a0337081b794c787bc73ed02"
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.738529 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9929f61397a4283c5fff5e6da228c2724d7e7101a0337081b794c787bc73ed02"} err="failed to get container status \"9929f61397a4283c5fff5e6da228c2724d7e7101a0337081b794c787bc73ed02\": rpc error: code = NotFound desc = could not find container \"9929f61397a4283c5fff5e6da228c2724d7e7101a0337081b794c787bc73ed02\": container with ID starting with 9929f61397a4283c5fff5e6da228c2724d7e7101a0337081b794c787bc73ed02 not found: ID does not exist"
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.738548 4874 scope.go:117] "RemoveContainer" containerID="f6905b1e8bbe14b693946a6e724f662d09a69baabb23fa1f8300291ba64f7db2"
Feb 17 16:25:22 crc kubenswrapper[4874]: E0217 16:25:22.742452 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6905b1e8bbe14b693946a6e724f662d09a69baabb23fa1f8300291ba64f7db2\": container with ID starting with f6905b1e8bbe14b693946a6e724f662d09a69baabb23fa1f8300291ba64f7db2 not found: ID does not exist" containerID="f6905b1e8bbe14b693946a6e724f662d09a69baabb23fa1f8300291ba64f7db2"
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.742496 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6905b1e8bbe14b693946a6e724f662d09a69baabb23fa1f8300291ba64f7db2"} err="failed to get container status \"f6905b1e8bbe14b693946a6e724f662d09a69baabb23fa1f8300291ba64f7db2\": rpc error: code = NotFound desc = could not find container \"f6905b1e8bbe14b693946a6e724f662d09a69baabb23fa1f8300291ba64f7db2\": container with ID starting with f6905b1e8bbe14b693946a6e724f662d09a69baabb23fa1f8300291ba64f7db2 not found: ID does not exist"
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.742510 4874 scope.go:117] "RemoveContainer" containerID="7dcbcfbbe5c4651b24326462a324bfe638582c960cca55265f4c5d537b0f7f6f"
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.778243 4874 scope.go:117] "RemoveContainer" containerID="c8e58b4c4f4851891f187358edc20ef36f780c9dc8fb8d4f488b4caac6a99bc9"
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.886020 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76e7d623-3e9d-43fb-9413-5bb3b1b2aa33-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"76e7d623-3e9d-43fb-9413-5bb3b1b2aa33\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.886171 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76e7d623-3e9d-43fb-9413-5bb3b1b2aa33-config-data\") pod \"cinder-scheduler-0\" (UID: \"76e7d623-3e9d-43fb-9413-5bb3b1b2aa33\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.886207 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkbhl\" (UniqueName: \"kubernetes.io/projected/76e7d623-3e9d-43fb-9413-5bb3b1b2aa33-kube-api-access-jkbhl\") pod \"cinder-scheduler-0\" (UID: \"76e7d623-3e9d-43fb-9413-5bb3b1b2aa33\") " pod="openstack/cinder-scheduler-0" Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.886290 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76e7d623-3e9d-43fb-9413-5bb3b1b2aa33-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"76e7d623-3e9d-43fb-9413-5bb3b1b2aa33\") " pod="openstack/cinder-scheduler-0" Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.886388 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76e7d623-3e9d-43fb-9413-5bb3b1b2aa33-scripts\") pod \"cinder-scheduler-0\" (UID: \"76e7d623-3e9d-43fb-9413-5bb3b1b2aa33\") " pod="openstack/cinder-scheduler-0" Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.886456 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76e7d623-3e9d-43fb-9413-5bb3b1b2aa33-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"76e7d623-3e9d-43fb-9413-5bb3b1b2aa33\") " pod="openstack/cinder-scheduler-0" Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.979557 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5d8dc74b58-r76lq"] Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.988543 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76e7d623-3e9d-43fb-9413-5bb3b1b2aa33-config-data\") pod \"cinder-scheduler-0\" (UID: 
\"76e7d623-3e9d-43fb-9413-5bb3b1b2aa33\") " pod="openstack/cinder-scheduler-0" Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.988599 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkbhl\" (UniqueName: \"kubernetes.io/projected/76e7d623-3e9d-43fb-9413-5bb3b1b2aa33-kube-api-access-jkbhl\") pod \"cinder-scheduler-0\" (UID: \"76e7d623-3e9d-43fb-9413-5bb3b1b2aa33\") " pod="openstack/cinder-scheduler-0" Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.988659 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76e7d623-3e9d-43fb-9413-5bb3b1b2aa33-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"76e7d623-3e9d-43fb-9413-5bb3b1b2aa33\") " pod="openstack/cinder-scheduler-0" Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.988715 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76e7d623-3e9d-43fb-9413-5bb3b1b2aa33-scripts\") pod \"cinder-scheduler-0\" (UID: \"76e7d623-3e9d-43fb-9413-5bb3b1b2aa33\") " pod="openstack/cinder-scheduler-0" Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.988744 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76e7d623-3e9d-43fb-9413-5bb3b1b2aa33-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"76e7d623-3e9d-43fb-9413-5bb3b1b2aa33\") " pod="openstack/cinder-scheduler-0" Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.988835 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76e7d623-3e9d-43fb-9413-5bb3b1b2aa33-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"76e7d623-3e9d-43fb-9413-5bb3b1b2aa33\") " pod="openstack/cinder-scheduler-0" Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.988827 4874 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76e7d623-3e9d-43fb-9413-5bb3b1b2aa33-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"76e7d623-3e9d-43fb-9413-5bb3b1b2aa33\") " pod="openstack/cinder-scheduler-0" Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.992945 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76e7d623-3e9d-43fb-9413-5bb3b1b2aa33-scripts\") pod \"cinder-scheduler-0\" (UID: \"76e7d623-3e9d-43fb-9413-5bb3b1b2aa33\") " pod="openstack/cinder-scheduler-0" Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.996564 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76e7d623-3e9d-43fb-9413-5bb3b1b2aa33-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"76e7d623-3e9d-43fb-9413-5bb3b1b2aa33\") " pod="openstack/cinder-scheduler-0" Feb 17 16:25:22 crc kubenswrapper[4874]: I0217 16:25:22.997308 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76e7d623-3e9d-43fb-9413-5bb3b1b2aa33-config-data\") pod \"cinder-scheduler-0\" (UID: \"76e7d623-3e9d-43fb-9413-5bb3b1b2aa33\") " pod="openstack/cinder-scheduler-0" Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 16:25:23.000666 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76e7d623-3e9d-43fb-9413-5bb3b1b2aa33-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"76e7d623-3e9d-43fb-9413-5bb3b1b2aa33\") " pod="openstack/cinder-scheduler-0" Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 16:25:23.006552 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5d8dc74b58-r76lq"] Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 16:25:23.010505 4874 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jkbhl\" (UniqueName: \"kubernetes.io/projected/76e7d623-3e9d-43fb-9413-5bb3b1b2aa33-kube-api-access-jkbhl\") pod \"cinder-scheduler-0\" (UID: \"76e7d623-3e9d-43fb-9413-5bb3b1b2aa33\") " pod="openstack/cinder-scheduler-0" Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 16:25:23.058490 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 16:25:23.247131 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 16:25:23.249122 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 16:25:23.252683 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-nrgcq" Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 16:25:23.253523 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 16:25:23.253754 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 16:25:23.259894 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 16:25:23.402715 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/39653e8c-f92f-4783-b439-150dcdb1a8a3-openstack-config\") pod \"openstackclient\" (UID: \"39653e8c-f92f-4783-b439-150dcdb1a8a3\") " pod="openstack/openstackclient" Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 16:25:23.403011 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-df9s2\" (UniqueName: \"kubernetes.io/projected/39653e8c-f92f-4783-b439-150dcdb1a8a3-kube-api-access-df9s2\") pod \"openstackclient\" (UID: \"39653e8c-f92f-4783-b439-150dcdb1a8a3\") " pod="openstack/openstackclient" Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 16:25:23.403135 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39653e8c-f92f-4783-b439-150dcdb1a8a3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"39653e8c-f92f-4783-b439-150dcdb1a8a3\") " pod="openstack/openstackclient" Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 16:25:23.403215 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/39653e8c-f92f-4783-b439-150dcdb1a8a3-openstack-config-secret\") pod \"openstackclient\" (UID: \"39653e8c-f92f-4783-b439-150dcdb1a8a3\") " pod="openstack/openstackclient" Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 16:25:23.505593 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/39653e8c-f92f-4783-b439-150dcdb1a8a3-openstack-config\") pod \"openstackclient\" (UID: \"39653e8c-f92f-4783-b439-150dcdb1a8a3\") " pod="openstack/openstackclient" Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 16:25:23.505764 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df9s2\" (UniqueName: \"kubernetes.io/projected/39653e8c-f92f-4783-b439-150dcdb1a8a3-kube-api-access-df9s2\") pod \"openstackclient\" (UID: \"39653e8c-f92f-4783-b439-150dcdb1a8a3\") " pod="openstack/openstackclient" Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 16:25:23.505831 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/39653e8c-f92f-4783-b439-150dcdb1a8a3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"39653e8c-f92f-4783-b439-150dcdb1a8a3\") " pod="openstack/openstackclient" Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 16:25:23.505881 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/39653e8c-f92f-4783-b439-150dcdb1a8a3-openstack-config-secret\") pod \"openstackclient\" (UID: \"39653e8c-f92f-4783-b439-150dcdb1a8a3\") " pod="openstack/openstackclient" Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 16:25:23.506456 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/39653e8c-f92f-4783-b439-150dcdb1a8a3-openstack-config\") pod \"openstackclient\" (UID: \"39653e8c-f92f-4783-b439-150dcdb1a8a3\") " pod="openstack/openstackclient" Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 16:25:23.510802 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39653e8c-f92f-4783-b439-150dcdb1a8a3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"39653e8c-f92f-4783-b439-150dcdb1a8a3\") " pod="openstack/openstackclient" Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 16:25:23.514469 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/39653e8c-f92f-4783-b439-150dcdb1a8a3-openstack-config-secret\") pod \"openstackclient\" (UID: \"39653e8c-f92f-4783-b439-150dcdb1a8a3\") " pod="openstack/openstackclient" Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 16:25:23.523317 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df9s2\" (UniqueName: \"kubernetes.io/projected/39653e8c-f92f-4783-b439-150dcdb1a8a3-kube-api-access-df9s2\") pod \"openstackclient\" (UID: \"39653e8c-f92f-4783-b439-150dcdb1a8a3\") " 
pod="openstack/openstackclient" Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 16:25:23.603510 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 16:25:23.608634 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 16:25:23.622890 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 16:25:23.663450 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 16:25:23.665183 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 16:25:23.679272 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 16:25:23.696145 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 17 16:25:23 crc kubenswrapper[4874]: W0217 16:25:23.704380 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76e7d623_3e9d_43fb_9413_5bb3b1b2aa33.slice/crio-5f83224f52a66204ebaeb96b1fc206c5efdfd295cff8f2aba344ecfdac6bcf00 WatchSource:0}: Error finding container 5f83224f52a66204ebaeb96b1fc206c5efdfd295cff8f2aba344ecfdac6bcf00: Status 404 returned error can't find the container with id 5f83224f52a66204ebaeb96b1fc206c5efdfd295cff8f2aba344ecfdac6bcf00 Feb 17 16:25:23 crc kubenswrapper[4874]: E0217 16:25:23.788058 4874 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 17 16:25:23 crc kubenswrapper[4874]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_openstackclient_openstack_39653e8c-f92f-4783-b439-150dcdb1a8a3_0(86dc3c2d8e381a15c62724201602a6de694a814e97d2f1caa6fe8203cc991eb9): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"86dc3c2d8e381a15c62724201602a6de694a814e97d2f1caa6fe8203cc991eb9" Netns:"/var/run/netns/b78ef955-00bf-40bb-bc62-1350598d55ef" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=86dc3c2d8e381a15c62724201602a6de694a814e97d2f1caa6fe8203cc991eb9;K8S_POD_UID=39653e8c-f92f-4783-b439-150dcdb1a8a3" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/39653e8c-f92f-4783-b439-150dcdb1a8a3]: expected pod UID "39653e8c-f92f-4783-b439-150dcdb1a8a3" but got "ad509da0-c1a5-4dee-828c-783853098ee5" from Kube API Feb 17 16:25:23 crc kubenswrapper[4874]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 17 16:25:23 crc kubenswrapper[4874]: > Feb 17 16:25:23 crc kubenswrapper[4874]: E0217 16:25:23.788135 4874 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 17 16:25:23 crc kubenswrapper[4874]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_39653e8c-f92f-4783-b439-150dcdb1a8a3_0(86dc3c2d8e381a15c62724201602a6de694a814e97d2f1caa6fe8203cc991eb9): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request 
failed with status 400: 'ContainerID:"86dc3c2d8e381a15c62724201602a6de694a814e97d2f1caa6fe8203cc991eb9" Netns:"/var/run/netns/b78ef955-00bf-40bb-bc62-1350598d55ef" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=86dc3c2d8e381a15c62724201602a6de694a814e97d2f1caa6fe8203cc991eb9;K8S_POD_UID=39653e8c-f92f-4783-b439-150dcdb1a8a3" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/39653e8c-f92f-4783-b439-150dcdb1a8a3]: expected pod UID "39653e8c-f92f-4783-b439-150dcdb1a8a3" but got "ad509da0-c1a5-4dee-828c-783853098ee5" from Kube API Feb 17 16:25:23 crc kubenswrapper[4874]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 17 16:25:23 crc kubenswrapper[4874]: > pod="openstack/openstackclient" Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 16:25:23.812939 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ad509da0-c1a5-4dee-828c-783853098ee5-openstack-config-secret\") pod \"openstackclient\" (UID: \"ad509da0-c1a5-4dee-828c-783853098ee5\") " pod="openstack/openstackclient" Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 16:25:23.813034 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad509da0-c1a5-4dee-828c-783853098ee5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ad509da0-c1a5-4dee-828c-783853098ee5\") " pod="openstack/openstackclient" Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 
16:25:23.813067 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mzhm\" (UniqueName: \"kubernetes.io/projected/ad509da0-c1a5-4dee-828c-783853098ee5-kube-api-access-7mzhm\") pod \"openstackclient\" (UID: \"ad509da0-c1a5-4dee-828c-783853098ee5\") " pod="openstack/openstackclient" Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 16:25:23.813188 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ad509da0-c1a5-4dee-828c-783853098ee5-openstack-config\") pod \"openstackclient\" (UID: \"ad509da0-c1a5-4dee-828c-783853098ee5\") " pod="openstack/openstackclient" Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 16:25:23.915679 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ad509da0-c1a5-4dee-828c-783853098ee5-openstack-config\") pod \"openstackclient\" (UID: \"ad509da0-c1a5-4dee-828c-783853098ee5\") " pod="openstack/openstackclient" Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 16:25:23.915995 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ad509da0-c1a5-4dee-828c-783853098ee5-openstack-config-secret\") pod \"openstackclient\" (UID: \"ad509da0-c1a5-4dee-828c-783853098ee5\") " pod="openstack/openstackclient" Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 16:25:23.916226 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad509da0-c1a5-4dee-828c-783853098ee5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ad509da0-c1a5-4dee-828c-783853098ee5\") " pod="openstack/openstackclient" Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 16:25:23.916362 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-7mzhm\" (UniqueName: \"kubernetes.io/projected/ad509da0-c1a5-4dee-828c-783853098ee5-kube-api-access-7mzhm\") pod \"openstackclient\" (UID: \"ad509da0-c1a5-4dee-828c-783853098ee5\") " pod="openstack/openstackclient" Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 16:25:23.917731 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ad509da0-c1a5-4dee-828c-783853098ee5-openstack-config\") pod \"openstackclient\" (UID: \"ad509da0-c1a5-4dee-828c-783853098ee5\") " pod="openstack/openstackclient" Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 16:25:23.922182 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad509da0-c1a5-4dee-828c-783853098ee5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ad509da0-c1a5-4dee-828c-783853098ee5\") " pod="openstack/openstackclient" Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 16:25:23.927407 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ad509da0-c1a5-4dee-828c-783853098ee5-openstack-config-secret\") pod \"openstackclient\" (UID: \"ad509da0-c1a5-4dee-828c-783853098ee5\") " pod="openstack/openstackclient" Feb 17 16:25:23 crc kubenswrapper[4874]: I0217 16:25:23.934203 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mzhm\" (UniqueName: \"kubernetes.io/projected/ad509da0-c1a5-4dee-828c-783853098ee5-kube-api-access-7mzhm\") pod \"openstackclient\" (UID: \"ad509da0-c1a5-4dee-828c-783853098ee5\") " pod="openstack/openstackclient" Feb 17 16:25:24 crc kubenswrapper[4874]: I0217 16:25:24.046649 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 17 16:25:24 crc kubenswrapper[4874]: I0217 16:25:24.470991 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04f7d26a-21c8-4a81-ac37-3c11e3d24ece" path="/var/lib/kubelet/pods/04f7d26a-21c8-4a81-ac37-3c11e3d24ece/volumes" Feb 17 16:25:24 crc kubenswrapper[4874]: I0217 16:25:24.472395 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3460c3ca-c89e-4476-a9d4-0a9809b47475" path="/var/lib/kubelet/pods/3460c3ca-c89e-4476-a9d4-0a9809b47475/volumes" Feb 17 16:25:24 crc kubenswrapper[4874]: I0217 16:25:24.590104 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 17 16:25:24 crc kubenswrapper[4874]: I0217 16:25:24.722493 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"ad509da0-c1a5-4dee-828c-783853098ee5","Type":"ContainerStarted","Data":"50f1d6cd3558676f6f3a52349dbfce705e61ecbc2cc2914c7db93f847cc5d004"} Feb 17 16:25:24 crc kubenswrapper[4874]: I0217 16:25:24.725019 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 17 16:25:24 crc kubenswrapper[4874]: I0217 16:25:24.725069 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"76e7d623-3e9d-43fb-9413-5bb3b1b2aa33","Type":"ContainerStarted","Data":"86708991a3fab1ba9836f7b4471d3c2d68bced08a6d0accfd4d0bd1340147373"} Feb 17 16:25:24 crc kubenswrapper[4874]: I0217 16:25:24.725127 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"76e7d623-3e9d-43fb-9413-5bb3b1b2aa33","Type":"ContainerStarted","Data":"5f83224f52a66204ebaeb96b1fc206c5efdfd295cff8f2aba344ecfdac6bcf00"} Feb 17 16:25:24 crc kubenswrapper[4874]: I0217 16:25:24.737365 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 17 16:25:24 crc kubenswrapper[4874]: I0217 16:25:24.742419 4874 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="39653e8c-f92f-4783-b439-150dcdb1a8a3" podUID="ad509da0-c1a5-4dee-828c-783853098ee5" Feb 17 16:25:24 crc kubenswrapper[4874]: I0217 16:25:24.840596 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39653e8c-f92f-4783-b439-150dcdb1a8a3-combined-ca-bundle\") pod \"39653e8c-f92f-4783-b439-150dcdb1a8a3\" (UID: \"39653e8c-f92f-4783-b439-150dcdb1a8a3\") " Feb 17 16:25:24 crc kubenswrapper[4874]: I0217 16:25:24.840979 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/39653e8c-f92f-4783-b439-150dcdb1a8a3-openstack-config-secret\") pod \"39653e8c-f92f-4783-b439-150dcdb1a8a3\" (UID: \"39653e8c-f92f-4783-b439-150dcdb1a8a3\") " Feb 17 16:25:24 crc kubenswrapper[4874]: I0217 16:25:24.841102 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df9s2\" (UniqueName: \"kubernetes.io/projected/39653e8c-f92f-4783-b439-150dcdb1a8a3-kube-api-access-df9s2\") pod \"39653e8c-f92f-4783-b439-150dcdb1a8a3\" (UID: \"39653e8c-f92f-4783-b439-150dcdb1a8a3\") " Feb 17 16:25:24 crc kubenswrapper[4874]: I0217 16:25:24.841362 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/39653e8c-f92f-4783-b439-150dcdb1a8a3-openstack-config\") pod \"39653e8c-f92f-4783-b439-150dcdb1a8a3\" (UID: \"39653e8c-f92f-4783-b439-150dcdb1a8a3\") " Feb 17 16:25:24 crc kubenswrapper[4874]: I0217 16:25:24.841947 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/39653e8c-f92f-4783-b439-150dcdb1a8a3-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "39653e8c-f92f-4783-b439-150dcdb1a8a3" (UID: "39653e8c-f92f-4783-b439-150dcdb1a8a3"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:25:24 crc kubenswrapper[4874]: I0217 16:25:24.845295 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39653e8c-f92f-4783-b439-150dcdb1a8a3-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "39653e8c-f92f-4783-b439-150dcdb1a8a3" (UID: "39653e8c-f92f-4783-b439-150dcdb1a8a3"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:24 crc kubenswrapper[4874]: I0217 16:25:24.846973 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39653e8c-f92f-4783-b439-150dcdb1a8a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39653e8c-f92f-4783-b439-150dcdb1a8a3" (UID: "39653e8c-f92f-4783-b439-150dcdb1a8a3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:24 crc kubenswrapper[4874]: I0217 16:25:24.847240 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39653e8c-f92f-4783-b439-150dcdb1a8a3-kube-api-access-df9s2" (OuterVolumeSpecName: "kube-api-access-df9s2") pod "39653e8c-f92f-4783-b439-150dcdb1a8a3" (UID: "39653e8c-f92f-4783-b439-150dcdb1a8a3"). InnerVolumeSpecName "kube-api-access-df9s2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:25:24 crc kubenswrapper[4874]: I0217 16:25:24.943623 4874 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/39653e8c-f92f-4783-b439-150dcdb1a8a3-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:24 crc kubenswrapper[4874]: I0217 16:25:24.943659 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39653e8c-f92f-4783-b439-150dcdb1a8a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:24 crc kubenswrapper[4874]: I0217 16:25:24.943672 4874 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/39653e8c-f92f-4783-b439-150dcdb1a8a3-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:24 crc kubenswrapper[4874]: I0217 16:25:24.943683 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-df9s2\" (UniqueName: \"kubernetes.io/projected/39653e8c-f92f-4783-b439-150dcdb1a8a3-kube-api-access-df9s2\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:25 crc kubenswrapper[4874]: I0217 16:25:25.740980 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 17 16:25:25 crc kubenswrapper[4874]: I0217 16:25:25.741687 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"76e7d623-3e9d-43fb-9413-5bb3b1b2aa33","Type":"ContainerStarted","Data":"0a2a5d41d7c98915ab22e6ee393f6db9d6bc77f5b9850ca0ceeac46d817a744e"} Feb 17 16:25:25 crc kubenswrapper[4874]: I0217 16:25:25.763409 4874 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="39653e8c-f92f-4783-b439-150dcdb1a8a3" podUID="ad509da0-c1a5-4dee-828c-783853098ee5" Feb 17 16:25:25 crc kubenswrapper[4874]: I0217 16:25:25.774215 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.774190147 podStartE2EDuration="3.774190147s" podCreationTimestamp="2026-02-17 16:25:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:25:25.756960156 +0000 UTC m=+1336.051348727" watchObservedRunningTime="2026-02-17 16:25:25.774190147 +0000 UTC m=+1336.068578738" Feb 17 16:25:26 crc kubenswrapper[4874]: I0217 16:25:26.476880 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39653e8c-f92f-4783-b439-150dcdb1a8a3" path="/var/lib/kubelet/pods/39653e8c-f92f-4783-b439-150dcdb1a8a3/volumes" Feb 17 16:25:27 crc kubenswrapper[4874]: I0217 16:25:27.768367 4874 generic.go:334] "Generic (PLEG): container finished" podID="020a97a8-7c87-4098-a559-0584c148fbef" containerID="881abe776c2cd1d2fab8953a0b4b3a0b79ac042390adfc97d6a55128a6da4f1f" exitCode=0 Feb 17 16:25:27 crc kubenswrapper[4874]: I0217 16:25:27.768788 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bdc5b79b4-crwsk" 
event={"ID":"020a97a8-7c87-4098-a559-0584c148fbef","Type":"ContainerDied","Data":"881abe776c2cd1d2fab8953a0b4b3a0b79ac042390adfc97d6a55128a6da4f1f"} Feb 17 16:25:27 crc kubenswrapper[4874]: I0217 16:25:27.800879 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-77965974bf-qbtfj"] Feb 17 16:25:27 crc kubenswrapper[4874]: I0217 16:25:27.802609 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-77965974bf-qbtfj" Feb 17 16:25:27 crc kubenswrapper[4874]: I0217 16:25:27.809500 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-qsm84" Feb 17 16:25:27 crc kubenswrapper[4874]: I0217 16:25:27.809799 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 17 16:25:27 crc kubenswrapper[4874]: I0217 16:25:27.811442 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce22ccd7-e053-4795-bf35-e1021cfeff9d-config-data\") pod \"heat-engine-77965974bf-qbtfj\" (UID: \"ce22ccd7-e053-4795-bf35-e1021cfeff9d\") " pod="openstack/heat-engine-77965974bf-qbtfj" Feb 17 16:25:27 crc kubenswrapper[4874]: I0217 16:25:27.811509 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce22ccd7-e053-4795-bf35-e1021cfeff9d-combined-ca-bundle\") pod \"heat-engine-77965974bf-qbtfj\" (UID: \"ce22ccd7-e053-4795-bf35-e1021cfeff9d\") " pod="openstack/heat-engine-77965974bf-qbtfj" Feb 17 16:25:27 crc kubenswrapper[4874]: I0217 16:25:27.811630 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce22ccd7-e053-4795-bf35-e1021cfeff9d-config-data-custom\") pod \"heat-engine-77965974bf-qbtfj\" (UID: 
\"ce22ccd7-e053-4795-bf35-e1021cfeff9d\") " pod="openstack/heat-engine-77965974bf-qbtfj" Feb 17 16:25:27 crc kubenswrapper[4874]: I0217 16:25:27.811657 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gdsh\" (UniqueName: \"kubernetes.io/projected/ce22ccd7-e053-4795-bf35-e1021cfeff9d-kube-api-access-2gdsh\") pod \"heat-engine-77965974bf-qbtfj\" (UID: \"ce22ccd7-e053-4795-bf35-e1021cfeff9d\") " pod="openstack/heat-engine-77965974bf-qbtfj" Feb 17 16:25:27 crc kubenswrapper[4874]: I0217 16:25:27.816089 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Feb 17 16:25:27 crc kubenswrapper[4874]: I0217 16:25:27.866415 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-77965974bf-qbtfj"] Feb 17 16:25:27 crc kubenswrapper[4874]: I0217 16:25:27.903199 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="57c836de-513c-4aca-956a-73dc02dafce8" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.207:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 16:25:27 crc kubenswrapper[4874]: I0217 16:25:27.913959 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce22ccd7-e053-4795-bf35-e1021cfeff9d-config-data-custom\") pod \"heat-engine-77965974bf-qbtfj\" (UID: \"ce22ccd7-e053-4795-bf35-e1021cfeff9d\") " pod="openstack/heat-engine-77965974bf-qbtfj" Feb 17 16:25:27 crc kubenswrapper[4874]: I0217 16:25:27.914034 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gdsh\" (UniqueName: \"kubernetes.io/projected/ce22ccd7-e053-4795-bf35-e1021cfeff9d-kube-api-access-2gdsh\") pod \"heat-engine-77965974bf-qbtfj\" (UID: \"ce22ccd7-e053-4795-bf35-e1021cfeff9d\") " 
pod="openstack/heat-engine-77965974bf-qbtfj" Feb 17 16:25:27 crc kubenswrapper[4874]: I0217 16:25:27.914103 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce22ccd7-e053-4795-bf35-e1021cfeff9d-config-data\") pod \"heat-engine-77965974bf-qbtfj\" (UID: \"ce22ccd7-e053-4795-bf35-e1021cfeff9d\") " pod="openstack/heat-engine-77965974bf-qbtfj" Feb 17 16:25:27 crc kubenswrapper[4874]: I0217 16:25:27.914222 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce22ccd7-e053-4795-bf35-e1021cfeff9d-combined-ca-bundle\") pod \"heat-engine-77965974bf-qbtfj\" (UID: \"ce22ccd7-e053-4795-bf35-e1021cfeff9d\") " pod="openstack/heat-engine-77965974bf-qbtfj" Feb 17 16:25:27 crc kubenswrapper[4874]: I0217 16:25:27.944404 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce22ccd7-e053-4795-bf35-e1021cfeff9d-config-data\") pod \"heat-engine-77965974bf-qbtfj\" (UID: \"ce22ccd7-e053-4795-bf35-e1021cfeff9d\") " pod="openstack/heat-engine-77965974bf-qbtfj" Feb 17 16:25:27 crc kubenswrapper[4874]: I0217 16:25:27.969063 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce22ccd7-e053-4795-bf35-e1021cfeff9d-combined-ca-bundle\") pod \"heat-engine-77965974bf-qbtfj\" (UID: \"ce22ccd7-e053-4795-bf35-e1021cfeff9d\") " pod="openstack/heat-engine-77965974bf-qbtfj" Feb 17 16:25:27 crc kubenswrapper[4874]: I0217 16:25:27.973739 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce22ccd7-e053-4795-bf35-e1021cfeff9d-config-data-custom\") pod \"heat-engine-77965974bf-qbtfj\" (UID: \"ce22ccd7-e053-4795-bf35-e1021cfeff9d\") " pod="openstack/heat-engine-77965974bf-qbtfj" Feb 17 16:25:27 crc 
kubenswrapper[4874]: I0217 16:25:27.973805 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gdsh\" (UniqueName: \"kubernetes.io/projected/ce22ccd7-e053-4795-bf35-e1021cfeff9d-kube-api-access-2gdsh\") pod \"heat-engine-77965974bf-qbtfj\" (UID: \"ce22ccd7-e053-4795-bf35-e1021cfeff9d\") " pod="openstack/heat-engine-77965974bf-qbtfj" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.047465 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-wjgkl"] Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.076507 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.077681 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-wjgkl" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.100891 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-wjgkl"] Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.124412 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3283562-95fd-4595-932e-cf95b3bdd769-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-wjgkl\" (UID: \"d3283562-95fd-4595-932e-cf95b3bdd769\") " pod="openstack/dnsmasq-dns-688b9f5b49-wjgkl" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.124514 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3283562-95fd-4595-932e-cf95b3bdd769-config\") pod \"dnsmasq-dns-688b9f5b49-wjgkl\" (UID: \"d3283562-95fd-4595-932e-cf95b3bdd769\") " pod="openstack/dnsmasq-dns-688b9f5b49-wjgkl" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.124532 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3283562-95fd-4595-932e-cf95b3bdd769-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-wjgkl\" (UID: \"d3283562-95fd-4595-932e-cf95b3bdd769\") " pod="openstack/dnsmasq-dns-688b9f5b49-wjgkl" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.124622 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3283562-95fd-4595-932e-cf95b3bdd769-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-wjgkl\" (UID: \"d3283562-95fd-4595-932e-cf95b3bdd769\") " pod="openstack/dnsmasq-dns-688b9f5b49-wjgkl" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.124660 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3283562-95fd-4595-932e-cf95b3bdd769-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-wjgkl\" (UID: \"d3283562-95fd-4595-932e-cf95b3bdd769\") " pod="openstack/dnsmasq-dns-688b9f5b49-wjgkl" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.124697 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh5cg\" (UniqueName: \"kubernetes.io/projected/d3283562-95fd-4595-932e-cf95b3bdd769-kube-api-access-wh5cg\") pod \"dnsmasq-dns-688b9f5b49-wjgkl\" (UID: \"d3283562-95fd-4595-932e-cf95b3bdd769\") " pod="openstack/dnsmasq-dns-688b9f5b49-wjgkl" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.158725 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-77965974bf-qbtfj" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.198144 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-78b7864799-6ls5l"] Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.199736 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-78b7864799-6ls5l" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.207216 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.228584 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3283562-95fd-4595-932e-cf95b3bdd769-config\") pod \"dnsmasq-dns-688b9f5b49-wjgkl\" (UID: \"d3283562-95fd-4595-932e-cf95b3bdd769\") " pod="openstack/dnsmasq-dns-688b9f5b49-wjgkl" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.228636 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3283562-95fd-4595-932e-cf95b3bdd769-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-wjgkl\" (UID: \"d3283562-95fd-4595-932e-cf95b3bdd769\") " pod="openstack/dnsmasq-dns-688b9f5b49-wjgkl" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.228744 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3283562-95fd-4595-932e-cf95b3bdd769-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-wjgkl\" (UID: \"d3283562-95fd-4595-932e-cf95b3bdd769\") " pod="openstack/dnsmasq-dns-688b9f5b49-wjgkl" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.228805 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3283562-95fd-4595-932e-cf95b3bdd769-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-wjgkl\" (UID: \"d3283562-95fd-4595-932e-cf95b3bdd769\") " pod="openstack/dnsmasq-dns-688b9f5b49-wjgkl" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.228864 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh5cg\" (UniqueName: 
\"kubernetes.io/projected/d3283562-95fd-4595-932e-cf95b3bdd769-kube-api-access-wh5cg\") pod \"dnsmasq-dns-688b9f5b49-wjgkl\" (UID: \"d3283562-95fd-4595-932e-cf95b3bdd769\") " pod="openstack/dnsmasq-dns-688b9f5b49-wjgkl" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.228973 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3283562-95fd-4595-932e-cf95b3bdd769-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-wjgkl\" (UID: \"d3283562-95fd-4595-932e-cf95b3bdd769\") " pod="openstack/dnsmasq-dns-688b9f5b49-wjgkl" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.229922 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3283562-95fd-4595-932e-cf95b3bdd769-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-wjgkl\" (UID: \"d3283562-95fd-4595-932e-cf95b3bdd769\") " pod="openstack/dnsmasq-dns-688b9f5b49-wjgkl" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.237937 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3283562-95fd-4595-932e-cf95b3bdd769-config\") pod \"dnsmasq-dns-688b9f5b49-wjgkl\" (UID: \"d3283562-95fd-4595-932e-cf95b3bdd769\") " pod="openstack/dnsmasq-dns-688b9f5b49-wjgkl" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.238469 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3283562-95fd-4595-932e-cf95b3bdd769-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-wjgkl\" (UID: \"d3283562-95fd-4595-932e-cf95b3bdd769\") " pod="openstack/dnsmasq-dns-688b9f5b49-wjgkl" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.238978 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3283562-95fd-4595-932e-cf95b3bdd769-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-wjgkl\" (UID: 
\"d3283562-95fd-4595-932e-cf95b3bdd769\") " pod="openstack/dnsmasq-dns-688b9f5b49-wjgkl" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.239235 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3283562-95fd-4595-932e-cf95b3bdd769-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-wjgkl\" (UID: \"d3283562-95fd-4595-932e-cf95b3bdd769\") " pod="openstack/dnsmasq-dns-688b9f5b49-wjgkl" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.266991 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-78b7864799-6ls5l"] Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.300919 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh5cg\" (UniqueName: \"kubernetes.io/projected/d3283562-95fd-4595-932e-cf95b3bdd769-kube-api-access-wh5cg\") pod \"dnsmasq-dns-688b9f5b49-wjgkl\" (UID: \"d3283562-95fd-4595-932e-cf95b3bdd769\") " pod="openstack/dnsmasq-dns-688b9f5b49-wjgkl" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.330143 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-698669dc7f-2q88l"] Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.331515 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb7283b1-4828-4a90-bdd2-6861b7d6475b-config-data\") pod \"heat-api-78b7864799-6ls5l\" (UID: \"fb7283b1-4828-4a90-bdd2-6861b7d6475b\") " pod="openstack/heat-api-78b7864799-6ls5l" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.331540 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb7283b1-4828-4a90-bdd2-6861b7d6475b-config-data-custom\") pod \"heat-api-78b7864799-6ls5l\" (UID: \"fb7283b1-4828-4a90-bdd2-6861b7d6475b\") " pod="openstack/heat-api-78b7864799-6ls5l" 
Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.331652 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwhhd\" (UniqueName: \"kubernetes.io/projected/fb7283b1-4828-4a90-bdd2-6861b7d6475b-kube-api-access-fwhhd\") pod \"heat-api-78b7864799-6ls5l\" (UID: \"fb7283b1-4828-4a90-bdd2-6861b7d6475b\") " pod="openstack/heat-api-78b7864799-6ls5l" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.331669 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb7283b1-4828-4a90-bdd2-6861b7d6475b-combined-ca-bundle\") pod \"heat-api-78b7864799-6ls5l\" (UID: \"fb7283b1-4828-4a90-bdd2-6861b7d6475b\") " pod="openstack/heat-api-78b7864799-6ls5l" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.332050 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-698669dc7f-2q88l" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.341992 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-698669dc7f-2q88l"] Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.351467 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.443618 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-wjgkl" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.446487 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwhhd\" (UniqueName: \"kubernetes.io/projected/fb7283b1-4828-4a90-bdd2-6861b7d6475b-kube-api-access-fwhhd\") pod \"heat-api-78b7864799-6ls5l\" (UID: \"fb7283b1-4828-4a90-bdd2-6861b7d6475b\") " pod="openstack/heat-api-78b7864799-6ls5l" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.446548 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb7283b1-4828-4a90-bdd2-6861b7d6475b-combined-ca-bundle\") pod \"heat-api-78b7864799-6ls5l\" (UID: \"fb7283b1-4828-4a90-bdd2-6861b7d6475b\") " pod="openstack/heat-api-78b7864799-6ls5l" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.446733 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99a67b9d-37fa-411f-bfbe-321623f5d8fb-combined-ca-bundle\") pod \"heat-cfnapi-698669dc7f-2q88l\" (UID: \"99a67b9d-37fa-411f-bfbe-321623f5d8fb\") " pod="openstack/heat-cfnapi-698669dc7f-2q88l" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.446790 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99a67b9d-37fa-411f-bfbe-321623f5d8fb-config-data\") pod \"heat-cfnapi-698669dc7f-2q88l\" (UID: \"99a67b9d-37fa-411f-bfbe-321623f5d8fb\") " pod="openstack/heat-cfnapi-698669dc7f-2q88l" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.446889 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/99a67b9d-37fa-411f-bfbe-321623f5d8fb-config-data-custom\") pod \"heat-cfnapi-698669dc7f-2q88l\" (UID: 
\"99a67b9d-37fa-411f-bfbe-321623f5d8fb\") " pod="openstack/heat-cfnapi-698669dc7f-2q88l" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.446989 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xbdq\" (UniqueName: \"kubernetes.io/projected/99a67b9d-37fa-411f-bfbe-321623f5d8fb-kube-api-access-6xbdq\") pod \"heat-cfnapi-698669dc7f-2q88l\" (UID: \"99a67b9d-37fa-411f-bfbe-321623f5d8fb\") " pod="openstack/heat-cfnapi-698669dc7f-2q88l" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.447052 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb7283b1-4828-4a90-bdd2-6861b7d6475b-config-data\") pod \"heat-api-78b7864799-6ls5l\" (UID: \"fb7283b1-4828-4a90-bdd2-6861b7d6475b\") " pod="openstack/heat-api-78b7864799-6ls5l" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.447085 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb7283b1-4828-4a90-bdd2-6861b7d6475b-config-data-custom\") pod \"heat-api-78b7864799-6ls5l\" (UID: \"fb7283b1-4828-4a90-bdd2-6861b7d6475b\") " pod="openstack/heat-api-78b7864799-6ls5l" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.458316 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb7283b1-4828-4a90-bdd2-6861b7d6475b-config-data\") pod \"heat-api-78b7864799-6ls5l\" (UID: \"fb7283b1-4828-4a90-bdd2-6861b7d6475b\") " pod="openstack/heat-api-78b7864799-6ls5l" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.458484 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb7283b1-4828-4a90-bdd2-6861b7d6475b-config-data-custom\") pod \"heat-api-78b7864799-6ls5l\" (UID: \"fb7283b1-4828-4a90-bdd2-6861b7d6475b\") " 
pod="openstack/heat-api-78b7864799-6ls5l" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.462768 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb7283b1-4828-4a90-bdd2-6861b7d6475b-combined-ca-bundle\") pod \"heat-api-78b7864799-6ls5l\" (UID: \"fb7283b1-4828-4a90-bdd2-6861b7d6475b\") " pod="openstack/heat-api-78b7864799-6ls5l" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.489832 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwhhd\" (UniqueName: \"kubernetes.io/projected/fb7283b1-4828-4a90-bdd2-6861b7d6475b-kube-api-access-fwhhd\") pod \"heat-api-78b7864799-6ls5l\" (UID: \"fb7283b1-4828-4a90-bdd2-6861b7d6475b\") " pod="openstack/heat-api-78b7864799-6ls5l" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.549541 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99a67b9d-37fa-411f-bfbe-321623f5d8fb-combined-ca-bundle\") pod \"heat-cfnapi-698669dc7f-2q88l\" (UID: \"99a67b9d-37fa-411f-bfbe-321623f5d8fb\") " pod="openstack/heat-cfnapi-698669dc7f-2q88l" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.549598 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99a67b9d-37fa-411f-bfbe-321623f5d8fb-config-data\") pod \"heat-cfnapi-698669dc7f-2q88l\" (UID: \"99a67b9d-37fa-411f-bfbe-321623f5d8fb\") " pod="openstack/heat-cfnapi-698669dc7f-2q88l" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.549646 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/99a67b9d-37fa-411f-bfbe-321623f5d8fb-config-data-custom\") pod \"heat-cfnapi-698669dc7f-2q88l\" (UID: \"99a67b9d-37fa-411f-bfbe-321623f5d8fb\") " pod="openstack/heat-cfnapi-698669dc7f-2q88l" Feb 17 16:25:28 crc 
kubenswrapper[4874]: I0217 16:25:28.549696 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xbdq\" (UniqueName: \"kubernetes.io/projected/99a67b9d-37fa-411f-bfbe-321623f5d8fb-kube-api-access-6xbdq\") pod \"heat-cfnapi-698669dc7f-2q88l\" (UID: \"99a67b9d-37fa-411f-bfbe-321623f5d8fb\") " pod="openstack/heat-cfnapi-698669dc7f-2q88l" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.553423 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99a67b9d-37fa-411f-bfbe-321623f5d8fb-combined-ca-bundle\") pod \"heat-cfnapi-698669dc7f-2q88l\" (UID: \"99a67b9d-37fa-411f-bfbe-321623f5d8fb\") " pod="openstack/heat-cfnapi-698669dc7f-2q88l" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.555003 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99a67b9d-37fa-411f-bfbe-321623f5d8fb-config-data\") pod \"heat-cfnapi-698669dc7f-2q88l\" (UID: \"99a67b9d-37fa-411f-bfbe-321623f5d8fb\") " pod="openstack/heat-cfnapi-698669dc7f-2q88l" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.556935 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/99a67b9d-37fa-411f-bfbe-321623f5d8fb-config-data-custom\") pod \"heat-cfnapi-698669dc7f-2q88l\" (UID: \"99a67b9d-37fa-411f-bfbe-321623f5d8fb\") " pod="openstack/heat-cfnapi-698669dc7f-2q88l" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.570837 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xbdq\" (UniqueName: \"kubernetes.io/projected/99a67b9d-37fa-411f-bfbe-321623f5d8fb-kube-api-access-6xbdq\") pod \"heat-cfnapi-698669dc7f-2q88l\" (UID: \"99a67b9d-37fa-411f-bfbe-321623f5d8fb\") " pod="openstack/heat-cfnapi-698669dc7f-2q88l" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.586699 4874 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-78b7864799-6ls5l" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.719648 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-698669dc7f-2q88l" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.800808 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bdc5b79b4-crwsk" event={"ID":"020a97a8-7c87-4098-a559-0584c148fbef","Type":"ContainerDied","Data":"aad0256f564138feae9524dddeab8f95f69bb8bae48b3763e50a6ec1cfe6aabc"} Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.800849 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aad0256f564138feae9524dddeab8f95f69bb8bae48b3763e50a6ec1cfe6aabc" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.843613 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5bdc5b79b4-crwsk" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.880987 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/020a97a8-7c87-4098-a559-0584c148fbef-ovndb-tls-certs\") pod \"020a97a8-7c87-4098-a559-0584c148fbef\" (UID: \"020a97a8-7c87-4098-a559-0584c148fbef\") " Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.881063 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/020a97a8-7c87-4098-a559-0584c148fbef-config\") pod \"020a97a8-7c87-4098-a559-0584c148fbef\" (UID: \"020a97a8-7c87-4098-a559-0584c148fbef\") " Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.881106 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvltb\" (UniqueName: \"kubernetes.io/projected/020a97a8-7c87-4098-a559-0584c148fbef-kube-api-access-lvltb\") pod 
\"020a97a8-7c87-4098-a559-0584c148fbef\" (UID: \"020a97a8-7c87-4098-a559-0584c148fbef\") " Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.881796 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/020a97a8-7c87-4098-a559-0584c148fbef-combined-ca-bundle\") pod \"020a97a8-7c87-4098-a559-0584c148fbef\" (UID: \"020a97a8-7c87-4098-a559-0584c148fbef\") " Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.881910 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/020a97a8-7c87-4098-a559-0584c148fbef-httpd-config\") pod \"020a97a8-7c87-4098-a559-0584c148fbef\" (UID: \"020a97a8-7c87-4098-a559-0584c148fbef\") " Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.887307 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/020a97a8-7c87-4098-a559-0584c148fbef-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "020a97a8-7c87-4098-a559-0584c148fbef" (UID: "020a97a8-7c87-4098-a559-0584c148fbef"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.887426 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.909249 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/020a97a8-7c87-4098-a559-0584c148fbef-kube-api-access-lvltb" (OuterVolumeSpecName: "kube-api-access-lvltb") pod "020a97a8-7c87-4098-a559-0584c148fbef" (UID: "020a97a8-7c87-4098-a559-0584c148fbef"). InnerVolumeSpecName "kube-api-access-lvltb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.989864 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvltb\" (UniqueName: \"kubernetes.io/projected/020a97a8-7c87-4098-a559-0584c148fbef-kube-api-access-lvltb\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:28 crc kubenswrapper[4874]: I0217 16:25:28.989898 4874 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/020a97a8-7c87-4098-a559-0584c148fbef-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:29 crc kubenswrapper[4874]: I0217 16:25:29.034387 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-77965974bf-qbtfj"] Feb 17 16:25:29 crc kubenswrapper[4874]: I0217 16:25:29.049294 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/020a97a8-7c87-4098-a559-0584c148fbef-config" (OuterVolumeSpecName: "config") pod "020a97a8-7c87-4098-a559-0584c148fbef" (UID: "020a97a8-7c87-4098-a559-0584c148fbef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:29 crc kubenswrapper[4874]: I0217 16:25:29.052186 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/020a97a8-7c87-4098-a559-0584c148fbef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "020a97a8-7c87-4098-a559-0584c148fbef" (UID: "020a97a8-7c87-4098-a559-0584c148fbef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:29 crc kubenswrapper[4874]: I0217 16:25:29.065663 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/020a97a8-7c87-4098-a559-0584c148fbef-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "020a97a8-7c87-4098-a559-0584c148fbef" (UID: "020a97a8-7c87-4098-a559-0584c148fbef"). 
InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:29 crc kubenswrapper[4874]: I0217 16:25:29.095004 4874 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/020a97a8-7c87-4098-a559-0584c148fbef-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:29 crc kubenswrapper[4874]: I0217 16:25:29.095046 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/020a97a8-7c87-4098-a559-0584c148fbef-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:29 crc kubenswrapper[4874]: I0217 16:25:29.095063 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/020a97a8-7c87-4098-a559-0584c148fbef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:29 crc kubenswrapper[4874]: I0217 16:25:29.296649 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-wjgkl"] Feb 17 16:25:29 crc kubenswrapper[4874]: W0217 16:25:29.308235 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3283562_95fd_4595_932e_cf95b3bdd769.slice/crio-e7f62fc9b213c35ec9ec383e8579bb5a580ec9d174863ac67f98dccf21d2344f WatchSource:0}: Error finding container e7f62fc9b213c35ec9ec383e8579bb5a580ec9d174863ac67f98dccf21d2344f: Status 404 returned error can't find the container with id e7f62fc9b213c35ec9ec383e8579bb5a580ec9d174863ac67f98dccf21d2344f Feb 17 16:25:29 crc kubenswrapper[4874]: W0217 16:25:29.309906 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb7283b1_4828_4a90_bdd2_6861b7d6475b.slice/crio-4c64f40ace814a1fae214d9fcdd51ae917e3756400e709f08eabd6305d732a6c WatchSource:0}: Error finding container 
4c64f40ace814a1fae214d9fcdd51ae917e3756400e709f08eabd6305d732a6c: Status 404 returned error can't find the container with id 4c64f40ace814a1fae214d9fcdd51ae917e3756400e709f08eabd6305d732a6c Feb 17 16:25:29 crc kubenswrapper[4874]: I0217 16:25:29.322561 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-78b7864799-6ls5l"] Feb 17 16:25:29 crc kubenswrapper[4874]: I0217 16:25:29.559926 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-698669dc7f-2q88l"] Feb 17 16:25:29 crc kubenswrapper[4874]: W0217 16:25:29.567664 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99a67b9d_37fa_411f_bfbe_321623f5d8fb.slice/crio-05b93c917baeaf4523227abb92e498aa5096e13d9849b9787dcee2dbd114be23 WatchSource:0}: Error finding container 05b93c917baeaf4523227abb92e498aa5096e13d9849b9787dcee2dbd114be23: Status 404 returned error can't find the container with id 05b93c917baeaf4523227abb92e498aa5096e13d9849b9787dcee2dbd114be23 Feb 17 16:25:29 crc kubenswrapper[4874]: I0217 16:25:29.813167 4874 generic.go:334] "Generic (PLEG): container finished" podID="d3283562-95fd-4595-932e-cf95b3bdd769" containerID="cc446b11c68caa15068816519ca2b04d3ea13c42ef9fda2ec3706340878daca5" exitCode=0 Feb 17 16:25:29 crc kubenswrapper[4874]: I0217 16:25:29.813457 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-wjgkl" event={"ID":"d3283562-95fd-4595-932e-cf95b3bdd769","Type":"ContainerDied","Data":"cc446b11c68caa15068816519ca2b04d3ea13c42ef9fda2ec3706340878daca5"} Feb 17 16:25:29 crc kubenswrapper[4874]: I0217 16:25:29.813482 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-wjgkl" event={"ID":"d3283562-95fd-4595-932e-cf95b3bdd769","Type":"ContainerStarted","Data":"e7f62fc9b213c35ec9ec383e8579bb5a580ec9d174863ac67f98dccf21d2344f"} Feb 17 16:25:29 crc kubenswrapper[4874]: I0217 
16:25:29.818825 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-77965974bf-qbtfj" event={"ID":"ce22ccd7-e053-4795-bf35-e1021cfeff9d","Type":"ContainerStarted","Data":"daf74c42e1fccb33337d34de03dc06c9d719522352425f3599d462b1db7b5a78"} Feb 17 16:25:29 crc kubenswrapper[4874]: I0217 16:25:29.818878 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-77965974bf-qbtfj" event={"ID":"ce22ccd7-e053-4795-bf35-e1021cfeff9d","Type":"ContainerStarted","Data":"df2d5f788694121fec6ef1421d7e574b94691296f648d1bffae0f910d6c9700c"} Feb 17 16:25:29 crc kubenswrapper[4874]: I0217 16:25:29.820149 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-77965974bf-qbtfj" Feb 17 16:25:29 crc kubenswrapper[4874]: I0217 16:25:29.824191 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-698669dc7f-2q88l" event={"ID":"99a67b9d-37fa-411f-bfbe-321623f5d8fb","Type":"ContainerStarted","Data":"05b93c917baeaf4523227abb92e498aa5096e13d9849b9787dcee2dbd114be23"} Feb 17 16:25:29 crc kubenswrapper[4874]: I0217 16:25:29.826317 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5bdc5b79b4-crwsk" Feb 17 16:25:29 crc kubenswrapper[4874]: I0217 16:25:29.829196 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-78b7864799-6ls5l" event={"ID":"fb7283b1-4828-4a90-bdd2-6861b7d6475b","Type":"ContainerStarted","Data":"4c64f40ace814a1fae214d9fcdd51ae917e3756400e709f08eabd6305d732a6c"} Feb 17 16:25:29 crc kubenswrapper[4874]: I0217 16:25:29.870575 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-77965974bf-qbtfj" podStartSLOduration=2.870559751 podStartE2EDuration="2.870559751s" podCreationTimestamp="2026-02-17 16:25:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:25:29.861730445 +0000 UTC m=+1340.156119006" watchObservedRunningTime="2026-02-17 16:25:29.870559751 +0000 UTC m=+1340.164948312" Feb 17 16:25:30 crc kubenswrapper[4874]: I0217 16:25:30.129162 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5bdc5b79b4-crwsk"] Feb 17 16:25:30 crc kubenswrapper[4874]: I0217 16:25:30.142267 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5bdc5b79b4-crwsk"] Feb 17 16:25:30 crc kubenswrapper[4874]: I0217 16:25:30.484685 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="020a97a8-7c87-4098-a559-0584c148fbef" path="/var/lib/kubelet/pods/020a97a8-7c87-4098-a559-0584c148fbef/volumes" Feb 17 16:25:30 crc kubenswrapper[4874]: I0217 16:25:30.850031 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-wjgkl" event={"ID":"d3283562-95fd-4595-932e-cf95b3bdd769","Type":"ContainerStarted","Data":"03dfd50f100bf00d0cba04e3c2f0676d778b83aca8ce2b98420b133b2a336636"} Feb 17 16:25:30 crc kubenswrapper[4874]: I0217 16:25:30.850116 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-688b9f5b49-wjgkl" Feb 17 16:25:30 crc kubenswrapper[4874]: I0217 16:25:30.871092 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688b9f5b49-wjgkl" podStartSLOduration=3.871053738 podStartE2EDuration="3.871053738s" podCreationTimestamp="2026-02-17 16:25:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:25:30.868537637 +0000 UTC m=+1341.162926218" watchObservedRunningTime="2026-02-17 16:25:30.871053738 +0000 UTC m=+1341.165442309" Feb 17 16:25:34 crc kubenswrapper[4874]: I0217 16:25:34.204044 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 17 16:25:35 crc kubenswrapper[4874]: I0217 16:25:35.800017 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-59c46f7ffb-7jfhs"] Feb 17 16:25:35 crc kubenswrapper[4874]: E0217 16:25:35.800760 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="020a97a8-7c87-4098-a559-0584c148fbef" containerName="neutron-api" Feb 17 16:25:35 crc kubenswrapper[4874]: I0217 16:25:35.800772 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="020a97a8-7c87-4098-a559-0584c148fbef" containerName="neutron-api" Feb 17 16:25:35 crc kubenswrapper[4874]: E0217 16:25:35.800786 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="020a97a8-7c87-4098-a559-0584c148fbef" containerName="neutron-httpd" Feb 17 16:25:35 crc kubenswrapper[4874]: I0217 16:25:35.800791 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="020a97a8-7c87-4098-a559-0584c148fbef" containerName="neutron-httpd" Feb 17 16:25:35 crc kubenswrapper[4874]: I0217 16:25:35.801007 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="020a97a8-7c87-4098-a559-0584c148fbef" containerName="neutron-httpd" Feb 17 16:25:35 crc kubenswrapper[4874]: I0217 16:25:35.801021 4874 
memory_manager.go:354] "RemoveStaleState removing state" podUID="020a97a8-7c87-4098-a559-0584c148fbef" containerName="neutron-api" Feb 17 16:25:35 crc kubenswrapper[4874]: I0217 16:25:35.801811 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-59c46f7ffb-7jfhs" Feb 17 16:25:35 crc kubenswrapper[4874]: I0217 16:25:35.813443 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-59c46f7ffb-7jfhs"] Feb 17 16:25:35 crc kubenswrapper[4874]: I0217 16:25:35.826260 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-b8d6fcf6-4n78j"] Feb 17 16:25:35 crc kubenswrapper[4874]: I0217 16:25:35.827618 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-b8d6fcf6-4n78j" Feb 17 16:25:35 crc kubenswrapper[4874]: I0217 16:25:35.875028 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6998947cb9-hr7zv"] Feb 17 16:25:35 crc kubenswrapper[4874]: I0217 16:25:35.877503 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6998947cb9-hr7zv" Feb 17 16:25:35 crc kubenswrapper[4874]: I0217 16:25:35.964952 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6998947cb9-hr7zv"] Feb 17 16:25:35 crc kubenswrapper[4874]: I0217 16:25:35.968412 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8hx5\" (UniqueName: \"kubernetes.io/projected/5937d61b-9735-4fa9-b8ab-7441f71d4728-kube-api-access-p8hx5\") pod \"heat-cfnapi-b8d6fcf6-4n78j\" (UID: \"5937d61b-9735-4fa9-b8ab-7441f71d4728\") " pod="openstack/heat-cfnapi-b8d6fcf6-4n78j" Feb 17 16:25:35 crc kubenswrapper[4874]: I0217 16:25:35.968538 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5937d61b-9735-4fa9-b8ab-7441f71d4728-combined-ca-bundle\") pod \"heat-cfnapi-b8d6fcf6-4n78j\" (UID: \"5937d61b-9735-4fa9-b8ab-7441f71d4728\") " pod="openstack/heat-cfnapi-b8d6fcf6-4n78j" Feb 17 16:25:35 crc kubenswrapper[4874]: I0217 16:25:35.968595 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64hpf\" (UniqueName: \"kubernetes.io/projected/fa32dc95-3565-4a8a-82e7-97b9eaea1b32-kube-api-access-64hpf\") pod \"heat-engine-59c46f7ffb-7jfhs\" (UID: \"fa32dc95-3565-4a8a-82e7-97b9eaea1b32\") " pod="openstack/heat-engine-59c46f7ffb-7jfhs" Feb 17 16:25:35 crc kubenswrapper[4874]: I0217 16:25:35.968698 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5937d61b-9735-4fa9-b8ab-7441f71d4728-config-data-custom\") pod \"heat-cfnapi-b8d6fcf6-4n78j\" (UID: \"5937d61b-9735-4fa9-b8ab-7441f71d4728\") " pod="openstack/heat-cfnapi-b8d6fcf6-4n78j" Feb 17 16:25:35 crc kubenswrapper[4874]: I0217 16:25:35.968828 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5937d61b-9735-4fa9-b8ab-7441f71d4728-config-data\") pod \"heat-cfnapi-b8d6fcf6-4n78j\" (UID: \"5937d61b-9735-4fa9-b8ab-7441f71d4728\") " pod="openstack/heat-cfnapi-b8d6fcf6-4n78j" Feb 17 16:25:35 crc kubenswrapper[4874]: I0217 16:25:35.968897 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa32dc95-3565-4a8a-82e7-97b9eaea1b32-config-data-custom\") pod \"heat-engine-59c46f7ffb-7jfhs\" (UID: \"fa32dc95-3565-4a8a-82e7-97b9eaea1b32\") " pod="openstack/heat-engine-59c46f7ffb-7jfhs" Feb 17 16:25:35 crc kubenswrapper[4874]: I0217 16:25:35.968971 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa32dc95-3565-4a8a-82e7-97b9eaea1b32-combined-ca-bundle\") pod \"heat-engine-59c46f7ffb-7jfhs\" (UID: \"fa32dc95-3565-4a8a-82e7-97b9eaea1b32\") " pod="openstack/heat-engine-59c46f7ffb-7jfhs" Feb 17 16:25:35 crc kubenswrapper[4874]: I0217 16:25:35.968993 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa32dc95-3565-4a8a-82e7-97b9eaea1b32-config-data\") pod \"heat-engine-59c46f7ffb-7jfhs\" (UID: \"fa32dc95-3565-4a8a-82e7-97b9eaea1b32\") " pod="openstack/heat-engine-59c46f7ffb-7jfhs" Feb 17 16:25:35 crc kubenswrapper[4874]: I0217 16:25:35.977304 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-b8d6fcf6-4n78j"] Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.071199 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/183c319d-de18-4198-bc81-7deedc9e9f35-config-data-custom\") pod 
\"heat-api-6998947cb9-hr7zv\" (UID: \"183c319d-de18-4198-bc81-7deedc9e9f35\") " pod="openstack/heat-api-6998947cb9-hr7zv" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.071271 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa32dc95-3565-4a8a-82e7-97b9eaea1b32-combined-ca-bundle\") pod \"heat-engine-59c46f7ffb-7jfhs\" (UID: \"fa32dc95-3565-4a8a-82e7-97b9eaea1b32\") " pod="openstack/heat-engine-59c46f7ffb-7jfhs" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.071292 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/183c319d-de18-4198-bc81-7deedc9e9f35-config-data\") pod \"heat-api-6998947cb9-hr7zv\" (UID: \"183c319d-de18-4198-bc81-7deedc9e9f35\") " pod="openstack/heat-api-6998947cb9-hr7zv" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.071324 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa32dc95-3565-4a8a-82e7-97b9eaea1b32-config-data\") pod \"heat-engine-59c46f7ffb-7jfhs\" (UID: \"fa32dc95-3565-4a8a-82e7-97b9eaea1b32\") " pod="openstack/heat-engine-59c46f7ffb-7jfhs" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.071452 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8hx5\" (UniqueName: \"kubernetes.io/projected/5937d61b-9735-4fa9-b8ab-7441f71d4728-kube-api-access-p8hx5\") pod \"heat-cfnapi-b8d6fcf6-4n78j\" (UID: \"5937d61b-9735-4fa9-b8ab-7441f71d4728\") " pod="openstack/heat-cfnapi-b8d6fcf6-4n78j" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.071578 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/183c319d-de18-4198-bc81-7deedc9e9f35-combined-ca-bundle\") pod 
\"heat-api-6998947cb9-hr7zv\" (UID: \"183c319d-de18-4198-bc81-7deedc9e9f35\") " pod="openstack/heat-api-6998947cb9-hr7zv" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.071613 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kbtt\" (UniqueName: \"kubernetes.io/projected/183c319d-de18-4198-bc81-7deedc9e9f35-kube-api-access-5kbtt\") pod \"heat-api-6998947cb9-hr7zv\" (UID: \"183c319d-de18-4198-bc81-7deedc9e9f35\") " pod="openstack/heat-api-6998947cb9-hr7zv" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.071658 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5937d61b-9735-4fa9-b8ab-7441f71d4728-combined-ca-bundle\") pod \"heat-cfnapi-b8d6fcf6-4n78j\" (UID: \"5937d61b-9735-4fa9-b8ab-7441f71d4728\") " pod="openstack/heat-cfnapi-b8d6fcf6-4n78j" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.071721 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64hpf\" (UniqueName: \"kubernetes.io/projected/fa32dc95-3565-4a8a-82e7-97b9eaea1b32-kube-api-access-64hpf\") pod \"heat-engine-59c46f7ffb-7jfhs\" (UID: \"fa32dc95-3565-4a8a-82e7-97b9eaea1b32\") " pod="openstack/heat-engine-59c46f7ffb-7jfhs" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.071771 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5937d61b-9735-4fa9-b8ab-7441f71d4728-config-data-custom\") pod \"heat-cfnapi-b8d6fcf6-4n78j\" (UID: \"5937d61b-9735-4fa9-b8ab-7441f71d4728\") " pod="openstack/heat-cfnapi-b8d6fcf6-4n78j" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.071871 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5937d61b-9735-4fa9-b8ab-7441f71d4728-config-data\") pod 
\"heat-cfnapi-b8d6fcf6-4n78j\" (UID: \"5937d61b-9735-4fa9-b8ab-7441f71d4728\") " pod="openstack/heat-cfnapi-b8d6fcf6-4n78j" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.071922 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa32dc95-3565-4a8a-82e7-97b9eaea1b32-config-data-custom\") pod \"heat-engine-59c46f7ffb-7jfhs\" (UID: \"fa32dc95-3565-4a8a-82e7-97b9eaea1b32\") " pod="openstack/heat-engine-59c46f7ffb-7jfhs" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.078139 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5937d61b-9735-4fa9-b8ab-7441f71d4728-combined-ca-bundle\") pod \"heat-cfnapi-b8d6fcf6-4n78j\" (UID: \"5937d61b-9735-4fa9-b8ab-7441f71d4728\") " pod="openstack/heat-cfnapi-b8d6fcf6-4n78j" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.078148 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa32dc95-3565-4a8a-82e7-97b9eaea1b32-config-data-custom\") pod \"heat-engine-59c46f7ffb-7jfhs\" (UID: \"fa32dc95-3565-4a8a-82e7-97b9eaea1b32\") " pod="openstack/heat-engine-59c46f7ffb-7jfhs" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.078525 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5937d61b-9735-4fa9-b8ab-7441f71d4728-config-data\") pod \"heat-cfnapi-b8d6fcf6-4n78j\" (UID: \"5937d61b-9735-4fa9-b8ab-7441f71d4728\") " pod="openstack/heat-cfnapi-b8d6fcf6-4n78j" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.080234 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5937d61b-9735-4fa9-b8ab-7441f71d4728-config-data-custom\") pod \"heat-cfnapi-b8d6fcf6-4n78j\" (UID: \"5937d61b-9735-4fa9-b8ab-7441f71d4728\") " 
pod="openstack/heat-cfnapi-b8d6fcf6-4n78j" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.081593 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa32dc95-3565-4a8a-82e7-97b9eaea1b32-config-data\") pod \"heat-engine-59c46f7ffb-7jfhs\" (UID: \"fa32dc95-3565-4a8a-82e7-97b9eaea1b32\") " pod="openstack/heat-engine-59c46f7ffb-7jfhs" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.088584 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8hx5\" (UniqueName: \"kubernetes.io/projected/5937d61b-9735-4fa9-b8ab-7441f71d4728-kube-api-access-p8hx5\") pod \"heat-cfnapi-b8d6fcf6-4n78j\" (UID: \"5937d61b-9735-4fa9-b8ab-7441f71d4728\") " pod="openstack/heat-cfnapi-b8d6fcf6-4n78j" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.088826 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa32dc95-3565-4a8a-82e7-97b9eaea1b32-combined-ca-bundle\") pod \"heat-engine-59c46f7ffb-7jfhs\" (UID: \"fa32dc95-3565-4a8a-82e7-97b9eaea1b32\") " pod="openstack/heat-engine-59c46f7ffb-7jfhs" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.099829 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64hpf\" (UniqueName: \"kubernetes.io/projected/fa32dc95-3565-4a8a-82e7-97b9eaea1b32-kube-api-access-64hpf\") pod \"heat-engine-59c46f7ffb-7jfhs\" (UID: \"fa32dc95-3565-4a8a-82e7-97b9eaea1b32\") " pod="openstack/heat-engine-59c46f7ffb-7jfhs" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.143459 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-59c46f7ffb-7jfhs" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.171276 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-b8d6fcf6-4n78j" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.174230 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/183c319d-de18-4198-bc81-7deedc9e9f35-config-data-custom\") pod \"heat-api-6998947cb9-hr7zv\" (UID: \"183c319d-de18-4198-bc81-7deedc9e9f35\") " pod="openstack/heat-api-6998947cb9-hr7zv" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.174312 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/183c319d-de18-4198-bc81-7deedc9e9f35-config-data\") pod \"heat-api-6998947cb9-hr7zv\" (UID: \"183c319d-de18-4198-bc81-7deedc9e9f35\") " pod="openstack/heat-api-6998947cb9-hr7zv" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.174415 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/183c319d-de18-4198-bc81-7deedc9e9f35-combined-ca-bundle\") pod \"heat-api-6998947cb9-hr7zv\" (UID: \"183c319d-de18-4198-bc81-7deedc9e9f35\") " pod="openstack/heat-api-6998947cb9-hr7zv" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.174444 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kbtt\" (UniqueName: \"kubernetes.io/projected/183c319d-de18-4198-bc81-7deedc9e9f35-kube-api-access-5kbtt\") pod \"heat-api-6998947cb9-hr7zv\" (UID: \"183c319d-de18-4198-bc81-7deedc9e9f35\") " pod="openstack/heat-api-6998947cb9-hr7zv" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.179013 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/183c319d-de18-4198-bc81-7deedc9e9f35-combined-ca-bundle\") pod \"heat-api-6998947cb9-hr7zv\" (UID: \"183c319d-de18-4198-bc81-7deedc9e9f35\") " pod="openstack/heat-api-6998947cb9-hr7zv" Feb 
17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.179173 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/183c319d-de18-4198-bc81-7deedc9e9f35-config-data\") pod \"heat-api-6998947cb9-hr7zv\" (UID: \"183c319d-de18-4198-bc81-7deedc9e9f35\") " pod="openstack/heat-api-6998947cb9-hr7zv" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.179659 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/183c319d-de18-4198-bc81-7deedc9e9f35-config-data-custom\") pod \"heat-api-6998947cb9-hr7zv\" (UID: \"183c319d-de18-4198-bc81-7deedc9e9f35\") " pod="openstack/heat-api-6998947cb9-hr7zv" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.202992 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kbtt\" (UniqueName: \"kubernetes.io/projected/183c319d-de18-4198-bc81-7deedc9e9f35-kube-api-access-5kbtt\") pod \"heat-api-6998947cb9-hr7zv\" (UID: \"183c319d-de18-4198-bc81-7deedc9e9f35\") " pod="openstack/heat-api-6998947cb9-hr7zv" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.209655 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6998947cb9-hr7zv" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.689801 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-558b9bddc9-tks6t"] Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.691859 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-558b9bddc9-tks6t" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.694224 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.694379 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.694733 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.728571 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-558b9bddc9-tks6t"] Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.794576 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mr6f\" (UniqueName: \"kubernetes.io/projected/86d966a5-1838-4efd-bc2e-f19189a61789-kube-api-access-2mr6f\") pod \"swift-proxy-558b9bddc9-tks6t\" (UID: \"86d966a5-1838-4efd-bc2e-f19189a61789\") " pod="openstack/swift-proxy-558b9bddc9-tks6t" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.794671 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/86d966a5-1838-4efd-bc2e-f19189a61789-etc-swift\") pod \"swift-proxy-558b9bddc9-tks6t\" (UID: \"86d966a5-1838-4efd-bc2e-f19189a61789\") " pod="openstack/swift-proxy-558b9bddc9-tks6t" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.794702 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86d966a5-1838-4efd-bc2e-f19189a61789-combined-ca-bundle\") pod \"swift-proxy-558b9bddc9-tks6t\" (UID: \"86d966a5-1838-4efd-bc2e-f19189a61789\") " pod="openstack/swift-proxy-558b9bddc9-tks6t" Feb 17 16:25:36 crc 
kubenswrapper[4874]: I0217 16:25:36.794790 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86d966a5-1838-4efd-bc2e-f19189a61789-internal-tls-certs\") pod \"swift-proxy-558b9bddc9-tks6t\" (UID: \"86d966a5-1838-4efd-bc2e-f19189a61789\") " pod="openstack/swift-proxy-558b9bddc9-tks6t" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.794853 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/86d966a5-1838-4efd-bc2e-f19189a61789-public-tls-certs\") pod \"swift-proxy-558b9bddc9-tks6t\" (UID: \"86d966a5-1838-4efd-bc2e-f19189a61789\") " pod="openstack/swift-proxy-558b9bddc9-tks6t" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.794915 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86d966a5-1838-4efd-bc2e-f19189a61789-config-data\") pod \"swift-proxy-558b9bddc9-tks6t\" (UID: \"86d966a5-1838-4efd-bc2e-f19189a61789\") " pod="openstack/swift-proxy-558b9bddc9-tks6t" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.794982 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86d966a5-1838-4efd-bc2e-f19189a61789-log-httpd\") pod \"swift-proxy-558b9bddc9-tks6t\" (UID: \"86d966a5-1838-4efd-bc2e-f19189a61789\") " pod="openstack/swift-proxy-558b9bddc9-tks6t" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.795002 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86d966a5-1838-4efd-bc2e-f19189a61789-run-httpd\") pod \"swift-proxy-558b9bddc9-tks6t\" (UID: \"86d966a5-1838-4efd-bc2e-f19189a61789\") " pod="openstack/swift-proxy-558b9bddc9-tks6t" 
Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.896433 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86d966a5-1838-4efd-bc2e-f19189a61789-internal-tls-certs\") pod \"swift-proxy-558b9bddc9-tks6t\" (UID: \"86d966a5-1838-4efd-bc2e-f19189a61789\") " pod="openstack/swift-proxy-558b9bddc9-tks6t" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.896494 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/86d966a5-1838-4efd-bc2e-f19189a61789-public-tls-certs\") pod \"swift-proxy-558b9bddc9-tks6t\" (UID: \"86d966a5-1838-4efd-bc2e-f19189a61789\") " pod="openstack/swift-proxy-558b9bddc9-tks6t" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.896524 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86d966a5-1838-4efd-bc2e-f19189a61789-config-data\") pod \"swift-proxy-558b9bddc9-tks6t\" (UID: \"86d966a5-1838-4efd-bc2e-f19189a61789\") " pod="openstack/swift-proxy-558b9bddc9-tks6t" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.896566 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86d966a5-1838-4efd-bc2e-f19189a61789-log-httpd\") pod \"swift-proxy-558b9bddc9-tks6t\" (UID: \"86d966a5-1838-4efd-bc2e-f19189a61789\") " pod="openstack/swift-proxy-558b9bddc9-tks6t" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.896589 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86d966a5-1838-4efd-bc2e-f19189a61789-run-httpd\") pod \"swift-proxy-558b9bddc9-tks6t\" (UID: \"86d966a5-1838-4efd-bc2e-f19189a61789\") " pod="openstack/swift-proxy-558b9bddc9-tks6t" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.896739 4874 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mr6f\" (UniqueName: \"kubernetes.io/projected/86d966a5-1838-4efd-bc2e-f19189a61789-kube-api-access-2mr6f\") pod \"swift-proxy-558b9bddc9-tks6t\" (UID: \"86d966a5-1838-4efd-bc2e-f19189a61789\") " pod="openstack/swift-proxy-558b9bddc9-tks6t" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.896783 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/86d966a5-1838-4efd-bc2e-f19189a61789-etc-swift\") pod \"swift-proxy-558b9bddc9-tks6t\" (UID: \"86d966a5-1838-4efd-bc2e-f19189a61789\") " pod="openstack/swift-proxy-558b9bddc9-tks6t" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.896805 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86d966a5-1838-4efd-bc2e-f19189a61789-combined-ca-bundle\") pod \"swift-proxy-558b9bddc9-tks6t\" (UID: \"86d966a5-1838-4efd-bc2e-f19189a61789\") " pod="openstack/swift-proxy-558b9bddc9-tks6t" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.898165 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86d966a5-1838-4efd-bc2e-f19189a61789-log-httpd\") pod \"swift-proxy-558b9bddc9-tks6t\" (UID: \"86d966a5-1838-4efd-bc2e-f19189a61789\") " pod="openstack/swift-proxy-558b9bddc9-tks6t" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.898399 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86d966a5-1838-4efd-bc2e-f19189a61789-run-httpd\") pod \"swift-proxy-558b9bddc9-tks6t\" (UID: \"86d966a5-1838-4efd-bc2e-f19189a61789\") " pod="openstack/swift-proxy-558b9bddc9-tks6t" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.903367 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/86d966a5-1838-4efd-bc2e-f19189a61789-config-data\") pod \"swift-proxy-558b9bddc9-tks6t\" (UID: \"86d966a5-1838-4efd-bc2e-f19189a61789\") " pod="openstack/swift-proxy-558b9bddc9-tks6t" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.906446 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86d966a5-1838-4efd-bc2e-f19189a61789-combined-ca-bundle\") pod \"swift-proxy-558b9bddc9-tks6t\" (UID: \"86d966a5-1838-4efd-bc2e-f19189a61789\") " pod="openstack/swift-proxy-558b9bddc9-tks6t" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.907059 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/86d966a5-1838-4efd-bc2e-f19189a61789-public-tls-certs\") pod \"swift-proxy-558b9bddc9-tks6t\" (UID: \"86d966a5-1838-4efd-bc2e-f19189a61789\") " pod="openstack/swift-proxy-558b9bddc9-tks6t" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.908106 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/86d966a5-1838-4efd-bc2e-f19189a61789-etc-swift\") pod \"swift-proxy-558b9bddc9-tks6t\" (UID: \"86d966a5-1838-4efd-bc2e-f19189a61789\") " pod="openstack/swift-proxy-558b9bddc9-tks6t" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.916401 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86d966a5-1838-4efd-bc2e-f19189a61789-internal-tls-certs\") pod \"swift-proxy-558b9bddc9-tks6t\" (UID: \"86d966a5-1838-4efd-bc2e-f19189a61789\") " pod="openstack/swift-proxy-558b9bddc9-tks6t" Feb 17 16:25:36 crc kubenswrapper[4874]: I0217 16:25:36.919524 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mr6f\" (UniqueName: 
\"kubernetes.io/projected/86d966a5-1838-4efd-bc2e-f19189a61789-kube-api-access-2mr6f\") pod \"swift-proxy-558b9bddc9-tks6t\" (UID: \"86d966a5-1838-4efd-bc2e-f19189a61789\") " pod="openstack/swift-proxy-558b9bddc9-tks6t" Feb 17 16:25:37 crc kubenswrapper[4874]: I0217 16:25:37.023076 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-558b9bddc9-tks6t" Feb 17 16:25:38 crc kubenswrapper[4874]: I0217 16:25:38.445318 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-688b9f5b49-wjgkl" Feb 17 16:25:38 crc kubenswrapper[4874]: I0217 16:25:38.519336 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-8f8gg"] Feb 17 16:25:38 crc kubenswrapper[4874]: I0217 16:25:38.520891 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-8f8gg" podUID="9ef65b51-8db2-4513-89dc-a6ec4c27c22d" containerName="dnsmasq-dns" containerID="cri-o://a73e1006f82695e15769264553bcea390be8051c12c8c1b4d49dcb59c250ddac" gracePeriod=10 Feb 17 16:25:38 crc kubenswrapper[4874]: I0217 16:25:38.813950 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-78b7864799-6ls5l"] Feb 17 16:25:38 crc kubenswrapper[4874]: I0217 16:25:38.837583 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-698669dc7f-2q88l"] Feb 17 16:25:38 crc kubenswrapper[4874]: I0217 16:25:38.854459 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-684fb5885c-hr4m8"] Feb 17 16:25:38 crc kubenswrapper[4874]: I0217 16:25:38.856521 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-684fb5885c-hr4m8" Feb 17 16:25:38 crc kubenswrapper[4874]: I0217 16:25:38.859825 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Feb 17 16:25:38 crc kubenswrapper[4874]: I0217 16:25:38.860137 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Feb 17 16:25:38 crc kubenswrapper[4874]: I0217 16:25:38.874986 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-684fb5885c-hr4m8"] Feb 17 16:25:38 crc kubenswrapper[4874]: I0217 16:25:38.888806 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-77f9b8d4df-5ptz7"] Feb 17 16:25:38 crc kubenswrapper[4874]: I0217 16:25:38.891171 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-77f9b8d4df-5ptz7" Feb 17 16:25:38 crc kubenswrapper[4874]: I0217 16:25:38.897539 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Feb 17 16:25:38 crc kubenswrapper[4874]: I0217 16:25:38.897713 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Feb 17 16:25:38 crc kubenswrapper[4874]: I0217 16:25:38.908915 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjkcd\" (UniqueName: \"kubernetes.io/projected/b8fa43d7-df5d-4fbe-97f4-95b8f20d5b71-kube-api-access-kjkcd\") pod \"heat-cfnapi-77f9b8d4df-5ptz7\" (UID: \"b8fa43d7-df5d-4fbe-97f4-95b8f20d5b71\") " pod="openstack/heat-cfnapi-77f9b8d4df-5ptz7" Feb 17 16:25:38 crc kubenswrapper[4874]: I0217 16:25:38.908980 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cefe1b7-0d9c-4594-8368-15179b55592b-public-tls-certs\") pod \"heat-api-684fb5885c-hr4m8\" (UID: 
\"7cefe1b7-0d9c-4594-8368-15179b55592b\") " pod="openstack/heat-api-684fb5885c-hr4m8" Feb 17 16:25:38 crc kubenswrapper[4874]: I0217 16:25:38.909000 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8fa43d7-df5d-4fbe-97f4-95b8f20d5b71-config-data\") pod \"heat-cfnapi-77f9b8d4df-5ptz7\" (UID: \"b8fa43d7-df5d-4fbe-97f4-95b8f20d5b71\") " pod="openstack/heat-cfnapi-77f9b8d4df-5ptz7" Feb 17 16:25:38 crc kubenswrapper[4874]: I0217 16:25:38.909026 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cefe1b7-0d9c-4594-8368-15179b55592b-internal-tls-certs\") pod \"heat-api-684fb5885c-hr4m8\" (UID: \"7cefe1b7-0d9c-4594-8368-15179b55592b\") " pod="openstack/heat-api-684fb5885c-hr4m8" Feb 17 16:25:38 crc kubenswrapper[4874]: I0217 16:25:38.909065 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8fa43d7-df5d-4fbe-97f4-95b8f20d5b71-public-tls-certs\") pod \"heat-cfnapi-77f9b8d4df-5ptz7\" (UID: \"b8fa43d7-df5d-4fbe-97f4-95b8f20d5b71\") " pod="openstack/heat-cfnapi-77f9b8d4df-5ptz7" Feb 17 16:25:38 crc kubenswrapper[4874]: I0217 16:25:38.909221 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cefe1b7-0d9c-4594-8368-15179b55592b-combined-ca-bundle\") pod \"heat-api-684fb5885c-hr4m8\" (UID: \"7cefe1b7-0d9c-4594-8368-15179b55592b\") " pod="openstack/heat-api-684fb5885c-hr4m8" Feb 17 16:25:38 crc kubenswrapper[4874]: I0217 16:25:38.909297 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8fa43d7-df5d-4fbe-97f4-95b8f20d5b71-combined-ca-bundle\") pod 
\"heat-cfnapi-77f9b8d4df-5ptz7\" (UID: \"b8fa43d7-df5d-4fbe-97f4-95b8f20d5b71\") " pod="openstack/heat-cfnapi-77f9b8d4df-5ptz7" Feb 17 16:25:38 crc kubenswrapper[4874]: I0217 16:25:38.909320 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7cefe1b7-0d9c-4594-8368-15179b55592b-config-data-custom\") pod \"heat-api-684fb5885c-hr4m8\" (UID: \"7cefe1b7-0d9c-4594-8368-15179b55592b\") " pod="openstack/heat-api-684fb5885c-hr4m8" Feb 17 16:25:38 crc kubenswrapper[4874]: I0217 16:25:38.909335 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8fa43d7-df5d-4fbe-97f4-95b8f20d5b71-internal-tls-certs\") pod \"heat-cfnapi-77f9b8d4df-5ptz7\" (UID: \"b8fa43d7-df5d-4fbe-97f4-95b8f20d5b71\") " pod="openstack/heat-cfnapi-77f9b8d4df-5ptz7" Feb 17 16:25:38 crc kubenswrapper[4874]: I0217 16:25:38.909433 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58msh\" (UniqueName: \"kubernetes.io/projected/7cefe1b7-0d9c-4594-8368-15179b55592b-kube-api-access-58msh\") pod \"heat-api-684fb5885c-hr4m8\" (UID: \"7cefe1b7-0d9c-4594-8368-15179b55592b\") " pod="openstack/heat-api-684fb5885c-hr4m8" Feb 17 16:25:38 crc kubenswrapper[4874]: I0217 16:25:38.909476 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8fa43d7-df5d-4fbe-97f4-95b8f20d5b71-config-data-custom\") pod \"heat-cfnapi-77f9b8d4df-5ptz7\" (UID: \"b8fa43d7-df5d-4fbe-97f4-95b8f20d5b71\") " pod="openstack/heat-cfnapi-77f9b8d4df-5ptz7" Feb 17 16:25:38 crc kubenswrapper[4874]: I0217 16:25:38.909498 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7cefe1b7-0d9c-4594-8368-15179b55592b-config-data\") pod \"heat-api-684fb5885c-hr4m8\" (UID: \"7cefe1b7-0d9c-4594-8368-15179b55592b\") " pod="openstack/heat-api-684fb5885c-hr4m8" Feb 17 16:25:38 crc kubenswrapper[4874]: I0217 16:25:38.911557 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-77f9b8d4df-5ptz7"] Feb 17 16:25:39 crc kubenswrapper[4874]: I0217 16:25:39.011595 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cefe1b7-0d9c-4594-8368-15179b55592b-internal-tls-certs\") pod \"heat-api-684fb5885c-hr4m8\" (UID: \"7cefe1b7-0d9c-4594-8368-15179b55592b\") " pod="openstack/heat-api-684fb5885c-hr4m8" Feb 17 16:25:39 crc kubenswrapper[4874]: I0217 16:25:39.011665 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8fa43d7-df5d-4fbe-97f4-95b8f20d5b71-public-tls-certs\") pod \"heat-cfnapi-77f9b8d4df-5ptz7\" (UID: \"b8fa43d7-df5d-4fbe-97f4-95b8f20d5b71\") " pod="openstack/heat-cfnapi-77f9b8d4df-5ptz7" Feb 17 16:25:39 crc kubenswrapper[4874]: I0217 16:25:39.011689 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cefe1b7-0d9c-4594-8368-15179b55592b-combined-ca-bundle\") pod \"heat-api-684fb5885c-hr4m8\" (UID: \"7cefe1b7-0d9c-4594-8368-15179b55592b\") " pod="openstack/heat-api-684fb5885c-hr4m8" Feb 17 16:25:39 crc kubenswrapper[4874]: I0217 16:25:39.011745 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8fa43d7-df5d-4fbe-97f4-95b8f20d5b71-combined-ca-bundle\") pod \"heat-cfnapi-77f9b8d4df-5ptz7\" (UID: \"b8fa43d7-df5d-4fbe-97f4-95b8f20d5b71\") " pod="openstack/heat-cfnapi-77f9b8d4df-5ptz7" Feb 17 16:25:39 crc kubenswrapper[4874]: I0217 16:25:39.011769 4874 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7cefe1b7-0d9c-4594-8368-15179b55592b-config-data-custom\") pod \"heat-api-684fb5885c-hr4m8\" (UID: \"7cefe1b7-0d9c-4594-8368-15179b55592b\") " pod="openstack/heat-api-684fb5885c-hr4m8" Feb 17 16:25:39 crc kubenswrapper[4874]: I0217 16:25:39.011783 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8fa43d7-df5d-4fbe-97f4-95b8f20d5b71-internal-tls-certs\") pod \"heat-cfnapi-77f9b8d4df-5ptz7\" (UID: \"b8fa43d7-df5d-4fbe-97f4-95b8f20d5b71\") " pod="openstack/heat-cfnapi-77f9b8d4df-5ptz7" Feb 17 16:25:39 crc kubenswrapper[4874]: I0217 16:25:39.011865 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58msh\" (UniqueName: \"kubernetes.io/projected/7cefe1b7-0d9c-4594-8368-15179b55592b-kube-api-access-58msh\") pod \"heat-api-684fb5885c-hr4m8\" (UID: \"7cefe1b7-0d9c-4594-8368-15179b55592b\") " pod="openstack/heat-api-684fb5885c-hr4m8" Feb 17 16:25:39 crc kubenswrapper[4874]: I0217 16:25:39.011907 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8fa43d7-df5d-4fbe-97f4-95b8f20d5b71-config-data-custom\") pod \"heat-cfnapi-77f9b8d4df-5ptz7\" (UID: \"b8fa43d7-df5d-4fbe-97f4-95b8f20d5b71\") " pod="openstack/heat-cfnapi-77f9b8d4df-5ptz7" Feb 17 16:25:39 crc kubenswrapper[4874]: I0217 16:25:39.011930 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cefe1b7-0d9c-4594-8368-15179b55592b-config-data\") pod \"heat-api-684fb5885c-hr4m8\" (UID: \"7cefe1b7-0d9c-4594-8368-15179b55592b\") " pod="openstack/heat-api-684fb5885c-hr4m8" Feb 17 16:25:39 crc kubenswrapper[4874]: I0217 16:25:39.011966 4874 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-kjkcd\" (UniqueName: \"kubernetes.io/projected/b8fa43d7-df5d-4fbe-97f4-95b8f20d5b71-kube-api-access-kjkcd\") pod \"heat-cfnapi-77f9b8d4df-5ptz7\" (UID: \"b8fa43d7-df5d-4fbe-97f4-95b8f20d5b71\") " pod="openstack/heat-cfnapi-77f9b8d4df-5ptz7" Feb 17 16:25:39 crc kubenswrapper[4874]: I0217 16:25:39.011998 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cefe1b7-0d9c-4594-8368-15179b55592b-public-tls-certs\") pod \"heat-api-684fb5885c-hr4m8\" (UID: \"7cefe1b7-0d9c-4594-8368-15179b55592b\") " pod="openstack/heat-api-684fb5885c-hr4m8" Feb 17 16:25:39 crc kubenswrapper[4874]: I0217 16:25:39.012016 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8fa43d7-df5d-4fbe-97f4-95b8f20d5b71-config-data\") pod \"heat-cfnapi-77f9b8d4df-5ptz7\" (UID: \"b8fa43d7-df5d-4fbe-97f4-95b8f20d5b71\") " pod="openstack/heat-cfnapi-77f9b8d4df-5ptz7" Feb 17 16:25:39 crc kubenswrapper[4874]: I0217 16:25:39.017995 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6578955fd5-8f8gg" podUID="9ef65b51-8db2-4513-89dc-a6ec4c27c22d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.204:5353: connect: connection refused" Feb 17 16:25:39 crc kubenswrapper[4874]: I0217 16:25:39.020203 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8fa43d7-df5d-4fbe-97f4-95b8f20d5b71-internal-tls-certs\") pod \"heat-cfnapi-77f9b8d4df-5ptz7\" (UID: \"b8fa43d7-df5d-4fbe-97f4-95b8f20d5b71\") " pod="openstack/heat-cfnapi-77f9b8d4df-5ptz7" Feb 17 16:25:39 crc kubenswrapper[4874]: I0217 16:25:39.023595 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7cefe1b7-0d9c-4594-8368-15179b55592b-public-tls-certs\") pod \"heat-api-684fb5885c-hr4m8\" (UID: \"7cefe1b7-0d9c-4594-8368-15179b55592b\") " pod="openstack/heat-api-684fb5885c-hr4m8" Feb 17 16:25:39 crc kubenswrapper[4874]: I0217 16:25:39.036150 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cefe1b7-0d9c-4594-8368-15179b55592b-combined-ca-bundle\") pod \"heat-api-684fb5885c-hr4m8\" (UID: \"7cefe1b7-0d9c-4594-8368-15179b55592b\") " pod="openstack/heat-api-684fb5885c-hr4m8" Feb 17 16:25:39 crc kubenswrapper[4874]: I0217 16:25:39.037285 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8fa43d7-df5d-4fbe-97f4-95b8f20d5b71-config-data-custom\") pod \"heat-cfnapi-77f9b8d4df-5ptz7\" (UID: \"b8fa43d7-df5d-4fbe-97f4-95b8f20d5b71\") " pod="openstack/heat-cfnapi-77f9b8d4df-5ptz7" Feb 17 16:25:39 crc kubenswrapper[4874]: I0217 16:25:39.038165 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8fa43d7-df5d-4fbe-97f4-95b8f20d5b71-combined-ca-bundle\") pod \"heat-cfnapi-77f9b8d4df-5ptz7\" (UID: \"b8fa43d7-df5d-4fbe-97f4-95b8f20d5b71\") " pod="openstack/heat-cfnapi-77f9b8d4df-5ptz7" Feb 17 16:25:39 crc kubenswrapper[4874]: I0217 16:25:39.046696 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cefe1b7-0d9c-4594-8368-15179b55592b-internal-tls-certs\") pod \"heat-api-684fb5885c-hr4m8\" (UID: \"7cefe1b7-0d9c-4594-8368-15179b55592b\") " pod="openstack/heat-api-684fb5885c-hr4m8" Feb 17 16:25:39 crc kubenswrapper[4874]: I0217 16:25:39.046750 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8fa43d7-df5d-4fbe-97f4-95b8f20d5b71-config-data\") pod 
\"heat-cfnapi-77f9b8d4df-5ptz7\" (UID: \"b8fa43d7-df5d-4fbe-97f4-95b8f20d5b71\") " pod="openstack/heat-cfnapi-77f9b8d4df-5ptz7" Feb 17 16:25:39 crc kubenswrapper[4874]: I0217 16:25:39.048662 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8fa43d7-df5d-4fbe-97f4-95b8f20d5b71-public-tls-certs\") pod \"heat-cfnapi-77f9b8d4df-5ptz7\" (UID: \"b8fa43d7-df5d-4fbe-97f4-95b8f20d5b71\") " pod="openstack/heat-cfnapi-77f9b8d4df-5ptz7" Feb 17 16:25:39 crc kubenswrapper[4874]: I0217 16:25:39.049780 4874 generic.go:334] "Generic (PLEG): container finished" podID="9ef65b51-8db2-4513-89dc-a6ec4c27c22d" containerID="a73e1006f82695e15769264553bcea390be8051c12c8c1b4d49dcb59c250ddac" exitCode=0 Feb 17 16:25:39 crc kubenswrapper[4874]: I0217 16:25:39.049851 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-8f8gg" event={"ID":"9ef65b51-8db2-4513-89dc-a6ec4c27c22d","Type":"ContainerDied","Data":"a73e1006f82695e15769264553bcea390be8051c12c8c1b4d49dcb59c250ddac"} Feb 17 16:25:39 crc kubenswrapper[4874]: I0217 16:25:39.049954 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7cefe1b7-0d9c-4594-8368-15179b55592b-config-data-custom\") pod \"heat-api-684fb5885c-hr4m8\" (UID: \"7cefe1b7-0d9c-4594-8368-15179b55592b\") " pod="openstack/heat-api-684fb5885c-hr4m8" Feb 17 16:25:39 crc kubenswrapper[4874]: I0217 16:25:39.052166 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cefe1b7-0d9c-4594-8368-15179b55592b-config-data\") pod \"heat-api-684fb5885c-hr4m8\" (UID: \"7cefe1b7-0d9c-4594-8368-15179b55592b\") " pod="openstack/heat-api-684fb5885c-hr4m8" Feb 17 16:25:39 crc kubenswrapper[4874]: I0217 16:25:39.053309 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjkcd\" (UniqueName: 
\"kubernetes.io/projected/b8fa43d7-df5d-4fbe-97f4-95b8f20d5b71-kube-api-access-kjkcd\") pod \"heat-cfnapi-77f9b8d4df-5ptz7\" (UID: \"b8fa43d7-df5d-4fbe-97f4-95b8f20d5b71\") " pod="openstack/heat-cfnapi-77f9b8d4df-5ptz7" Feb 17 16:25:39 crc kubenswrapper[4874]: I0217 16:25:39.055773 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58msh\" (UniqueName: \"kubernetes.io/projected/7cefe1b7-0d9c-4594-8368-15179b55592b-kube-api-access-58msh\") pod \"heat-api-684fb5885c-hr4m8\" (UID: \"7cefe1b7-0d9c-4594-8368-15179b55592b\") " pod="openstack/heat-api-684fb5885c-hr4m8" Feb 17 16:25:39 crc kubenswrapper[4874]: I0217 16:25:39.110633 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:25:39 crc kubenswrapper[4874]: I0217 16:25:39.111171 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0" containerName="ceilometer-central-agent" containerID="cri-o://9572c539cf08f40bfa92a72a4e59089ca390c0f8fdcb5665c32e36b88637e7a5" gracePeriod=30 Feb 17 16:25:39 crc kubenswrapper[4874]: I0217 16:25:39.111262 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0" containerName="proxy-httpd" containerID="cri-o://1dbbca4bbde89258084365d048d7ef0c9ea36ff9ab7f1a3b163bd87b0a130e03" gracePeriod=30 Feb 17 16:25:39 crc kubenswrapper[4874]: I0217 16:25:39.111290 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0" containerName="ceilometer-notification-agent" containerID="cri-o://da61174cd4463e433a554d53eb5b40f39e0c6bbdd8faccbf4b623f076f538f9b" gracePeriod=30 Feb 17 16:25:39 crc kubenswrapper[4874]: I0217 16:25:39.111398 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0" containerName="sg-core" containerID="cri-o://09239b10e4cf2ec51f9128a9d479ad47dadb97f1f9f7017e324303f6f3528e9f" gracePeriod=30 Feb 17 16:25:39 crc kubenswrapper[4874]: I0217 16:25:39.212120 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-684fb5885c-hr4m8" Feb 17 16:25:39 crc kubenswrapper[4874]: I0217 16:25:39.236535 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-77f9b8d4df-5ptz7" Feb 17 16:25:39 crc kubenswrapper[4874]: I0217 16:25:39.515634 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 17 16:25:40 crc kubenswrapper[4874]: I0217 16:25:40.064322 4874 generic.go:334] "Generic (PLEG): container finished" podID="e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0" containerID="09239b10e4cf2ec51f9128a9d479ad47dadb97f1f9f7017e324303f6f3528e9f" exitCode=2 Feb 17 16:25:40 crc kubenswrapper[4874]: I0217 16:25:40.064612 4874 generic.go:334] "Generic (PLEG): container finished" podID="e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0" containerID="9572c539cf08f40bfa92a72a4e59089ca390c0f8fdcb5665c32e36b88637e7a5" exitCode=0 Feb 17 16:25:40 crc kubenswrapper[4874]: I0217 16:25:40.064382 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0","Type":"ContainerDied","Data":"09239b10e4cf2ec51f9128a9d479ad47dadb97f1f9f7017e324303f6f3528e9f"} Feb 17 16:25:40 crc kubenswrapper[4874]: I0217 16:25:40.064680 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0","Type":"ContainerDied","Data":"9572c539cf08f40bfa92a72a4e59089ca390c0f8fdcb5665c32e36b88637e7a5"} Feb 17 16:25:40 crc kubenswrapper[4874]: I0217 16:25:40.237801 
4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:25:40 crc kubenswrapper[4874]: I0217 16:25:40.238370 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="57e36f0d-729f-4f32-9685-6453b7c550ac" containerName="glance-log" containerID="cri-o://e2e62e569ab2a91fa0d6b81c17a0c32ecbc4bc391e57ad6e0d937471cd1196d1" gracePeriod=30 Feb 17 16:25:40 crc kubenswrapper[4874]: I0217 16:25:40.239007 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="57e36f0d-729f-4f32-9685-6453b7c550ac" containerName="glance-httpd" containerID="cri-o://c1c4a059e0bfc37b5cfb12008e01da9499f8ed035d0920acdcb712f5697767bf" gracePeriod=30 Feb 17 16:25:41 crc kubenswrapper[4874]: I0217 16:25:41.058306 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.206:3000/\": dial tcp 10.217.0.206:3000: connect: connection refused" Feb 17 16:25:41 crc kubenswrapper[4874]: I0217 16:25:41.078878 4874 generic.go:334] "Generic (PLEG): container finished" podID="e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0" containerID="1dbbca4bbde89258084365d048d7ef0c9ea36ff9ab7f1a3b163bd87b0a130e03" exitCode=0 Feb 17 16:25:41 crc kubenswrapper[4874]: I0217 16:25:41.078975 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0","Type":"ContainerDied","Data":"1dbbca4bbde89258084365d048d7ef0c9ea36ff9ab7f1a3b163bd87b0a130e03"} Feb 17 16:25:41 crc kubenswrapper[4874]: I0217 16:25:41.081654 4874 generic.go:334] "Generic (PLEG): container finished" podID="57e36f0d-729f-4f32-9685-6453b7c550ac" containerID="e2e62e569ab2a91fa0d6b81c17a0c32ecbc4bc391e57ad6e0d937471cd1196d1" exitCode=143 Feb 17 16:25:41 crc 
kubenswrapper[4874]: I0217 16:25:41.081724 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"57e36f0d-729f-4f32-9685-6453b7c550ac","Type":"ContainerDied","Data":"e2e62e569ab2a91fa0d6b81c17a0c32ecbc4bc391e57ad6e0d937471cd1196d1"} Feb 17 16:25:41 crc kubenswrapper[4874]: I0217 16:25:41.638319 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-8f8gg" Feb 17 16:25:41 crc kubenswrapper[4874]: I0217 16:25:41.673066 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ef65b51-8db2-4513-89dc-a6ec4c27c22d-ovsdbserver-sb\") pod \"9ef65b51-8db2-4513-89dc-a6ec4c27c22d\" (UID: \"9ef65b51-8db2-4513-89dc-a6ec4c27c22d\") " Feb 17 16:25:41 crc kubenswrapper[4874]: I0217 16:25:41.673160 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ef65b51-8db2-4513-89dc-a6ec4c27c22d-ovsdbserver-nb\") pod \"9ef65b51-8db2-4513-89dc-a6ec4c27c22d\" (UID: \"9ef65b51-8db2-4513-89dc-a6ec4c27c22d\") " Feb 17 16:25:41 crc kubenswrapper[4874]: I0217 16:25:41.673198 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ef65b51-8db2-4513-89dc-a6ec4c27c22d-config\") pod \"9ef65b51-8db2-4513-89dc-a6ec4c27c22d\" (UID: \"9ef65b51-8db2-4513-89dc-a6ec4c27c22d\") " Feb 17 16:25:41 crc kubenswrapper[4874]: I0217 16:25:41.673317 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ef65b51-8db2-4513-89dc-a6ec4c27c22d-dns-svc\") pod \"9ef65b51-8db2-4513-89dc-a6ec4c27c22d\" (UID: \"9ef65b51-8db2-4513-89dc-a6ec4c27c22d\") " Feb 17 16:25:41 crc kubenswrapper[4874]: I0217 16:25:41.673479 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ef65b51-8db2-4513-89dc-a6ec4c27c22d-dns-swift-storage-0\") pod \"9ef65b51-8db2-4513-89dc-a6ec4c27c22d\" (UID: \"9ef65b51-8db2-4513-89dc-a6ec4c27c22d\") " Feb 17 16:25:41 crc kubenswrapper[4874]: I0217 16:25:41.673529 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c859r\" (UniqueName: \"kubernetes.io/projected/9ef65b51-8db2-4513-89dc-a6ec4c27c22d-kube-api-access-c859r\") pod \"9ef65b51-8db2-4513-89dc-a6ec4c27c22d\" (UID: \"9ef65b51-8db2-4513-89dc-a6ec4c27c22d\") " Feb 17 16:25:41 crc kubenswrapper[4874]: I0217 16:25:41.679829 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ef65b51-8db2-4513-89dc-a6ec4c27c22d-kube-api-access-c859r" (OuterVolumeSpecName: "kube-api-access-c859r") pod "9ef65b51-8db2-4513-89dc-a6ec4c27c22d" (UID: "9ef65b51-8db2-4513-89dc-a6ec4c27c22d"). InnerVolumeSpecName "kube-api-access-c859r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:25:41 crc kubenswrapper[4874]: I0217 16:25:41.761514 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ef65b51-8db2-4513-89dc-a6ec4c27c22d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9ef65b51-8db2-4513-89dc-a6ec4c27c22d" (UID: "9ef65b51-8db2-4513-89dc-a6ec4c27c22d"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:25:41 crc kubenswrapper[4874]: I0217 16:25:41.775220 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c859r\" (UniqueName: \"kubernetes.io/projected/9ef65b51-8db2-4513-89dc-a6ec4c27c22d-kube-api-access-c859r\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:41 crc kubenswrapper[4874]: I0217 16:25:41.775253 4874 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ef65b51-8db2-4513-89dc-a6ec4c27c22d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:41 crc kubenswrapper[4874]: I0217 16:25:41.791204 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ef65b51-8db2-4513-89dc-a6ec4c27c22d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9ef65b51-8db2-4513-89dc-a6ec4c27c22d" (UID: "9ef65b51-8db2-4513-89dc-a6ec4c27c22d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:25:41 crc kubenswrapper[4874]: I0217 16:25:41.792764 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ef65b51-8db2-4513-89dc-a6ec4c27c22d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9ef65b51-8db2-4513-89dc-a6ec4c27c22d" (UID: "9ef65b51-8db2-4513-89dc-a6ec4c27c22d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:25:41 crc kubenswrapper[4874]: I0217 16:25:41.808184 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ef65b51-8db2-4513-89dc-a6ec4c27c22d-config" (OuterVolumeSpecName: "config") pod "9ef65b51-8db2-4513-89dc-a6ec4c27c22d" (UID: "9ef65b51-8db2-4513-89dc-a6ec4c27c22d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:25:41 crc kubenswrapper[4874]: I0217 16:25:41.809329 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ef65b51-8db2-4513-89dc-a6ec4c27c22d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9ef65b51-8db2-4513-89dc-a6ec4c27c22d" (UID: "9ef65b51-8db2-4513-89dc-a6ec4c27c22d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:25:41 crc kubenswrapper[4874]: I0217 16:25:41.878798 4874 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ef65b51-8db2-4513-89dc-a6ec4c27c22d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:41 crc kubenswrapper[4874]: I0217 16:25:41.878840 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ef65b51-8db2-4513-89dc-a6ec4c27c22d-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:41 crc kubenswrapper[4874]: I0217 16:25:41.878852 4874 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ef65b51-8db2-4513-89dc-a6ec4c27c22d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:41 crc kubenswrapper[4874]: I0217 16:25:41.878865 4874 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ef65b51-8db2-4513-89dc-a6ec4c27c22d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:42 crc kubenswrapper[4874]: I0217 16:25:42.001523 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:25:42 crc kubenswrapper[4874]: I0217 16:25:42.002056 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="eb1babe8-fc1e-42fe-ad26-3c627c6bc73f" containerName="glance-log" 
containerID="cri-o://955b09375e4d1b05269a4a63a1baf8be8a1f1e4d8f5cb5b200dc59a9a2f74b3a" gracePeriod=30 Feb 17 16:25:42 crc kubenswrapper[4874]: I0217 16:25:42.002372 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="eb1babe8-fc1e-42fe-ad26-3c627c6bc73f" containerName="glance-httpd" containerID="cri-o://a101e4938d0685428284db4ed1f088160a322daf589c50dcb3efe6ef955984f2" gracePeriod=30 Feb 17 16:25:42 crc kubenswrapper[4874]: I0217 16:25:42.160387 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-8f8gg" event={"ID":"9ef65b51-8db2-4513-89dc-a6ec4c27c22d","Type":"ContainerDied","Data":"728217a63911f6c67459b3090143d7e51e0ef253238bdf0f569a65b69100150a"} Feb 17 16:25:42 crc kubenswrapper[4874]: I0217 16:25:42.160435 4874 scope.go:117] "RemoveContainer" containerID="a73e1006f82695e15769264553bcea390be8051c12c8c1b4d49dcb59c250ddac" Feb 17 16:25:42 crc kubenswrapper[4874]: I0217 16:25:42.160589 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-8f8gg" Feb 17 16:25:42 crc kubenswrapper[4874]: I0217 16:25:42.172203 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-78b7864799-6ls5l" event={"ID":"fb7283b1-4828-4a90-bdd2-6861b7d6475b","Type":"ContainerStarted","Data":"a012ef2d85a425cd08b332f4ed4e1a9bad275e69a3962cce70123a76ed8faf78"} Feb 17 16:25:42 crc kubenswrapper[4874]: I0217 16:25:42.172423 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-78b7864799-6ls5l" podUID="fb7283b1-4828-4a90-bdd2-6861b7d6475b" containerName="heat-api" containerID="cri-o://a012ef2d85a425cd08b332f4ed4e1a9bad275e69a3962cce70123a76ed8faf78" gracePeriod=60 Feb 17 16:25:42 crc kubenswrapper[4874]: I0217 16:25:42.172580 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-78b7864799-6ls5l" Feb 17 16:25:42 crc kubenswrapper[4874]: I0217 16:25:42.176697 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-77f9b8d4df-5ptz7"] Feb 17 16:25:42 crc kubenswrapper[4874]: I0217 16:25:42.205241 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-684fb5885c-hr4m8"] Feb 17 16:25:42 crc kubenswrapper[4874]: W0217 16:25:42.220673 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cefe1b7_0d9c_4594_8368_15179b55592b.slice/crio-50620b40f48707db502bb97099f23fda054d094b5688c0c067869e1a0c27fc26 WatchSource:0}: Error finding container 50620b40f48707db502bb97099f23fda054d094b5688c0c067869e1a0c27fc26: Status 404 returned error can't find the container with id 50620b40f48707db502bb97099f23fda054d094b5688c0c067869e1a0c27fc26 Feb 17 16:25:42 crc kubenswrapper[4874]: I0217 16:25:42.226853 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-78b7864799-6ls5l" podStartSLOduration=2.341070823 
podStartE2EDuration="14.226830623s" podCreationTimestamp="2026-02-17 16:25:28 +0000 UTC" firstStartedPulling="2026-02-17 16:25:29.323190641 +0000 UTC m=+1339.617579202" lastFinishedPulling="2026-02-17 16:25:41.208950441 +0000 UTC m=+1351.503339002" observedRunningTime="2026-02-17 16:25:42.193028867 +0000 UTC m=+1352.487417428" watchObservedRunningTime="2026-02-17 16:25:42.226830623 +0000 UTC m=+1352.521219184" Feb 17 16:25:42 crc kubenswrapper[4874]: W0217 16:25:42.259150 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa32dc95_3565_4a8a_82e7_97b9eaea1b32.slice/crio-2f3e897debb3ded55c5b5e9ddd0e3e5abdd8666de57b87b9232405a865490723 WatchSource:0}: Error finding container 2f3e897debb3ded55c5b5e9ddd0e3e5abdd8666de57b87b9232405a865490723: Status 404 returned error can't find the container with id 2f3e897debb3ded55c5b5e9ddd0e3e5abdd8666de57b87b9232405a865490723 Feb 17 16:25:42 crc kubenswrapper[4874]: I0217 16:25:42.262215 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-59c46f7ffb-7jfhs"] Feb 17 16:25:42 crc kubenswrapper[4874]: W0217 16:25:42.278586 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod183c319d_de18_4198_bc81_7deedc9e9f35.slice/crio-33b3282997a4cb6f81c45368f7833c6bc3e0a3008fe4990c4a5b2b2895e4bb99 WatchSource:0}: Error finding container 33b3282997a4cb6f81c45368f7833c6bc3e0a3008fe4990c4a5b2b2895e4bb99: Status 404 returned error can't find the container with id 33b3282997a4cb6f81c45368f7833c6bc3e0a3008fe4990c4a5b2b2895e4bb99 Feb 17 16:25:42 crc kubenswrapper[4874]: I0217 16:25:42.281326 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6998947cb9-hr7zv"] Feb 17 16:25:42 crc kubenswrapper[4874]: I0217 16:25:42.293856 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-558b9bddc9-tks6t"] Feb 17 
16:25:42 crc kubenswrapper[4874]: I0217 16:25:42.489817 4874 scope.go:117] "RemoveContainer" containerID="77e8cb5312ce901dbec6e06966d68c1d04ee2e5b7c505d96db0c64d610fe62e5" Feb 17 16:25:42 crc kubenswrapper[4874]: I0217 16:25:42.503366 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-8f8gg"] Feb 17 16:25:42 crc kubenswrapper[4874]: W0217 16:25:42.517419 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5937d61b_9735_4fa9_b8ab_7441f71d4728.slice/crio-968ec99c325f3c41cbdd00e6de27b96572339c10c5c5e5ce9b7b44770db2a852 WatchSource:0}: Error finding container 968ec99c325f3c41cbdd00e6de27b96572339c10c5c5e5ce9b7b44770db2a852: Status 404 returned error can't find the container with id 968ec99c325f3c41cbdd00e6de27b96572339c10c5c5e5ce9b7b44770db2a852 Feb 17 16:25:42 crc kubenswrapper[4874]: I0217 16:25:42.521440 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-8f8gg"] Feb 17 16:25:42 crc kubenswrapper[4874]: I0217 16:25:42.532679 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-b8d6fcf6-4n78j"] Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.002114 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-jftpf"] Feb 17 16:25:43 crc kubenswrapper[4874]: E0217 16:25:43.002756 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ef65b51-8db2-4513-89dc-a6ec4c27c22d" containerName="init" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.002773 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ef65b51-8db2-4513-89dc-a6ec4c27c22d" containerName="init" Feb 17 16:25:43 crc kubenswrapper[4874]: E0217 16:25:43.002786 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ef65b51-8db2-4513-89dc-a6ec4c27c22d" containerName="dnsmasq-dns" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.002792 
4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ef65b51-8db2-4513-89dc-a6ec4c27c22d" containerName="dnsmasq-dns" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.003020 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ef65b51-8db2-4513-89dc-a6ec4c27c22d" containerName="dnsmasq-dns" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.003756 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-jftpf" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.047943 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-jftpf"] Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.116603 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmjbh\" (UniqueName: \"kubernetes.io/projected/eca34a46-69f7-4e13-8392-04acc4ea650e-kube-api-access-dmjbh\") pod \"nova-api-db-create-jftpf\" (UID: \"eca34a46-69f7-4e13-8392-04acc4ea650e\") " pod="openstack/nova-api-db-create-jftpf" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.116840 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eca34a46-69f7-4e13-8392-04acc4ea650e-operator-scripts\") pod \"nova-api-db-create-jftpf\" (UID: \"eca34a46-69f7-4e13-8392-04acc4ea650e\") " pod="openstack/nova-api-db-create-jftpf" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.214691 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-h74j4"] Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.220861 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-h74j4" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.227260 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eca34a46-69f7-4e13-8392-04acc4ea650e-operator-scripts\") pod \"nova-api-db-create-jftpf\" (UID: \"eca34a46-69f7-4e13-8392-04acc4ea650e\") " pod="openstack/nova-api-db-create-jftpf" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.227584 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmjbh\" (UniqueName: \"kubernetes.io/projected/eca34a46-69f7-4e13-8392-04acc4ea650e-kube-api-access-dmjbh\") pod \"nova-api-db-create-jftpf\" (UID: \"eca34a46-69f7-4e13-8392-04acc4ea650e\") " pod="openstack/nova-api-db-create-jftpf" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.228810 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eca34a46-69f7-4e13-8392-04acc4ea650e-operator-scripts\") pod \"nova-api-db-create-jftpf\" (UID: \"eca34a46-69f7-4e13-8392-04acc4ea650e\") " pod="openstack/nova-api-db-create-jftpf" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.235071 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-698669dc7f-2q88l" event={"ID":"99a67b9d-37fa-411f-bfbe-321623f5d8fb","Type":"ContainerStarted","Data":"205a70de0672725bf7638520f4240e801449e97038002d0756b185dd39d41736"} Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.235317 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-698669dc7f-2q88l" podUID="99a67b9d-37fa-411f-bfbe-321623f5d8fb" containerName="heat-cfnapi" containerID="cri-o://205a70de0672725bf7638520f4240e801449e97038002d0756b185dd39d41736" gracePeriod=60 Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.235418 4874 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-698669dc7f-2q88l" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.252885 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"ad509da0-c1a5-4dee-828c-783853098ee5","Type":"ContainerStarted","Data":"7a0071498849dd8c0e732426f834135845df9f68621a3ed29f841565e93f5129"} Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.258775 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmjbh\" (UniqueName: \"kubernetes.io/projected/eca34a46-69f7-4e13-8392-04acc4ea650e-kube-api-access-dmjbh\") pod \"nova-api-db-create-jftpf\" (UID: \"eca34a46-69f7-4e13-8392-04acc4ea650e\") " pod="openstack/nova-api-db-create-jftpf" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.278514 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-b8d6fcf6-4n78j" event={"ID":"5937d61b-9735-4fa9-b8ab-7441f71d4728","Type":"ContainerStarted","Data":"adeb8ca065576287990a6531e06b15474eea03f1b392ba5d78085e11de2b12a9"} Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.278554 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-b8d6fcf6-4n78j" event={"ID":"5937d61b-9735-4fa9-b8ab-7441f71d4728","Type":"ContainerStarted","Data":"968ec99c325f3c41cbdd00e6de27b96572339c10c5c5e5ce9b7b44770db2a852"} Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.283164 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-b8d6fcf6-4n78j" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.285413 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-684fb5885c-hr4m8" event={"ID":"7cefe1b7-0d9c-4594-8368-15179b55592b","Type":"ContainerStarted","Data":"f6df843d1203ceadecd32118932f01b7346cb645ca357583ad0859b7a5538748"} Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.285448 4874 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/heat-api-684fb5885c-hr4m8" event={"ID":"7cefe1b7-0d9c-4594-8368-15179b55592b","Type":"ContainerStarted","Data":"50620b40f48707db502bb97099f23fda054d094b5688c0c067869e1a0c27fc26"} Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.286052 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-684fb5885c-hr4m8" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.305859 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-558b9bddc9-tks6t" event={"ID":"86d966a5-1838-4efd-bc2e-f19189a61789","Type":"ContainerStarted","Data":"af0dfb62b242dea235bd86febf5731ddd675bcac0688c4de42843705cef4280f"} Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.305898 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-558b9bddc9-tks6t" event={"ID":"86d966a5-1838-4efd-bc2e-f19189a61789","Type":"ContainerStarted","Data":"bca001d67287124197d52bcb5124b981893f2ef2aa3bb6db38d4de0553e3fee4"} Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.323239 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-h74j4"] Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.332462 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnkz9\" (UniqueName: \"kubernetes.io/projected/2d55ab4e-9dab-4fad-8eb6-d2685a59f417-kube-api-access-wnkz9\") pod \"nova-cell0-db-create-h74j4\" (UID: \"2d55ab4e-9dab-4fad-8eb6-d2685a59f417\") " pod="openstack/nova-cell0-db-create-h74j4" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.332706 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d55ab4e-9dab-4fad-8eb6-d2685a59f417-operator-scripts\") pod \"nova-cell0-db-create-h74j4\" (UID: \"2d55ab4e-9dab-4fad-8eb6-d2685a59f417\") " pod="openstack/nova-cell0-db-create-h74j4" Feb 17 
16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.343569 4874 generic.go:334] "Generic (PLEG): container finished" podID="eb1babe8-fc1e-42fe-ad26-3c627c6bc73f" containerID="955b09375e4d1b05269a4a63a1baf8be8a1f1e4d8f5cb5b200dc59a9a2f74b3a" exitCode=143 Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.343677 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f","Type":"ContainerDied","Data":"955b09375e4d1b05269a4a63a1baf8be8a1f1e4d8f5cb5b200dc59a9a2f74b3a"} Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.350009 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-698669dc7f-2q88l" podStartSLOduration=3.606485391 podStartE2EDuration="15.349973335s" podCreationTimestamp="2026-02-17 16:25:28 +0000 UTC" firstStartedPulling="2026-02-17 16:25:29.569854706 +0000 UTC m=+1339.864243267" lastFinishedPulling="2026-02-17 16:25:41.31334265 +0000 UTC m=+1351.607731211" observedRunningTime="2026-02-17 16:25:43.262434528 +0000 UTC m=+1353.556823079" watchObservedRunningTime="2026-02-17 16:25:43.349973335 +0000 UTC m=+1353.644361916" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.362639 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-77f9b8d4df-5ptz7" event={"ID":"b8fa43d7-df5d-4fbe-97f4-95b8f20d5b71","Type":"ContainerStarted","Data":"a3f9ae1c124ee09efcbd14e356db314e87ca6935675c2ae3806ded3ebcbba2f6"} Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.362686 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-77f9b8d4df-5ptz7" event={"ID":"b8fa43d7-df5d-4fbe-97f4-95b8f20d5b71","Type":"ContainerStarted","Data":"73fbf1ce3ff0a0e9047aa13e25f5ca6afad1472150cef2e252707aa86af85637"} Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.364141 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-77f9b8d4df-5ptz7" 
Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.382935 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-59c46f7ffb-7jfhs" event={"ID":"fa32dc95-3565-4a8a-82e7-97b9eaea1b32","Type":"ContainerStarted","Data":"235c72716e600191ed3a65b5e514583706a597f2189ec5f64df253e2a91809c8"} Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.382972 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-59c46f7ffb-7jfhs" event={"ID":"fa32dc95-3565-4a8a-82e7-97b9eaea1b32","Type":"ContainerStarted","Data":"2f3e897debb3ded55c5b5e9ddd0e3e5abdd8666de57b87b9232405a865490723"} Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.384036 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-59c46f7ffb-7jfhs" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.400811 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6998947cb9-hr7zv" event={"ID":"183c319d-de18-4198-bc81-7deedc9e9f35","Type":"ContainerStarted","Data":"8a0e7c44d91f3400b00068ed1486ce4dfddb08e967cc5110bfb934452d8202a3"} Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.400853 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6998947cb9-hr7zv" event={"ID":"183c319d-de18-4198-bc81-7deedc9e9f35","Type":"ContainerStarted","Data":"33b3282997a4cb6f81c45368f7833c6bc3e0a3008fe4990c4a5b2b2895e4bb99"} Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.401786 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-s2gpr"] Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.403184 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6998947cb9-hr7zv" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.403257 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-s2gpr" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.422918 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.726980669 podStartE2EDuration="20.422897806s" podCreationTimestamp="2026-02-17 16:25:23 +0000 UTC" firstStartedPulling="2026-02-17 16:25:24.609890449 +0000 UTC m=+1334.904279010" lastFinishedPulling="2026-02-17 16:25:41.305807586 +0000 UTC m=+1351.600196147" observedRunningTime="2026-02-17 16:25:43.305380937 +0000 UTC m=+1353.599769498" watchObservedRunningTime="2026-02-17 16:25:43.422897806 +0000 UTC m=+1353.717286367" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.435566 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d55ab4e-9dab-4fad-8eb6-d2685a59f417-operator-scripts\") pod \"nova-cell0-db-create-h74j4\" (UID: \"2d55ab4e-9dab-4fad-8eb6-d2685a59f417\") " pod="openstack/nova-cell0-db-create-h74j4" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.435654 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnkz9\" (UniqueName: \"kubernetes.io/projected/2d55ab4e-9dab-4fad-8eb6-d2685a59f417-kube-api-access-wnkz9\") pod \"nova-cell0-db-create-h74j4\" (UID: \"2d55ab4e-9dab-4fad-8eb6-d2685a59f417\") " pod="openstack/nova-cell0-db-create-h74j4" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.437375 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d55ab4e-9dab-4fad-8eb6-d2685a59f417-operator-scripts\") pod \"nova-cell0-db-create-h74j4\" (UID: \"2d55ab4e-9dab-4fad-8eb6-d2685a59f417\") " pod="openstack/nova-cell0-db-create-h74j4" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.470461 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-jftpf" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.486012 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnkz9\" (UniqueName: \"kubernetes.io/projected/2d55ab4e-9dab-4fad-8eb6-d2685a59f417-kube-api-access-wnkz9\") pod \"nova-cell0-db-create-h74j4\" (UID: \"2d55ab4e-9dab-4fad-8eb6-d2685a59f417\") " pod="openstack/nova-cell0-db-create-h74j4" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.537396 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cadec02-ee87-4bed-a039-d46a59f7e25f-operator-scripts\") pod \"nova-cell1-db-create-s2gpr\" (UID: \"2cadec02-ee87-4bed-a039-d46a59f7e25f\") " pod="openstack/nova-cell1-db-create-s2gpr" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.537725 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z27wj\" (UniqueName: \"kubernetes.io/projected/2cadec02-ee87-4bed-a039-d46a59f7e25f-kube-api-access-z27wj\") pod \"nova-cell1-db-create-s2gpr\" (UID: \"2cadec02-ee87-4bed-a039-d46a59f7e25f\") " pod="openstack/nova-cell1-db-create-s2gpr" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.539827 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-fee4-account-create-update-n9l5c"] Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.549260 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-fee4-account-create-update-n9l5c" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.576693 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-s2gpr"] Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.582225 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.607055 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-fee4-account-create-update-n9l5c"] Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.621014 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-b8d6fcf6-4n78j" podStartSLOduration=8.620996725 podStartE2EDuration="8.620996725s" podCreationTimestamp="2026-02-17 16:25:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:25:43.353554882 +0000 UTC m=+1353.647943463" watchObservedRunningTime="2026-02-17 16:25:43.620996725 +0000 UTC m=+1353.915385286" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.643181 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cadec02-ee87-4bed-a039-d46a59f7e25f-operator-scripts\") pod \"nova-cell1-db-create-s2gpr\" (UID: \"2cadec02-ee87-4bed-a039-d46a59f7e25f\") " pod="openstack/nova-cell1-db-create-s2gpr" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.643308 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/369b8b1e-f1a3-423d-ac03-03855b2ec5d1-operator-scripts\") pod \"nova-api-fee4-account-create-update-n9l5c\" (UID: \"369b8b1e-f1a3-423d-ac03-03855b2ec5d1\") " pod="openstack/nova-api-fee4-account-create-update-n9l5c" Feb 17 16:25:43 crc 
kubenswrapper[4874]: I0217 16:25:43.643394 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp268\" (UniqueName: \"kubernetes.io/projected/369b8b1e-f1a3-423d-ac03-03855b2ec5d1-kube-api-access-gp268\") pod \"nova-api-fee4-account-create-update-n9l5c\" (UID: \"369b8b1e-f1a3-423d-ac03-03855b2ec5d1\") " pod="openstack/nova-api-fee4-account-create-update-n9l5c" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.643519 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z27wj\" (UniqueName: \"kubernetes.io/projected/2cadec02-ee87-4bed-a039-d46a59f7e25f-kube-api-access-z27wj\") pod \"nova-cell1-db-create-s2gpr\" (UID: \"2cadec02-ee87-4bed-a039-d46a59f7e25f\") " pod="openstack/nova-cell1-db-create-s2gpr" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.648537 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cadec02-ee87-4bed-a039-d46a59f7e25f-operator-scripts\") pod \"nova-cell1-db-create-s2gpr\" (UID: \"2cadec02-ee87-4bed-a039-d46a59f7e25f\") " pod="openstack/nova-cell1-db-create-s2gpr" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.666517 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-684fb5885c-hr4m8" podStartSLOduration=5.666500096 podStartE2EDuration="5.666500096s" podCreationTimestamp="2026-02-17 16:25:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:25:43.37557299 +0000 UTC m=+1353.669961571" watchObservedRunningTime="2026-02-17 16:25:43.666500096 +0000 UTC m=+1353.960888657" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.676906 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z27wj\" (UniqueName: 
\"kubernetes.io/projected/2cadec02-ee87-4bed-a039-d46a59f7e25f-kube-api-access-z27wj\") pod \"nova-cell1-db-create-s2gpr\" (UID: \"2cadec02-ee87-4bed-a039-d46a59f7e25f\") " pod="openstack/nova-cell1-db-create-s2gpr" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.698329 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-h74j4" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.713635 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-77f9b8d4df-5ptz7" podStartSLOduration=5.713615057 podStartE2EDuration="5.713615057s" podCreationTimestamp="2026-02-17 16:25:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:25:43.40667842 +0000 UTC m=+1353.701067001" watchObservedRunningTime="2026-02-17 16:25:43.713615057 +0000 UTC m=+1354.008003618" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.745707 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/369b8b1e-f1a3-423d-ac03-03855b2ec5d1-operator-scripts\") pod \"nova-api-fee4-account-create-update-n9l5c\" (UID: \"369b8b1e-f1a3-423d-ac03-03855b2ec5d1\") " pod="openstack/nova-api-fee4-account-create-update-n9l5c" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.745778 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp268\" (UniqueName: \"kubernetes.io/projected/369b8b1e-f1a3-423d-ac03-03855b2ec5d1-kube-api-access-gp268\") pod \"nova-api-fee4-account-create-update-n9l5c\" (UID: \"369b8b1e-f1a3-423d-ac03-03855b2ec5d1\") " pod="openstack/nova-api-fee4-account-create-update-n9l5c" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.746366 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-59c46f7ffb-7jfhs" 
podStartSLOduration=8.736056525 podStartE2EDuration="8.736056525s" podCreationTimestamp="2026-02-17 16:25:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:25:43.435513434 +0000 UTC m=+1353.729902015" watchObservedRunningTime="2026-02-17 16:25:43.736056525 +0000 UTC m=+1354.030445096" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.746448 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/369b8b1e-f1a3-423d-ac03-03855b2ec5d1-operator-scripts\") pod \"nova-api-fee4-account-create-update-n9l5c\" (UID: \"369b8b1e-f1a3-423d-ac03-03855b2ec5d1\") " pod="openstack/nova-api-fee4-account-create-update-n9l5c" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.752987 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-s2gpr" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.766796 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp268\" (UniqueName: \"kubernetes.io/projected/369b8b1e-f1a3-423d-ac03-03855b2ec5d1-kube-api-access-gp268\") pod \"nova-api-fee4-account-create-update-n9l5c\" (UID: \"369b8b1e-f1a3-423d-ac03-03855b2ec5d1\") " pod="openstack/nova-api-fee4-account-create-update-n9l5c" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.803226 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6998947cb9-hr7zv" podStartSLOduration=8.803204225 podStartE2EDuration="8.803204225s" podCreationTimestamp="2026-02-17 16:25:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:25:43.463811705 +0000 UTC m=+1353.758200266" watchObservedRunningTime="2026-02-17 16:25:43.803204225 +0000 UTC m=+1354.097592786" Feb 17 16:25:43 crc 
kubenswrapper[4874]: I0217 16:25:43.871905 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-0107-account-create-update-rhzxh"] Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.877744 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0107-account-create-update-rhzxh" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.881716 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.903617 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0107-account-create-update-rhzxh"] Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.939539 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-fee4-account-create-update-n9l5c" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.953860 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a309e0fa-75e0-4d58-92cc-09a4dbf446d4-operator-scripts\") pod \"nova-cell0-0107-account-create-update-rhzxh\" (UID: \"a309e0fa-75e0-4d58-92cc-09a4dbf446d4\") " pod="openstack/nova-cell0-0107-account-create-update-rhzxh" Feb 17 16:25:43 crc kubenswrapper[4874]: I0217 16:25:43.953992 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wljt\" (UniqueName: \"kubernetes.io/projected/a309e0fa-75e0-4d58-92cc-09a4dbf446d4-kube-api-access-8wljt\") pod \"nova-cell0-0107-account-create-update-rhzxh\" (UID: \"a309e0fa-75e0-4d58-92cc-09a4dbf446d4\") " pod="openstack/nova-cell0-0107-account-create-update-rhzxh" Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.003190 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-b434-account-create-update-zrmkx"] Feb 17 16:25:44 crc 
kubenswrapper[4874]: I0217 16:25:44.005220 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-b434-account-create-update-zrmkx" Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.008350 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 17 16:25:44 crc kubenswrapper[4874]: E0217 16:25:44.036683 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7f8675b_8a6e_41dc_8368_a5ad3ff38fd0.slice/crio-conmon-da61174cd4463e433a554d53eb5b40f39e0c6bbdd8faccbf4b623f076f538f9b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7f8675b_8a6e_41dc_8368_a5ad3ff38fd0.slice/crio-da61174cd4463e433a554d53eb5b40f39e0c6bbdd8faccbf4b623f076f538f9b.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.046142 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-b434-account-create-update-zrmkx"] Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.065568 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk8dv\" (UniqueName: \"kubernetes.io/projected/8a0d22ee-e6f1-4bf5-ae39-70bcc8c62204-kube-api-access-xk8dv\") pod \"nova-cell1-b434-account-create-update-zrmkx\" (UID: \"8a0d22ee-e6f1-4bf5-ae39-70bcc8c62204\") " pod="openstack/nova-cell1-b434-account-create-update-zrmkx" Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.065690 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a0d22ee-e6f1-4bf5-ae39-70bcc8c62204-operator-scripts\") pod \"nova-cell1-b434-account-create-update-zrmkx\" (UID: 
\"8a0d22ee-e6f1-4bf5-ae39-70bcc8c62204\") " pod="openstack/nova-cell1-b434-account-create-update-zrmkx" Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.065824 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a309e0fa-75e0-4d58-92cc-09a4dbf446d4-operator-scripts\") pod \"nova-cell0-0107-account-create-update-rhzxh\" (UID: \"a309e0fa-75e0-4d58-92cc-09a4dbf446d4\") " pod="openstack/nova-cell0-0107-account-create-update-rhzxh" Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.066026 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wljt\" (UniqueName: \"kubernetes.io/projected/a309e0fa-75e0-4d58-92cc-09a4dbf446d4-kube-api-access-8wljt\") pod \"nova-cell0-0107-account-create-update-rhzxh\" (UID: \"a309e0fa-75e0-4d58-92cc-09a4dbf446d4\") " pod="openstack/nova-cell0-0107-account-create-update-rhzxh" Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.068170 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a309e0fa-75e0-4d58-92cc-09a4dbf446d4-operator-scripts\") pod \"nova-cell0-0107-account-create-update-rhzxh\" (UID: \"a309e0fa-75e0-4d58-92cc-09a4dbf446d4\") " pod="openstack/nova-cell0-0107-account-create-update-rhzxh" Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.099957 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wljt\" (UniqueName: \"kubernetes.io/projected/a309e0fa-75e0-4d58-92cc-09a4dbf446d4-kube-api-access-8wljt\") pod \"nova-cell0-0107-account-create-update-rhzxh\" (UID: \"a309e0fa-75e0-4d58-92cc-09a4dbf446d4\") " pod="openstack/nova-cell0-0107-account-create-update-rhzxh" Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.171303 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk8dv\" (UniqueName: 
\"kubernetes.io/projected/8a0d22ee-e6f1-4bf5-ae39-70bcc8c62204-kube-api-access-xk8dv\") pod \"nova-cell1-b434-account-create-update-zrmkx\" (UID: \"8a0d22ee-e6f1-4bf5-ae39-70bcc8c62204\") " pod="openstack/nova-cell1-b434-account-create-update-zrmkx" Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.173090 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a0d22ee-e6f1-4bf5-ae39-70bcc8c62204-operator-scripts\") pod \"nova-cell1-b434-account-create-update-zrmkx\" (UID: \"8a0d22ee-e6f1-4bf5-ae39-70bcc8c62204\") " pod="openstack/nova-cell1-b434-account-create-update-zrmkx" Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.174582 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a0d22ee-e6f1-4bf5-ae39-70bcc8c62204-operator-scripts\") pod \"nova-cell1-b434-account-create-update-zrmkx\" (UID: \"8a0d22ee-e6f1-4bf5-ae39-70bcc8c62204\") " pod="openstack/nova-cell1-b434-account-create-update-zrmkx" Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.247684 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk8dv\" (UniqueName: \"kubernetes.io/projected/8a0d22ee-e6f1-4bf5-ae39-70bcc8c62204-kube-api-access-xk8dv\") pod \"nova-cell1-b434-account-create-update-zrmkx\" (UID: \"8a0d22ee-e6f1-4bf5-ae39-70bcc8c62204\") " pod="openstack/nova-cell1-b434-account-create-update-zrmkx" Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.248247 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0107-account-create-update-rhzxh" Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.340782 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-b434-account-create-update-zrmkx" Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.397742 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-jftpf"] Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.426483 4874 generic.go:334] "Generic (PLEG): container finished" podID="57e36f0d-729f-4f32-9685-6453b7c550ac" containerID="c1c4a059e0bfc37b5cfb12008e01da9499f8ed035d0920acdcb712f5697767bf" exitCode=0 Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.426784 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"57e36f0d-729f-4f32-9685-6453b7c550ac","Type":"ContainerDied","Data":"c1c4a059e0bfc37b5cfb12008e01da9499f8ed035d0920acdcb712f5697767bf"} Feb 17 16:25:44 crc kubenswrapper[4874]: W0217 16:25:44.436253 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeca34a46_69f7_4e13_8392_04acc4ea650e.slice/crio-fc5ffa55229f3b260c2ce00666fce78efeec0262a42fbc4f19e3c19d32185028 WatchSource:0}: Error finding container fc5ffa55229f3b260c2ce00666fce78efeec0262a42fbc4f19e3c19d32185028: Status 404 returned error can't find the container with id fc5ffa55229f3b260c2ce00666fce78efeec0262a42fbc4f19e3c19d32185028 Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.436852 4874 generic.go:334] "Generic (PLEG): container finished" podID="183c319d-de18-4198-bc81-7deedc9e9f35" containerID="8a0e7c44d91f3400b00068ed1486ce4dfddb08e967cc5110bfb934452d8202a3" exitCode=1 Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.437350 4874 scope.go:117] "RemoveContainer" containerID="8a0e7c44d91f3400b00068ed1486ce4dfddb08e967cc5110bfb934452d8202a3" Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.437902 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6998947cb9-hr7zv" 
event={"ID":"183c319d-de18-4198-bc81-7deedc9e9f35","Type":"ContainerDied","Data":"8a0e7c44d91f3400b00068ed1486ce4dfddb08e967cc5110bfb934452d8202a3"} Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.447218 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-558b9bddc9-tks6t" event={"ID":"86d966a5-1838-4efd-bc2e-f19189a61789","Type":"ContainerStarted","Data":"d06805c58759564c54b71d6a6c36f3d4a2787a1b39d6cb863b0529a8f13e16bb"} Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.447300 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-558b9bddc9-tks6t" Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.447849 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-558b9bddc9-tks6t" Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.460336 4874 generic.go:334] "Generic (PLEG): container finished" podID="e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0" containerID="da61174cd4463e433a554d53eb5b40f39e0c6bbdd8faccbf4b623f076f538f9b" exitCode=0 Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.473172 4874 generic.go:334] "Generic (PLEG): container finished" podID="5937d61b-9735-4fa9-b8ab-7441f71d4728" containerID="adeb8ca065576287990a6531e06b15474eea03f1b392ba5d78085e11de2b12a9" exitCode=1 Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.475010 4874 scope.go:117] "RemoveContainer" containerID="adeb8ca065576287990a6531e06b15474eea03f1b392ba5d78085e11de2b12a9" Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.530940 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-558b9bddc9-tks6t" podStartSLOduration=8.53092256 podStartE2EDuration="8.53092256s" podCreationTimestamp="2026-02-17 16:25:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:25:44.485851609 +0000 UTC m=+1354.780240170" 
watchObservedRunningTime="2026-02-17 16:25:44.53092256 +0000 UTC m=+1354.825311121" Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.603773 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ef65b51-8db2-4513-89dc-a6ec4c27c22d" path="/var/lib/kubelet/pods/9ef65b51-8db2-4513-89dc-a6ec4c27c22d/volumes" Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.605567 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0","Type":"ContainerDied","Data":"da61174cd4463e433a554d53eb5b40f39e0c6bbdd8faccbf4b623f076f538f9b"} Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.605617 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-b8d6fcf6-4n78j" event={"ID":"5937d61b-9735-4fa9-b8ab-7441f71d4728","Type":"ContainerDied","Data":"adeb8ca065576287990a6531e06b15474eea03f1b392ba5d78085e11de2b12a9"} Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.695533 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.790612 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-h74j4"] Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.817885 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-combined-ca-bundle\") pod \"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0\" (UID: \"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0\") " Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.817986 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-run-httpd\") pod \"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0\" (UID: \"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0\") " Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.818056 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lbdv\" (UniqueName: \"kubernetes.io/projected/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-kube-api-access-9lbdv\") pod \"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0\" (UID: \"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0\") " Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.822859 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-config-data\") pod \"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0\" (UID: \"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0\") " Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.823027 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-sg-core-conf-yaml\") pod \"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0\" (UID: \"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0\") " Feb 17 
16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.823131 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-log-httpd\") pod \"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0\" (UID: \"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0\") " Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.823165 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-scripts\") pod \"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0\" (UID: \"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0\") " Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.831729 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-kube-api-access-9lbdv" (OuterVolumeSpecName: "kube-api-access-9lbdv") pod "e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0" (UID: "e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0"). InnerVolumeSpecName "kube-api-access-9lbdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.832158 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0" (UID: "e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.832798 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-scripts" (OuterVolumeSpecName: "scripts") pod "e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0" (UID: "e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.926091 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0" (UID: "e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.927867 4874 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.927903 4874 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.927912 4874 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:44 crc kubenswrapper[4874]: I0217 16:25:44.927920 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lbdv\" (UniqueName: \"kubernetes.io/projected/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-kube-api-access-9lbdv\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.117008 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0" (UID: "e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.147027 4874 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.193497 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-config-data" (OuterVolumeSpecName: "config-data") pod "e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0" (UID: "e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.235969 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0" (UID: "e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.249058 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.249106 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:45 crc kubenswrapper[4874]: W0217 16:25:45.309766 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cadec02_ee87_4bed_a039_d46a59f7e25f.slice/crio-8a190339b26d15b1f4eac65b681f83dadfa91d9a74278093493ce970f48a29d1 WatchSource:0}: Error finding container 8a190339b26d15b1f4eac65b681f83dadfa91d9a74278093493ce970f48a29d1: Status 404 returned error can't find the container with id 8a190339b26d15b1f4eac65b681f83dadfa91d9a74278093493ce970f48a29d1 Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.311737 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-s2gpr"] Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.346873 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0107-account-create-update-rhzxh"] Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.384848 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-fee4-account-create-update-n9l5c"] Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.385220 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.416808 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-b434-account-create-update-zrmkx"] Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.500430 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6998947cb9-hr7zv" event={"ID":"183c319d-de18-4198-bc81-7deedc9e9f35","Type":"ContainerStarted","Data":"e0a8ce786f6aa34a725dfd16864fdd4a01819d7f2955760100c57db379b90bca"} Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.503580 4874 generic.go:334] "Generic (PLEG): container finished" podID="eb1babe8-fc1e-42fe-ad26-3c627c6bc73f" containerID="a101e4938d0685428284db4ed1f088160a322daf589c50dcb3efe6ef955984f2" exitCode=0 Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.503635 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f","Type":"ContainerDied","Data":"a101e4938d0685428284db4ed1f088160a322daf589c50dcb3efe6ef955984f2"} Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.514569 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jftpf" event={"ID":"eca34a46-69f7-4e13-8392-04acc4ea650e","Type":"ContainerStarted","Data":"3ba4101d65ab301c3f3a66a2850065fbb946cb24ba7f848786c185c65d6e5e46"} Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.514611 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jftpf" event={"ID":"eca34a46-69f7-4e13-8392-04acc4ea650e","Type":"ContainerStarted","Data":"fc5ffa55229f3b260c2ce00666fce78efeec0262a42fbc4f19e3c19d32185028"} Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.517435 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0107-account-create-update-rhzxh" 
event={"ID":"a309e0fa-75e0-4d58-92cc-09a4dbf446d4","Type":"ContainerStarted","Data":"a26b9134358b1299ca379991fec0dbd83aa8361ca56ad06dd4e4e94b49fa400c"} Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.519809 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"57e36f0d-729f-4f32-9685-6453b7c550ac","Type":"ContainerDied","Data":"fc0a8dec5b627fb6fb88c09afd086bc5a1178fa425baa0bbc9c9dad7efc8269e"} Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.519862 4874 scope.go:117] "RemoveContainer" containerID="c1c4a059e0bfc37b5cfb12008e01da9499f8ed035d0920acdcb712f5697767bf" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.520029 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.527550 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-s2gpr" event={"ID":"2cadec02-ee87-4bed-a039-d46a59f7e25f","Type":"ContainerStarted","Data":"8a190339b26d15b1f4eac65b681f83dadfa91d9a74278093493ce970f48a29d1"} Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.533314 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-h74j4" event={"ID":"2d55ab4e-9dab-4fad-8eb6-d2685a59f417","Type":"ContainerStarted","Data":"87f37962dda4a55bb76d9ea59a0032e72085685fb36fab268dccb3ca79263d51"} Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.555715 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0","Type":"ContainerDied","Data":"a9f7302b0be43452c4f111347cd00bc8490acc641758c68aed6e3f05c533d0a0"} Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.555846 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.558351 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b434-account-create-update-zrmkx" event={"ID":"8a0d22ee-e6f1-4bf5-ae39-70bcc8c62204","Type":"ContainerStarted","Data":"4730acaafbf30763771a68acea3f5d45afc7f4dbb1d821401df5f92b4d6a9c8b"} Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.558677 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57e36f0d-729f-4f32-9685-6453b7c550ac-scripts\") pod \"57e36f0d-729f-4f32-9685-6453b7c550ac\" (UID: \"57e36f0d-729f-4f32-9685-6453b7c550ac\") " Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.558931 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41894387-c8f2-4994-9975-d3df0f7781ee\") pod \"57e36f0d-729f-4f32-9685-6453b7c550ac\" (UID: \"57e36f0d-729f-4f32-9685-6453b7c550ac\") " Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.558960 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57e36f0d-729f-4f32-9685-6453b7c550ac-config-data\") pod \"57e36f0d-729f-4f32-9685-6453b7c550ac\" (UID: \"57e36f0d-729f-4f32-9685-6453b7c550ac\") " Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.560142 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n8tq\" (UniqueName: \"kubernetes.io/projected/57e36f0d-729f-4f32-9685-6453b7c550ac-kube-api-access-9n8tq\") pod \"57e36f0d-729f-4f32-9685-6453b7c550ac\" (UID: \"57e36f0d-729f-4f32-9685-6453b7c550ac\") " Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.560217 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/57e36f0d-729f-4f32-9685-6453b7c550ac-httpd-run\") pod \"57e36f0d-729f-4f32-9685-6453b7c550ac\" (UID: \"57e36f0d-729f-4f32-9685-6453b7c550ac\") " Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.560281 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e36f0d-729f-4f32-9685-6453b7c550ac-combined-ca-bundle\") pod \"57e36f0d-729f-4f32-9685-6453b7c550ac\" (UID: \"57e36f0d-729f-4f32-9685-6453b7c550ac\") " Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.560312 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/57e36f0d-729f-4f32-9685-6453b7c550ac-public-tls-certs\") pod \"57e36f0d-729f-4f32-9685-6453b7c550ac\" (UID: \"57e36f0d-729f-4f32-9685-6453b7c550ac\") " Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.560334 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57e36f0d-729f-4f32-9685-6453b7c550ac-logs\") pod \"57e36f0d-729f-4f32-9685-6453b7c550ac\" (UID: \"57e36f0d-729f-4f32-9685-6453b7c550ac\") " Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.564873 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57e36f0d-729f-4f32-9685-6453b7c550ac-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "57e36f0d-729f-4f32-9685-6453b7c550ac" (UID: "57e36f0d-729f-4f32-9685-6453b7c550ac"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.571860 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57e36f0d-729f-4f32-9685-6453b7c550ac-logs" (OuterVolumeSpecName: "logs") pod "57e36f0d-729f-4f32-9685-6453b7c550ac" (UID: "57e36f0d-729f-4f32-9685-6453b7c550ac"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.585939 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57e36f0d-729f-4f32-9685-6453b7c550ac-scripts" (OuterVolumeSpecName: "scripts") pod "57e36f0d-729f-4f32-9685-6453b7c550ac" (UID: "57e36f0d-729f-4f32-9685-6453b7c550ac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.585988 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-b8d6fcf6-4n78j" event={"ID":"5937d61b-9735-4fa9-b8ab-7441f71d4728","Type":"ContainerStarted","Data":"58a99f2a14bf7c7d1028c5894c4bdd6fedb57ad94aba6b728d996341d690f3d3"} Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.586002 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57e36f0d-729f-4f32-9685-6453b7c550ac-kube-api-access-9n8tq" (OuterVolumeSpecName: "kube-api-access-9n8tq") pod "57e36f0d-729f-4f32-9685-6453b7c550ac" (UID: "57e36f0d-729f-4f32-9685-6453b7c550ac"). InnerVolumeSpecName "kube-api-access-9n8tq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.589061 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-b8d6fcf6-4n78j" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.603979 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41894387-c8f2-4994-9975-d3df0f7781ee" (OuterVolumeSpecName: "glance") pod "57e36f0d-729f-4f32-9685-6453b7c550ac" (UID: "57e36f0d-729f-4f32-9685-6453b7c550ac"). InnerVolumeSpecName "pvc-41894387-c8f2-4994-9975-d3df0f7781ee". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.625622 4874 scope.go:117] "RemoveContainer" containerID="e2e62e569ab2a91fa0d6b81c17a0c32ecbc4bc391e57ad6e0d937471cd1196d1" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.666796 4874 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-41894387-c8f2-4994-9975-d3df0f7781ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41894387-c8f2-4994-9975-d3df0f7781ee\") on node \"crc\" " Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.666824 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n8tq\" (UniqueName: \"kubernetes.io/projected/57e36f0d-729f-4f32-9685-6453b7c550ac-kube-api-access-9n8tq\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.666935 4874 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/57e36f0d-729f-4f32-9685-6453b7c550ac-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.666951 4874 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57e36f0d-729f-4f32-9685-6453b7c550ac-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.666960 4874 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57e36f0d-729f-4f32-9685-6453b7c550ac-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.691382 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.704751 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.721840 4874 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ceilometer-0"] Feb 17 16:25:45 crc kubenswrapper[4874]: E0217 16:25:45.722336 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0" containerName="ceilometer-notification-agent" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.722349 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0" containerName="ceilometer-notification-agent" Feb 17 16:25:45 crc kubenswrapper[4874]: E0217 16:25:45.722357 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0" containerName="proxy-httpd" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.722363 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0" containerName="proxy-httpd" Feb 17 16:25:45 crc kubenswrapper[4874]: E0217 16:25:45.722385 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0" containerName="ceilometer-central-agent" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.722392 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0" containerName="ceilometer-central-agent" Feb 17 16:25:45 crc kubenswrapper[4874]: E0217 16:25:45.722421 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57e36f0d-729f-4f32-9685-6453b7c550ac" containerName="glance-httpd" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.722429 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="57e36f0d-729f-4f32-9685-6453b7c550ac" containerName="glance-httpd" Feb 17 16:25:45 crc kubenswrapper[4874]: E0217 16:25:45.722439 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0" containerName="sg-core" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.722446 4874 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0" containerName="sg-core" Feb 17 16:25:45 crc kubenswrapper[4874]: E0217 16:25:45.722466 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57e36f0d-729f-4f32-9685-6453b7c550ac" containerName="glance-log" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.722473 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="57e36f0d-729f-4f32-9685-6453b7c550ac" containerName="glance-log" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.722672 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0" containerName="ceilometer-central-agent" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.722703 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="57e36f0d-729f-4f32-9685-6453b7c550ac" containerName="glance-httpd" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.722717 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0" containerName="proxy-httpd" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.722732 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0" containerName="sg-core" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.722747 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="57e36f0d-729f-4f32-9685-6453b7c550ac" containerName="glance-log" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.722764 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0" containerName="ceilometer-notification-agent" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.725189 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.730739 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.734455 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.748729 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.818563 4874 scope.go:117] "RemoveContainer" containerID="1dbbca4bbde89258084365d048d7ef0c9ea36ff9ab7f1a3b163bd87b0a130e03" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.856577 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57e36f0d-729f-4f32-9685-6453b7c550ac-config-data" (OuterVolumeSpecName: "config-data") pod "57e36f0d-729f-4f32-9685-6453b7c550ac" (UID: "57e36f0d-729f-4f32-9685-6453b7c550ac"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.883649 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="eb1babe8-fc1e-42fe-ad26-3c627c6bc73f" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.191:9292/healthcheck\": dial tcp 10.217.0.191:9292: connect: connection refused" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.884054 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="eb1babe8-fc1e-42fe-ad26-3c627c6bc73f" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.191:9292/healthcheck\": dial tcp 10.217.0.191:9292: connect: connection refused" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.889356 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57e36f0d-729f-4f32-9685-6453b7c550ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57e36f0d-729f-4f32-9685-6453b7c550ac" (UID: "57e36f0d-729f-4f32-9685-6453b7c550ac"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.890897 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de75d382-99ba-4a94-8ab6-036d9fa19281-run-httpd\") pod \"ceilometer-0\" (UID: \"de75d382-99ba-4a94-8ab6-036d9fa19281\") " pod="openstack/ceilometer-0" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.891178 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de75d382-99ba-4a94-8ab6-036d9fa19281-scripts\") pod \"ceilometer-0\" (UID: \"de75d382-99ba-4a94-8ab6-036d9fa19281\") " pod="openstack/ceilometer-0" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.893114 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de75d382-99ba-4a94-8ab6-036d9fa19281-config-data\") pod \"ceilometer-0\" (UID: \"de75d382-99ba-4a94-8ab6-036d9fa19281\") " pod="openstack/ceilometer-0" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.893149 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de75d382-99ba-4a94-8ab6-036d9fa19281-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"de75d382-99ba-4a94-8ab6-036d9fa19281\") " pod="openstack/ceilometer-0" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.893358 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/de75d382-99ba-4a94-8ab6-036d9fa19281-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"de75d382-99ba-4a94-8ab6-036d9fa19281\") " pod="openstack/ceilometer-0" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.893446 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de75d382-99ba-4a94-8ab6-036d9fa19281-log-httpd\") pod \"ceilometer-0\" (UID: \"de75d382-99ba-4a94-8ab6-036d9fa19281\") " pod="openstack/ceilometer-0" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.893502 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snpmd\" (UniqueName: \"kubernetes.io/projected/de75d382-99ba-4a94-8ab6-036d9fa19281-kube-api-access-snpmd\") pod \"ceilometer-0\" (UID: \"de75d382-99ba-4a94-8ab6-036d9fa19281\") " pod="openstack/ceilometer-0" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.893666 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e36f0d-729f-4f32-9685-6453b7c550ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.893686 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57e36f0d-729f-4f32-9685-6453b7c550ac-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.917792 4874 scope.go:117] "RemoveContainer" containerID="09239b10e4cf2ec51f9128a9d479ad47dadb97f1f9f7017e324303f6f3528e9f" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.943996 4874 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.944178 4874 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-41894387-c8f2-4994-9975-d3df0f7781ee" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41894387-c8f2-4994-9975-d3df0f7781ee") on node "crc" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.961317 4874 scope.go:117] "RemoveContainer" containerID="da61174cd4463e433a554d53eb5b40f39e0c6bbdd8faccbf4b623f076f538f9b" Feb 17 16:25:45 crc kubenswrapper[4874]: I0217 16:25:45.990737 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57e36f0d-729f-4f32-9685-6453b7c550ac-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "57e36f0d-729f-4f32-9685-6453b7c550ac" (UID: "57e36f0d-729f-4f32-9685-6453b7c550ac"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.002154 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de75d382-99ba-4a94-8ab6-036d9fa19281-config-data\") pod \"ceilometer-0\" (UID: \"de75d382-99ba-4a94-8ab6-036d9fa19281\") " pod="openstack/ceilometer-0" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.002213 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de75d382-99ba-4a94-8ab6-036d9fa19281-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"de75d382-99ba-4a94-8ab6-036d9fa19281\") " pod="openstack/ceilometer-0" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.002285 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/de75d382-99ba-4a94-8ab6-036d9fa19281-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"de75d382-99ba-4a94-8ab6-036d9fa19281\") " pod="openstack/ceilometer-0" Feb 17 
16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.002343 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de75d382-99ba-4a94-8ab6-036d9fa19281-log-httpd\") pod \"ceilometer-0\" (UID: \"de75d382-99ba-4a94-8ab6-036d9fa19281\") " pod="openstack/ceilometer-0" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.002385 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snpmd\" (UniqueName: \"kubernetes.io/projected/de75d382-99ba-4a94-8ab6-036d9fa19281-kube-api-access-snpmd\") pod \"ceilometer-0\" (UID: \"de75d382-99ba-4a94-8ab6-036d9fa19281\") " pod="openstack/ceilometer-0" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.002470 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de75d382-99ba-4a94-8ab6-036d9fa19281-run-httpd\") pod \"ceilometer-0\" (UID: \"de75d382-99ba-4a94-8ab6-036d9fa19281\") " pod="openstack/ceilometer-0" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.002634 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de75d382-99ba-4a94-8ab6-036d9fa19281-scripts\") pod \"ceilometer-0\" (UID: \"de75d382-99ba-4a94-8ab6-036d9fa19281\") " pod="openstack/ceilometer-0" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.002869 4874 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/57e36f0d-729f-4f32-9685-6453b7c550ac-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.003456 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de75d382-99ba-4a94-8ab6-036d9fa19281-log-httpd\") pod \"ceilometer-0\" (UID: \"de75d382-99ba-4a94-8ab6-036d9fa19281\") " 
pod="openstack/ceilometer-0" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.003748 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de75d382-99ba-4a94-8ab6-036d9fa19281-run-httpd\") pod \"ceilometer-0\" (UID: \"de75d382-99ba-4a94-8ab6-036d9fa19281\") " pod="openstack/ceilometer-0" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.003846 4874 reconciler_common.go:293] "Volume detached for volume \"pvc-41894387-c8f2-4994-9975-d3df0f7781ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41894387-c8f2-4994-9975-d3df0f7781ee\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.006438 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de75d382-99ba-4a94-8ab6-036d9fa19281-scripts\") pod \"ceilometer-0\" (UID: \"de75d382-99ba-4a94-8ab6-036d9fa19281\") " pod="openstack/ceilometer-0" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.008405 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de75d382-99ba-4a94-8ab6-036d9fa19281-config-data\") pod \"ceilometer-0\" (UID: \"de75d382-99ba-4a94-8ab6-036d9fa19281\") " pod="openstack/ceilometer-0" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.009555 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/de75d382-99ba-4a94-8ab6-036d9fa19281-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"de75d382-99ba-4a94-8ab6-036d9fa19281\") " pod="openstack/ceilometer-0" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.009683 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de75d382-99ba-4a94-8ab6-036d9fa19281-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"de75d382-99ba-4a94-8ab6-036d9fa19281\") " pod="openstack/ceilometer-0" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.026527 4874 scope.go:117] "RemoveContainer" containerID="9572c539cf08f40bfa92a72a4e59089ca390c0f8fdcb5665c32e36b88637e7a5" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.030521 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snpmd\" (UniqueName: \"kubernetes.io/projected/de75d382-99ba-4a94-8ab6-036d9fa19281-kube-api-access-snpmd\") pod \"ceilometer-0\" (UID: \"de75d382-99ba-4a94-8ab6-036d9fa19281\") " pod="openstack/ceilometer-0" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.166099 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.210843 4874 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-6998947cb9-hr7zv" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.336523 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.354142 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.373868 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.380487 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.380606 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.385116 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.385316 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.516121 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57e36f0d-729f-4f32-9685-6453b7c550ac" path="/var/lib/kubelet/pods/57e36f0d-729f-4f32-9685-6453b7c550ac/volumes" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.517287 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0" path="/var/lib/kubelet/pods/e7f8675b-8a6e-41dc-8368-a5ad3ff38fd0/volumes" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.528566 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60de1cc2-3d8e-445b-b882-14385d944a1b-scripts\") pod \"glance-default-external-api-0\" (UID: \"60de1cc2-3d8e-445b-b882-14385d944a1b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.528626 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60de1cc2-3d8e-445b-b882-14385d944a1b-logs\") pod \"glance-default-external-api-0\" (UID: \"60de1cc2-3d8e-445b-b882-14385d944a1b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.528669 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60de1cc2-3d8e-445b-b882-14385d944a1b-combined-ca-bundle\") pod 
\"glance-default-external-api-0\" (UID: \"60de1cc2-3d8e-445b-b882-14385d944a1b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.528780 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-41894387-c8f2-4994-9975-d3df0f7781ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41894387-c8f2-4994-9975-d3df0f7781ee\") pod \"glance-default-external-api-0\" (UID: \"60de1cc2-3d8e-445b-b882-14385d944a1b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.529069 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nxbf\" (UniqueName: \"kubernetes.io/projected/60de1cc2-3d8e-445b-b882-14385d944a1b-kube-api-access-4nxbf\") pod \"glance-default-external-api-0\" (UID: \"60de1cc2-3d8e-445b-b882-14385d944a1b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.529112 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/60de1cc2-3d8e-445b-b882-14385d944a1b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"60de1cc2-3d8e-445b-b882-14385d944a1b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.529557 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60de1cc2-3d8e-445b-b882-14385d944a1b-config-data\") pod \"glance-default-external-api-0\" (UID: \"60de1cc2-3d8e-445b-b882-14385d944a1b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.529632 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/60de1cc2-3d8e-445b-b882-14385d944a1b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"60de1cc2-3d8e-445b-b882-14385d944a1b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.633946 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60de1cc2-3d8e-445b-b882-14385d944a1b-config-data\") pod \"glance-default-external-api-0\" (UID: \"60de1cc2-3d8e-445b-b882-14385d944a1b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.634032 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/60de1cc2-3d8e-445b-b882-14385d944a1b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"60de1cc2-3d8e-445b-b882-14385d944a1b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.634169 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60de1cc2-3d8e-445b-b882-14385d944a1b-scripts\") pod \"glance-default-external-api-0\" (UID: \"60de1cc2-3d8e-445b-b882-14385d944a1b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.634194 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60de1cc2-3d8e-445b-b882-14385d944a1b-logs\") pod \"glance-default-external-api-0\" (UID: \"60de1cc2-3d8e-445b-b882-14385d944a1b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.634218 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60de1cc2-3d8e-445b-b882-14385d944a1b-combined-ca-bundle\") pod 
\"glance-default-external-api-0\" (UID: \"60de1cc2-3d8e-445b-b882-14385d944a1b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.634274 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-41894387-c8f2-4994-9975-d3df0f7781ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41894387-c8f2-4994-9975-d3df0f7781ee\") pod \"glance-default-external-api-0\" (UID: \"60de1cc2-3d8e-445b-b882-14385d944a1b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.634409 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nxbf\" (UniqueName: \"kubernetes.io/projected/60de1cc2-3d8e-445b-b882-14385d944a1b-kube-api-access-4nxbf\") pod \"glance-default-external-api-0\" (UID: \"60de1cc2-3d8e-445b-b882-14385d944a1b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.634431 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/60de1cc2-3d8e-445b-b882-14385d944a1b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"60de1cc2-3d8e-445b-b882-14385d944a1b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.635277 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60de1cc2-3d8e-445b-b882-14385d944a1b-logs\") pod \"glance-default-external-api-0\" (UID: \"60de1cc2-3d8e-445b-b882-14385d944a1b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.635497 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/60de1cc2-3d8e-445b-b882-14385d944a1b-httpd-run\") pod \"glance-default-external-api-0\" 
(UID: \"60de1cc2-3d8e-445b-b882-14385d944a1b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.650146 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60de1cc2-3d8e-445b-b882-14385d944a1b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"60de1cc2-3d8e-445b-b882-14385d944a1b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.656775 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60de1cc2-3d8e-445b-b882-14385d944a1b-config-data\") pod \"glance-default-external-api-0\" (UID: \"60de1cc2-3d8e-445b-b882-14385d944a1b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.659590 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60de1cc2-3d8e-445b-b882-14385d944a1b-scripts\") pod \"glance-default-external-api-0\" (UID: \"60de1cc2-3d8e-445b-b882-14385d944a1b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.661838 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-s2gpr" event={"ID":"2cadec02-ee87-4bed-a039-d46a59f7e25f","Type":"ContainerStarted","Data":"911faef16a68b6fdb6fbbdaabc2a85ba5b50eb3f5213a3265919fbc67d897aa2"} Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.663622 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nxbf\" (UniqueName: \"kubernetes.io/projected/60de1cc2-3d8e-445b-b882-14385d944a1b-kube-api-access-4nxbf\") pod \"glance-default-external-api-0\" (UID: \"60de1cc2-3d8e-445b-b882-14385d944a1b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.664871 
4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-h74j4" event={"ID":"2d55ab4e-9dab-4fad-8eb6-d2685a59f417","Type":"ContainerStarted","Data":"a34fc49aca537a851dcdc1129957abada85f68fa99191e7c29eac8b69996dd15"} Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.671140 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/60de1cc2-3d8e-445b-b882-14385d944a1b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"60de1cc2-3d8e-445b-b882-14385d944a1b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.673567 4874 generic.go:334] "Generic (PLEG): container finished" podID="eca34a46-69f7-4e13-8392-04acc4ea650e" containerID="3ba4101d65ab301c3f3a66a2850065fbb946cb24ba7f848786c185c65d6e5e46" exitCode=0 Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.673633 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jftpf" event={"ID":"eca34a46-69f7-4e13-8392-04acc4ea650e","Type":"ContainerDied","Data":"3ba4101d65ab301c3f3a66a2850065fbb946cb24ba7f848786c185c65d6e5e46"} Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.691159 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-fee4-account-create-update-n9l5c" event={"ID":"369b8b1e-f1a3-423d-ac03-03855b2ec5d1","Type":"ContainerStarted","Data":"54975fdd537ab0a08274bdb1e3f280e011c1e4cf97ee2a371b2f043ae641ed28"} Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.697776 4874 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.697810 4874 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-41894387-c8f2-4994-9975-d3df0f7781ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41894387-c8f2-4994-9975-d3df0f7781ee\") pod \"glance-default-external-api-0\" (UID: \"60de1cc2-3d8e-445b-b882-14385d944a1b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b268ef66cc41404bbecd9c0f528b347f586997c429bddc87782c10962eb32faa/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.707403 4874 generic.go:334] "Generic (PLEG): container finished" podID="183c319d-de18-4198-bc81-7deedc9e9f35" containerID="e0a8ce786f6aa34a725dfd16864fdd4a01819d7f2955760100c57db379b90bca" exitCode=1 Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.707481 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6998947cb9-hr7zv" event={"ID":"183c319d-de18-4198-bc81-7deedc9e9f35","Type":"ContainerDied","Data":"e0a8ce786f6aa34a725dfd16864fdd4a01819d7f2955760100c57db379b90bca"} Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.707518 4874 scope.go:117] "RemoveContainer" containerID="8a0e7c44d91f3400b00068ed1486ce4dfddb08e967cc5110bfb934452d8202a3" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.711344 4874 scope.go:117] "RemoveContainer" containerID="e0a8ce786f6aa34a725dfd16864fdd4a01819d7f2955760100c57db379b90bca" Feb 17 16:25:46 crc kubenswrapper[4874]: E0217 16:25:46.711687 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6998947cb9-hr7zv_openstack(183c319d-de18-4198-bc81-7deedc9e9f35)\"" pod="openstack/heat-api-6998947cb9-hr7zv" podUID="183c319d-de18-4198-bc81-7deedc9e9f35" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 
16:25:46.720421 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-s2gpr" podStartSLOduration=3.720401329 podStartE2EDuration="3.720401329s" podCreationTimestamp="2026-02-17 16:25:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:25:46.694574018 +0000 UTC m=+1356.988962589" watchObservedRunningTime="2026-02-17 16:25:46.720401329 +0000 UTC m=+1357.014789890" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.757258 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.765228 4874 generic.go:334] "Generic (PLEG): container finished" podID="5937d61b-9735-4fa9-b8ab-7441f71d4728" containerID="58a99f2a14bf7c7d1028c5894c4bdd6fedb57ad94aba6b728d996341d690f3d3" exitCode=1 Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.765279 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-b8d6fcf6-4n78j" event={"ID":"5937d61b-9735-4fa9-b8ab-7441f71d4728","Type":"ContainerDied","Data":"58a99f2a14bf7c7d1028c5894c4bdd6fedb57ad94aba6b728d996341d690f3d3"} Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.766011 4874 scope.go:117] "RemoveContainer" containerID="58a99f2a14bf7c7d1028c5894c4bdd6fedb57ad94aba6b728d996341d690f3d3" Feb 17 16:25:46 crc kubenswrapper[4874]: E0217 16:25:46.766353 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-b8d6fcf6-4n78j_openstack(5937d61b-9735-4fa9-b8ab-7441f71d4728)\"" pod="openstack/heat-cfnapi-b8d6fcf6-4n78j" podUID="5937d61b-9735-4fa9-b8ab-7441f71d4728" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.780292 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell0-db-create-h74j4" podStartSLOduration=3.777501023 podStartE2EDuration="3.777501023s" podCreationTimestamp="2026-02-17 16:25:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:25:46.719491187 +0000 UTC m=+1357.013879748" watchObservedRunningTime="2026-02-17 16:25:46.777501023 +0000 UTC m=+1357.071889584" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.848622 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-41894387-c8f2-4994-9975-d3df0f7781ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41894387-c8f2-4994-9975-d3df0f7781ee\") pod \"glance-default-external-api-0\" (UID: \"60de1cc2-3d8e-445b-b882-14385d944a1b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.876357 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-b434-account-create-update-zrmkx" podStartSLOduration=3.875958168 podStartE2EDuration="3.875958168s" podCreationTimestamp="2026-02-17 16:25:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:25:46.784704659 +0000 UTC m=+1357.079093240" watchObservedRunningTime="2026-02-17 16:25:46.875958168 +0000 UTC m=+1357.170346729" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.916762 4874 scope.go:117] "RemoveContainer" containerID="adeb8ca065576287990a6531e06b15474eea03f1b392ba5d78085e11de2b12a9" Feb 17 16:25:46 crc kubenswrapper[4874]: I0217 16:25:46.941160 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-0107-account-create-update-rhzxh" podStartSLOduration=3.9411353289999997 podStartE2EDuration="3.941135329s" podCreationTimestamp="2026-02-17 16:25:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:25:46.831902192 +0000 UTC m=+1357.126290753" watchObservedRunningTime="2026-02-17 16:25:46.941135329 +0000 UTC m=+1357.235523910" Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.014417 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.666436 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.697220 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.779800 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de75d382-99ba-4a94-8ab6-036d9fa19281","Type":"ContainerStarted","Data":"17c01737a97a23d3da4f5e39fdaf5c9f416cd59521eacfcb5c1f14bd8eb4c76f"} Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.790703 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-config-data\") pod \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\" (UID: \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\") " Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.790827 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-internal-tls-certs\") pod \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\" (UID: \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\") " Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.790854 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-httpd-run\") pod \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\" (UID: \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\") " Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.790962 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29\") pod \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\" (UID: \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\") " Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.791162 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-scripts\") pod \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\" (UID: \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\") " Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.791190 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldgb2\" (UniqueName: \"kubernetes.io/projected/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-kube-api-access-ldgb2\") pod \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\" (UID: \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\") " Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.791223 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-logs\") pod \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\" (UID: \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\") " Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.791285 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-combined-ca-bundle\") pod \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\" (UID: \"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f\") " Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.791394 4874 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "eb1babe8-fc1e-42fe-ad26-3c627c6bc73f" (UID: "eb1babe8-fc1e-42fe-ad26-3c627c6bc73f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.791873 4874 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.792565 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-logs" (OuterVolumeSpecName: "logs") pod "eb1babe8-fc1e-42fe-ad26-3c627c6bc73f" (UID: "eb1babe8-fc1e-42fe-ad26-3c627c6bc73f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.798062 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-kube-api-access-ldgb2" (OuterVolumeSpecName: "kube-api-access-ldgb2") pod "eb1babe8-fc1e-42fe-ad26-3c627c6bc73f" (UID: "eb1babe8-fc1e-42fe-ad26-3c627c6bc73f"). InnerVolumeSpecName "kube-api-access-ldgb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.802242 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-scripts" (OuterVolumeSpecName: "scripts") pod "eb1babe8-fc1e-42fe-ad26-3c627c6bc73f" (UID: "eb1babe8-fc1e-42fe-ad26-3c627c6bc73f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.811223 4874 generic.go:334] "Generic (PLEG): container finished" podID="2d55ab4e-9dab-4fad-8eb6-d2685a59f417" containerID="a34fc49aca537a851dcdc1129957abada85f68fa99191e7c29eac8b69996dd15" exitCode=0 Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.811312 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-h74j4" event={"ID":"2d55ab4e-9dab-4fad-8eb6-d2685a59f417","Type":"ContainerDied","Data":"a34fc49aca537a851dcdc1129957abada85f68fa99191e7c29eac8b69996dd15"} Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.831762 4874 generic.go:334] "Generic (PLEG): container finished" podID="8a0d22ee-e6f1-4bf5-ae39-70bcc8c62204" containerID="4abe6950ede63a1b516707f61db758721b74e901fcf53508675e7f5c73f6a4c3" exitCode=0 Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.831874 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b434-account-create-update-zrmkx" event={"ID":"8a0d22ee-e6f1-4bf5-ae39-70bcc8c62204","Type":"ContainerDied","Data":"4abe6950ede63a1b516707f61db758721b74e901fcf53508675e7f5c73f6a4c3"} Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.844134 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29" (OuterVolumeSpecName: "glance") pod "eb1babe8-fc1e-42fe-ad26-3c627c6bc73f" (UID: "eb1babe8-fc1e-42fe-ad26-3c627c6bc73f"). InnerVolumeSpecName "pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.846937 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb1babe8-fc1e-42fe-ad26-3c627c6bc73f" (UID: "eb1babe8-fc1e-42fe-ad26-3c627c6bc73f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.849320 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"60de1cc2-3d8e-445b-b882-14385d944a1b","Type":"ContainerStarted","Data":"838c8dec689e0b6c06ec92871f445c5257f0bdec181237959e1eedac311008b4"} Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.868527 4874 generic.go:334] "Generic (PLEG): container finished" podID="a309e0fa-75e0-4d58-92cc-09a4dbf446d4" containerID="4370a27b806f2182276b9524d64e4cb3d96a7e9a9aaa747b643df6d603511494" exitCode=0 Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.868593 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0107-account-create-update-rhzxh" event={"ID":"a309e0fa-75e0-4d58-92cc-09a4dbf446d4","Type":"ContainerDied","Data":"4370a27b806f2182276b9524d64e4cb3d96a7e9a9aaa747b643df6d603511494"} Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.872737 4874 generic.go:334] "Generic (PLEG): container finished" podID="2cadec02-ee87-4bed-a039-d46a59f7e25f" containerID="911faef16a68b6fdb6fbbdaabc2a85ba5b50eb3f5213a3265919fbc67d897aa2" exitCode=0 Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.872871 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-s2gpr" event={"ID":"2cadec02-ee87-4bed-a039-d46a59f7e25f","Type":"ContainerDied","Data":"911faef16a68b6fdb6fbbdaabc2a85ba5b50eb3f5213a3265919fbc67d897aa2"} Feb 17 16:25:47 crc 
kubenswrapper[4874]: I0217 16:25:47.874834 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eb1babe8-fc1e-42fe-ad26-3c627c6bc73f","Type":"ContainerDied","Data":"2348b160e3c21f44b17a14ecbb385c58129294adb0b5d2b6739f0d3605997206"} Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.874860 4874 scope.go:117] "RemoveContainer" containerID="a101e4938d0685428284db4ed1f088160a322daf589c50dcb3efe6ef955984f2" Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.874943 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.884630 4874 scope.go:117] "RemoveContainer" containerID="58a99f2a14bf7c7d1028c5894c4bdd6fedb57ad94aba6b728d996341d690f3d3" Feb 17 16:25:47 crc kubenswrapper[4874]: E0217 16:25:47.884931 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-b8d6fcf6-4n78j_openstack(5937d61b-9735-4fa9-b8ab-7441f71d4728)\"" pod="openstack/heat-cfnapi-b8d6fcf6-4n78j" podUID="5937d61b-9735-4fa9-b8ab-7441f71d4728" Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.893975 4874 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.894007 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldgb2\" (UniqueName: \"kubernetes.io/projected/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-kube-api-access-ldgb2\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.894021 4874 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.894035 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.895518 4874 generic.go:334] "Generic (PLEG): container finished" podID="369b8b1e-f1a3-423d-ac03-03855b2ec5d1" containerID="5d605fb9a1b09282e996a591016433a7a47a39f0390c461aa50d7c8fe1952ab4" exitCode=0 Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.895599 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-fee4-account-create-update-n9l5c" event={"ID":"369b8b1e-f1a3-423d-ac03-03855b2ec5d1","Type":"ContainerDied","Data":"5d605fb9a1b09282e996a591016433a7a47a39f0390c461aa50d7c8fe1952ab4"} Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.901672 4874 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29\") on node \"crc\" " Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.929687 4874 scope.go:117] "RemoveContainer" containerID="e0a8ce786f6aa34a725dfd16864fdd4a01819d7f2955760100c57db379b90bca" Feb 17 16:25:47 crc kubenswrapper[4874]: E0217 16:25:47.932812 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6998947cb9-hr7zv_openstack(183c319d-de18-4198-bc81-7deedc9e9f35)\"" pod="openstack/heat-api-6998947cb9-hr7zv" podUID="183c319d-de18-4198-bc81-7deedc9e9f35" Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.933863 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/secret/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "eb1babe8-fc1e-42fe-ad26-3c627c6bc73f" (UID: "eb1babe8-fc1e-42fe-ad26-3c627c6bc73f"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:47 crc kubenswrapper[4874]: I0217 16:25:47.947135 4874 scope.go:117] "RemoveContainer" containerID="955b09375e4d1b05269a4a63a1baf8be8a1f1e4d8f5cb5b200dc59a9a2f74b3a" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.000358 4874 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.000525 4874 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29") on node "crc" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.005294 4874 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.005327 4874 reconciler_common.go:293] "Volume detached for volume \"pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.005321 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-config-data" (OuterVolumeSpecName: "config-data") pod "eb1babe8-fc1e-42fe-ad26-3c627c6bc73f" (UID: "eb1babe8-fc1e-42fe-ad26-3c627c6bc73f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.108992 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.258437 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.263021 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-77965974bf-qbtfj" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.309709 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.331335 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:25:48 crc kubenswrapper[4874]: E0217 16:25:48.332056 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb1babe8-fc1e-42fe-ad26-3c627c6bc73f" containerName="glance-log" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.332182 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb1babe8-fc1e-42fe-ad26-3c627c6bc73f" containerName="glance-log" Feb 17 16:25:48 crc kubenswrapper[4874]: E0217 16:25:48.332206 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb1babe8-fc1e-42fe-ad26-3c627c6bc73f" containerName="glance-httpd" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.332215 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb1babe8-fc1e-42fe-ad26-3c627c6bc73f" containerName="glance-httpd" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.332611 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb1babe8-fc1e-42fe-ad26-3c627c6bc73f" containerName="glance-log" Feb 17 16:25:48 crc 
kubenswrapper[4874]: I0217 16:25:48.332636 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb1babe8-fc1e-42fe-ad26-3c627c6bc73f" containerName="glance-httpd" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.340569 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.350468 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.351172 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.351310 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.479771 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb1babe8-fc1e-42fe-ad26-3c627c6bc73f" path="/var/lib/kubelet/pods/eb1babe8-fc1e-42fe-ad26-3c627c6bc73f/volumes" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.519950 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa0847fc-7f03-4cfe-a655-7abf45945a22-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"aa0847fc-7f03-4cfe-a655-7abf45945a22\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.520011 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29\") pod \"glance-default-internal-api-0\" (UID: \"aa0847fc-7f03-4cfe-a655-7abf45945a22\") " 
pod="openstack/glance-default-internal-api-0" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.520149 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa0847fc-7f03-4cfe-a655-7abf45945a22-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"aa0847fc-7f03-4cfe-a655-7abf45945a22\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.520199 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa0847fc-7f03-4cfe-a655-7abf45945a22-logs\") pod \"glance-default-internal-api-0\" (UID: \"aa0847fc-7f03-4cfe-a655-7abf45945a22\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.520230 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa0847fc-7f03-4cfe-a655-7abf45945a22-scripts\") pod \"glance-default-internal-api-0\" (UID: \"aa0847fc-7f03-4cfe-a655-7abf45945a22\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.520297 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plms4\" (UniqueName: \"kubernetes.io/projected/aa0847fc-7f03-4cfe-a655-7abf45945a22-kube-api-access-plms4\") pod \"glance-default-internal-api-0\" (UID: \"aa0847fc-7f03-4cfe-a655-7abf45945a22\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.520351 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa0847fc-7f03-4cfe-a655-7abf45945a22-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"aa0847fc-7f03-4cfe-a655-7abf45945a22\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.520396 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aa0847fc-7f03-4cfe-a655-7abf45945a22-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"aa0847fc-7f03-4cfe-a655-7abf45945a22\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.550276 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-jftpf" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.622201 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmjbh\" (UniqueName: \"kubernetes.io/projected/eca34a46-69f7-4e13-8392-04acc4ea650e-kube-api-access-dmjbh\") pod \"eca34a46-69f7-4e13-8392-04acc4ea650e\" (UID: \"eca34a46-69f7-4e13-8392-04acc4ea650e\") " Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.622277 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eca34a46-69f7-4e13-8392-04acc4ea650e-operator-scripts\") pod \"eca34a46-69f7-4e13-8392-04acc4ea650e\" (UID: \"eca34a46-69f7-4e13-8392-04acc4ea650e\") " Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.622478 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa0847fc-7f03-4cfe-a655-7abf45945a22-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"aa0847fc-7f03-4cfe-a655-7abf45945a22\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.622502 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29\") pod \"glance-default-internal-api-0\" (UID: \"aa0847fc-7f03-4cfe-a655-7abf45945a22\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.623172 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eca34a46-69f7-4e13-8392-04acc4ea650e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eca34a46-69f7-4e13-8392-04acc4ea650e" (UID: "eca34a46-69f7-4e13-8392-04acc4ea650e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.623657 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa0847fc-7f03-4cfe-a655-7abf45945a22-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"aa0847fc-7f03-4cfe-a655-7abf45945a22\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.624324 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa0847fc-7f03-4cfe-a655-7abf45945a22-logs\") pod \"glance-default-internal-api-0\" (UID: \"aa0847fc-7f03-4cfe-a655-7abf45945a22\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.624397 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa0847fc-7f03-4cfe-a655-7abf45945a22-scripts\") pod \"glance-default-internal-api-0\" (UID: \"aa0847fc-7f03-4cfe-a655-7abf45945a22\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.624575 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plms4\" 
(UniqueName: \"kubernetes.io/projected/aa0847fc-7f03-4cfe-a655-7abf45945a22-kube-api-access-plms4\") pod \"glance-default-internal-api-0\" (UID: \"aa0847fc-7f03-4cfe-a655-7abf45945a22\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.624702 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa0847fc-7f03-4cfe-a655-7abf45945a22-config-data\") pod \"glance-default-internal-api-0\" (UID: \"aa0847fc-7f03-4cfe-a655-7abf45945a22\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.624781 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aa0847fc-7f03-4cfe-a655-7abf45945a22-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"aa0847fc-7f03-4cfe-a655-7abf45945a22\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.624882 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa0847fc-7f03-4cfe-a655-7abf45945a22-logs\") pod \"glance-default-internal-api-0\" (UID: \"aa0847fc-7f03-4cfe-a655-7abf45945a22\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.624942 4874 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eca34a46-69f7-4e13-8392-04acc4ea650e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.625371 4874 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.625393 4874 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29\") pod \"glance-default-internal-api-0\" (UID: \"aa0847fc-7f03-4cfe-a655-7abf45945a22\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1d818c0b3c780ebc1f2ad700eba392c18b331a7a76e8a2b3fde68119e55723e/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.626389 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aa0847fc-7f03-4cfe-a655-7abf45945a22-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"aa0847fc-7f03-4cfe-a655-7abf45945a22\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.628953 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eca34a46-69f7-4e13-8392-04acc4ea650e-kube-api-access-dmjbh" (OuterVolumeSpecName: "kube-api-access-dmjbh") pod "eca34a46-69f7-4e13-8392-04acc4ea650e" (UID: "eca34a46-69f7-4e13-8392-04acc4ea650e"). InnerVolumeSpecName "kube-api-access-dmjbh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.629273 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa0847fc-7f03-4cfe-a655-7abf45945a22-scripts\") pod \"glance-default-internal-api-0\" (UID: \"aa0847fc-7f03-4cfe-a655-7abf45945a22\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.631159 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa0847fc-7f03-4cfe-a655-7abf45945a22-config-data\") pod \"glance-default-internal-api-0\" (UID: \"aa0847fc-7f03-4cfe-a655-7abf45945a22\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.633744 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa0847fc-7f03-4cfe-a655-7abf45945a22-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"aa0847fc-7f03-4cfe-a655-7abf45945a22\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.635487 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa0847fc-7f03-4cfe-a655-7abf45945a22-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"aa0847fc-7f03-4cfe-a655-7abf45945a22\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.642924 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plms4\" (UniqueName: \"kubernetes.io/projected/aa0847fc-7f03-4cfe-a655-7abf45945a22-kube-api-access-plms4\") pod \"glance-default-internal-api-0\" (UID: \"aa0847fc-7f03-4cfe-a655-7abf45945a22\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:25:48 crc 
kubenswrapper[4874]: I0217 16:25:48.688418 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b0812c4-deb5-4231-bebd-dfb9b7d89d29\") pod \"glance-default-internal-api-0\" (UID: \"aa0847fc-7f03-4cfe-a655-7abf45945a22\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.726460 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmjbh\" (UniqueName: \"kubernetes.io/projected/eca34a46-69f7-4e13-8392-04acc4ea650e-kube-api-access-dmjbh\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.943646 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-jftpf" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.944143 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jftpf" event={"ID":"eca34a46-69f7-4e13-8392-04acc4ea650e","Type":"ContainerDied","Data":"fc5ffa55229f3b260c2ce00666fce78efeec0262a42fbc4f19e3c19d32185028"} Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.944177 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc5ffa55229f3b260c2ce00666fce78efeec0262a42fbc4f19e3c19d32185028" Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.951251 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de75d382-99ba-4a94-8ab6-036d9fa19281","Type":"ContainerStarted","Data":"b6ae2fb0bf5c7138f685b51ac6ee559d651c3c8fba3a7ce0f6a0fa916e7081fb"} Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.970019 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"60de1cc2-3d8e-445b-b882-14385d944a1b","Type":"ContainerStarted","Data":"0468382c71fa8bc877a0b55ce5b779b0ed3378897575cc8e1cd50ea7356617d2"} Feb 17 16:25:48 crc kubenswrapper[4874]: I0217 16:25:48.978871 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:25:49 crc kubenswrapper[4874]: I0217 16:25:49.458179 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-h74j4" Feb 17 16:25:49 crc kubenswrapper[4874]: I0217 16:25:49.563600 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d55ab4e-9dab-4fad-8eb6-d2685a59f417-operator-scripts\") pod \"2d55ab4e-9dab-4fad-8eb6-d2685a59f417\" (UID: \"2d55ab4e-9dab-4fad-8eb6-d2685a59f417\") " Feb 17 16:25:49 crc kubenswrapper[4874]: I0217 16:25:49.563729 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnkz9\" (UniqueName: \"kubernetes.io/projected/2d55ab4e-9dab-4fad-8eb6-d2685a59f417-kube-api-access-wnkz9\") pod \"2d55ab4e-9dab-4fad-8eb6-d2685a59f417\" (UID: \"2d55ab4e-9dab-4fad-8eb6-d2685a59f417\") " Feb 17 16:25:49 crc kubenswrapper[4874]: I0217 16:25:49.567788 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d55ab4e-9dab-4fad-8eb6-d2685a59f417-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2d55ab4e-9dab-4fad-8eb6-d2685a59f417" (UID: "2d55ab4e-9dab-4fad-8eb6-d2685a59f417"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:25:49 crc kubenswrapper[4874]: I0217 16:25:49.608024 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d55ab4e-9dab-4fad-8eb6-d2685a59f417-kube-api-access-wnkz9" (OuterVolumeSpecName: "kube-api-access-wnkz9") pod "2d55ab4e-9dab-4fad-8eb6-d2685a59f417" (UID: "2d55ab4e-9dab-4fad-8eb6-d2685a59f417"). InnerVolumeSpecName "kube-api-access-wnkz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:25:49 crc kubenswrapper[4874]: I0217 16:25:49.668880 4874 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d55ab4e-9dab-4fad-8eb6-d2685a59f417-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:49 crc kubenswrapper[4874]: I0217 16:25:49.668930 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnkz9\" (UniqueName: \"kubernetes.io/projected/2d55ab4e-9dab-4fad-8eb6-d2685a59f417-kube-api-access-wnkz9\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:49 crc kubenswrapper[4874]: I0217 16:25:49.734897 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-0107-account-create-update-rhzxh" Feb 17 16:25:49 crc kubenswrapper[4874]: I0217 16:25:49.778871 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a309e0fa-75e0-4d58-92cc-09a4dbf446d4-operator-scripts\") pod \"a309e0fa-75e0-4d58-92cc-09a4dbf446d4\" (UID: \"a309e0fa-75e0-4d58-92cc-09a4dbf446d4\") " Feb 17 16:25:49 crc kubenswrapper[4874]: I0217 16:25:49.778982 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wljt\" (UniqueName: \"kubernetes.io/projected/a309e0fa-75e0-4d58-92cc-09a4dbf446d4-kube-api-access-8wljt\") pod \"a309e0fa-75e0-4d58-92cc-09a4dbf446d4\" (UID: \"a309e0fa-75e0-4d58-92cc-09a4dbf446d4\") " Feb 17 16:25:49 crc kubenswrapper[4874]: I0217 16:25:49.795515 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a309e0fa-75e0-4d58-92cc-09a4dbf446d4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a309e0fa-75e0-4d58-92cc-09a4dbf446d4" (UID: "a309e0fa-75e0-4d58-92cc-09a4dbf446d4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:25:49 crc kubenswrapper[4874]: I0217 16:25:49.795682 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a309e0fa-75e0-4d58-92cc-09a4dbf446d4-kube-api-access-8wljt" (OuterVolumeSpecName: "kube-api-access-8wljt") pod "a309e0fa-75e0-4d58-92cc-09a4dbf446d4" (UID: "a309e0fa-75e0-4d58-92cc-09a4dbf446d4"). InnerVolumeSpecName "kube-api-access-8wljt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:25:49 crc kubenswrapper[4874]: I0217 16:25:49.882241 4874 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a309e0fa-75e0-4d58-92cc-09a4dbf446d4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:49 crc kubenswrapper[4874]: I0217 16:25:49.882476 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wljt\" (UniqueName: \"kubernetes.io/projected/a309e0fa-75e0-4d58-92cc-09a4dbf446d4-kube-api-access-8wljt\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:49 crc kubenswrapper[4874]: I0217 16:25:49.982047 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0107-account-create-update-rhzxh" Feb 17 16:25:49 crc kubenswrapper[4874]: I0217 16:25:49.982855 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0107-account-create-update-rhzxh" event={"ID":"a309e0fa-75e0-4d58-92cc-09a4dbf446d4","Type":"ContainerDied","Data":"a26b9134358b1299ca379991fec0dbd83aa8361ca56ad06dd4e4e94b49fa400c"} Feb 17 16:25:49 crc kubenswrapper[4874]: I0217 16:25:49.982903 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a26b9134358b1299ca379991fec0dbd83aa8361ca56ad06dd4e4e94b49fa400c" Feb 17 16:25:49 crc kubenswrapper[4874]: I0217 16:25:49.988633 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de75d382-99ba-4a94-8ab6-036d9fa19281","Type":"ContainerStarted","Data":"280c70b4d8422938f6652a20e417cc362bbf94fe7d9feedd7e433ca2ff98917e"} Feb 17 16:25:49 crc kubenswrapper[4874]: I0217 16:25:49.989993 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-h74j4" event={"ID":"2d55ab4e-9dab-4fad-8eb6-d2685a59f417","Type":"ContainerDied","Data":"87f37962dda4a55bb76d9ea59a0032e72085685fb36fab268dccb3ca79263d51"} Feb 17 16:25:49 crc 
kubenswrapper[4874]: I0217 16:25:49.990087 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87f37962dda4a55bb76d9ea59a0032e72085685fb36fab268dccb3ca79263d51" Feb 17 16:25:49 crc kubenswrapper[4874]: I0217 16:25:49.990209 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-h74j4" Feb 17 16:25:50 crc kubenswrapper[4874]: I0217 16:25:50.381760 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-b434-account-create-update-zrmkx" Feb 17 16:25:50 crc kubenswrapper[4874]: I0217 16:25:50.394239 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-s2gpr" Feb 17 16:25:50 crc kubenswrapper[4874]: I0217 16:25:50.398858 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xk8dv\" (UniqueName: \"kubernetes.io/projected/8a0d22ee-e6f1-4bf5-ae39-70bcc8c62204-kube-api-access-xk8dv\") pod \"8a0d22ee-e6f1-4bf5-ae39-70bcc8c62204\" (UID: \"8a0d22ee-e6f1-4bf5-ae39-70bcc8c62204\") " Feb 17 16:25:50 crc kubenswrapper[4874]: I0217 16:25:50.398901 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z27wj\" (UniqueName: \"kubernetes.io/projected/2cadec02-ee87-4bed-a039-d46a59f7e25f-kube-api-access-z27wj\") pod \"2cadec02-ee87-4bed-a039-d46a59f7e25f\" (UID: \"2cadec02-ee87-4bed-a039-d46a59f7e25f\") " Feb 17 16:25:50 crc kubenswrapper[4874]: I0217 16:25:50.398935 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a0d22ee-e6f1-4bf5-ae39-70bcc8c62204-operator-scripts\") pod \"8a0d22ee-e6f1-4bf5-ae39-70bcc8c62204\" (UID: \"8a0d22ee-e6f1-4bf5-ae39-70bcc8c62204\") " Feb 17 16:25:50 crc kubenswrapper[4874]: I0217 16:25:50.399000 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cadec02-ee87-4bed-a039-d46a59f7e25f-operator-scripts\") pod \"2cadec02-ee87-4bed-a039-d46a59f7e25f\" (UID: \"2cadec02-ee87-4bed-a039-d46a59f7e25f\") " Feb 17 16:25:50 crc kubenswrapper[4874]: I0217 16:25:50.399992 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cadec02-ee87-4bed-a039-d46a59f7e25f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2cadec02-ee87-4bed-a039-d46a59f7e25f" (UID: "2cadec02-ee87-4bed-a039-d46a59f7e25f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:25:50 crc kubenswrapper[4874]: I0217 16:25:50.408261 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:25:50 crc kubenswrapper[4874]: I0217 16:25:50.416303 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cadec02-ee87-4bed-a039-d46a59f7e25f-kube-api-access-z27wj" (OuterVolumeSpecName: "kube-api-access-z27wj") pod "2cadec02-ee87-4bed-a039-d46a59f7e25f" (UID: "2cadec02-ee87-4bed-a039-d46a59f7e25f"). InnerVolumeSpecName "kube-api-access-z27wj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:25:50 crc kubenswrapper[4874]: I0217 16:25:50.500655 4874 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cadec02-ee87-4bed-a039-d46a59f7e25f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:50 crc kubenswrapper[4874]: I0217 16:25:50.500686 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z27wj\" (UniqueName: \"kubernetes.io/projected/2cadec02-ee87-4bed-a039-d46a59f7e25f-kube-api-access-z27wj\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:50 crc kubenswrapper[4874]: I0217 16:25:50.534886 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-fee4-account-create-update-n9l5c" Feb 17 16:25:50 crc kubenswrapper[4874]: I0217 16:25:50.575696 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a0d22ee-e6f1-4bf5-ae39-70bcc8c62204-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8a0d22ee-e6f1-4bf5-ae39-70bcc8c62204" (UID: "8a0d22ee-e6f1-4bf5-ae39-70bcc8c62204"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:25:50 crc kubenswrapper[4874]: I0217 16:25:50.584759 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a0d22ee-e6f1-4bf5-ae39-70bcc8c62204-kube-api-access-xk8dv" (OuterVolumeSpecName: "kube-api-access-xk8dv") pod "8a0d22ee-e6f1-4bf5-ae39-70bcc8c62204" (UID: "8a0d22ee-e6f1-4bf5-ae39-70bcc8c62204"). InnerVolumeSpecName "kube-api-access-xk8dv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:25:50 crc kubenswrapper[4874]: I0217 16:25:50.602631 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gp268\" (UniqueName: \"kubernetes.io/projected/369b8b1e-f1a3-423d-ac03-03855b2ec5d1-kube-api-access-gp268\") pod \"369b8b1e-f1a3-423d-ac03-03855b2ec5d1\" (UID: \"369b8b1e-f1a3-423d-ac03-03855b2ec5d1\") " Feb 17 16:25:50 crc kubenswrapper[4874]: I0217 16:25:50.602975 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/369b8b1e-f1a3-423d-ac03-03855b2ec5d1-operator-scripts\") pod \"369b8b1e-f1a3-423d-ac03-03855b2ec5d1\" (UID: \"369b8b1e-f1a3-423d-ac03-03855b2ec5d1\") " Feb 17 16:25:50 crc kubenswrapper[4874]: I0217 16:25:50.603577 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xk8dv\" (UniqueName: \"kubernetes.io/projected/8a0d22ee-e6f1-4bf5-ae39-70bcc8c62204-kube-api-access-xk8dv\") on node \"crc\" DevicePath 
\"\"" Feb 17 16:25:50 crc kubenswrapper[4874]: I0217 16:25:50.603604 4874 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a0d22ee-e6f1-4bf5-ae39-70bcc8c62204-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:50 crc kubenswrapper[4874]: I0217 16:25:50.607468 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/369b8b1e-f1a3-423d-ac03-03855b2ec5d1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "369b8b1e-f1a3-423d-ac03-03855b2ec5d1" (UID: "369b8b1e-f1a3-423d-ac03-03855b2ec5d1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:25:50 crc kubenswrapper[4874]: I0217 16:25:50.618984 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/369b8b1e-f1a3-423d-ac03-03855b2ec5d1-kube-api-access-gp268" (OuterVolumeSpecName: "kube-api-access-gp268") pod "369b8b1e-f1a3-423d-ac03-03855b2ec5d1" (UID: "369b8b1e-f1a3-423d-ac03-03855b2ec5d1"). InnerVolumeSpecName "kube-api-access-gp268". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:25:50 crc kubenswrapper[4874]: I0217 16:25:50.705342 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gp268\" (UniqueName: \"kubernetes.io/projected/369b8b1e-f1a3-423d-ac03-03855b2ec5d1-kube-api-access-gp268\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:50 crc kubenswrapper[4874]: I0217 16:25:50.705662 4874 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/369b8b1e-f1a3-423d-ac03-03855b2ec5d1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:51 crc kubenswrapper[4874]: I0217 16:25:51.000668 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-fee4-account-create-update-n9l5c" Feb 17 16:25:51 crc kubenswrapper[4874]: I0217 16:25:51.000676 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-fee4-account-create-update-n9l5c" event={"ID":"369b8b1e-f1a3-423d-ac03-03855b2ec5d1","Type":"ContainerDied","Data":"54975fdd537ab0a08274bdb1e3f280e011c1e4cf97ee2a371b2f043ae641ed28"} Feb 17 16:25:51 crc kubenswrapper[4874]: I0217 16:25:51.000711 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54975fdd537ab0a08274bdb1e3f280e011c1e4cf97ee2a371b2f043ae641ed28" Feb 17 16:25:51 crc kubenswrapper[4874]: I0217 16:25:51.002340 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-s2gpr" event={"ID":"2cadec02-ee87-4bed-a039-d46a59f7e25f","Type":"ContainerDied","Data":"8a190339b26d15b1f4eac65b681f83dadfa91d9a74278093493ce970f48a29d1"} Feb 17 16:25:51 crc kubenswrapper[4874]: I0217 16:25:51.002363 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a190339b26d15b1f4eac65b681f83dadfa91d9a74278093493ce970f48a29d1" Feb 17 16:25:51 crc kubenswrapper[4874]: I0217 16:25:51.002427 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-s2gpr" Feb 17 16:25:51 crc kubenswrapper[4874]: I0217 16:25:51.004821 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-b434-account-create-update-zrmkx" Feb 17 16:25:51 crc kubenswrapper[4874]: I0217 16:25:51.004812 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b434-account-create-update-zrmkx" event={"ID":"8a0d22ee-e6f1-4bf5-ae39-70bcc8c62204","Type":"ContainerDied","Data":"4730acaafbf30763771a68acea3f5d45afc7f4dbb1d821401df5f92b4d6a9c8b"} Feb 17 16:25:51 crc kubenswrapper[4874]: I0217 16:25:51.004931 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4730acaafbf30763771a68acea3f5d45afc7f4dbb1d821401df5f92b4d6a9c8b" Feb 17 16:25:51 crc kubenswrapper[4874]: I0217 16:25:51.006909 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"60de1cc2-3d8e-445b-b882-14385d944a1b","Type":"ContainerStarted","Data":"01cea0d4732c91069a6e7aee1c9d017011fec74854378c146256b063af767464"} Feb 17 16:25:51 crc kubenswrapper[4874]: I0217 16:25:51.008064 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aa0847fc-7f03-4cfe-a655-7abf45945a22","Type":"ContainerStarted","Data":"510372c3be9677734b3e916742a7fe0c468246742047f447db3df4139070eb68"} Feb 17 16:25:51 crc kubenswrapper[4874]: I0217 16:25:51.171946 4874 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-b8d6fcf6-4n78j" Feb 17 16:25:51 crc kubenswrapper[4874]: I0217 16:25:51.172891 4874 scope.go:117] "RemoveContainer" containerID="58a99f2a14bf7c7d1028c5894c4bdd6fedb57ad94aba6b728d996341d690f3d3" Feb 17 16:25:51 crc kubenswrapper[4874]: E0217 16:25:51.173273 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-b8d6fcf6-4n78j_openstack(5937d61b-9735-4fa9-b8ab-7441f71d4728)\"" pod="openstack/heat-cfnapi-b8d6fcf6-4n78j" 
podUID="5937d61b-9735-4fa9-b8ab-7441f71d4728" Feb 17 16:25:51 crc kubenswrapper[4874]: I0217 16:25:51.210722 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6998947cb9-hr7zv" Feb 17 16:25:51 crc kubenswrapper[4874]: I0217 16:25:51.210770 4874 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-6998947cb9-hr7zv" Feb 17 16:25:51 crc kubenswrapper[4874]: I0217 16:25:51.211572 4874 scope.go:117] "RemoveContainer" containerID="e0a8ce786f6aa34a725dfd16864fdd4a01819d7f2955760100c57db379b90bca" Feb 17 16:25:51 crc kubenswrapper[4874]: E0217 16:25:51.211826 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6998947cb9-hr7zv_openstack(183c319d-de18-4198-bc81-7deedc9e9f35)\"" pod="openstack/heat-api-6998947cb9-hr7zv" podUID="183c319d-de18-4198-bc81-7deedc9e9f35" Feb 17 16:25:51 crc kubenswrapper[4874]: I0217 16:25:51.437901 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-78b7864799-6ls5l" Feb 17 16:25:51 crc kubenswrapper[4874]: I0217 16:25:51.442920 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-684fb5885c-hr4m8" Feb 17 16:25:51 crc kubenswrapper[4874]: I0217 16:25:51.582420 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-698669dc7f-2q88l" Feb 17 16:25:51 crc kubenswrapper[4874]: I0217 16:25:51.591974 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6998947cb9-hr7zv"] Feb 17 16:25:51 crc kubenswrapper[4874]: I0217 16:25:51.723605 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-77f9b8d4df-5ptz7" Feb 17 16:25:51 crc kubenswrapper[4874]: I0217 16:25:51.781646 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/heat-cfnapi-b8d6fcf6-4n78j"] Feb 17 16:25:52 crc kubenswrapper[4874]: I0217 16:25:52.419967 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-558b9bddc9-tks6t" Feb 17 16:25:52 crc kubenswrapper[4874]: I0217 16:25:52.421600 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-558b9bddc9-tks6t" Feb 17 16:25:52 crc kubenswrapper[4874]: I0217 16:25:52.424006 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aa0847fc-7f03-4cfe-a655-7abf45945a22","Type":"ContainerStarted","Data":"f87563b224be4dee8179280468bcc4bc7271a49de12d0d8d08b0893a8b93b7aa"} Feb 17 16:25:52 crc kubenswrapper[4874]: I0217 16:25:52.727320 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.727295927 podStartE2EDuration="6.727295927s" podCreationTimestamp="2026-02-17 16:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:25:52.496019858 +0000 UTC m=+1362.790408419" watchObservedRunningTime="2026-02-17 16:25:52.727295927 +0000 UTC m=+1363.021684488" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.222287 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-b8d6fcf6-4n78j" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.350259 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8hx5\" (UniqueName: \"kubernetes.io/projected/5937d61b-9735-4fa9-b8ab-7441f71d4728-kube-api-access-p8hx5\") pod \"5937d61b-9735-4fa9-b8ab-7441f71d4728\" (UID: \"5937d61b-9735-4fa9-b8ab-7441f71d4728\") " Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.350410 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5937d61b-9735-4fa9-b8ab-7441f71d4728-config-data-custom\") pod \"5937d61b-9735-4fa9-b8ab-7441f71d4728\" (UID: \"5937d61b-9735-4fa9-b8ab-7441f71d4728\") " Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.350570 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5937d61b-9735-4fa9-b8ab-7441f71d4728-config-data\") pod \"5937d61b-9735-4fa9-b8ab-7441f71d4728\" (UID: \"5937d61b-9735-4fa9-b8ab-7441f71d4728\") " Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.350595 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5937d61b-9735-4fa9-b8ab-7441f71d4728-combined-ca-bundle\") pod \"5937d61b-9735-4fa9-b8ab-7441f71d4728\" (UID: \"5937d61b-9735-4fa9-b8ab-7441f71d4728\") " Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.358485 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5937d61b-9735-4fa9-b8ab-7441f71d4728-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5937d61b-9735-4fa9-b8ab-7441f71d4728" (UID: "5937d61b-9735-4fa9-b8ab-7441f71d4728"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.358654 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6998947cb9-hr7zv" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.365395 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5937d61b-9735-4fa9-b8ab-7441f71d4728-kube-api-access-p8hx5" (OuterVolumeSpecName: "kube-api-access-p8hx5") pod "5937d61b-9735-4fa9-b8ab-7441f71d4728" (UID: "5937d61b-9735-4fa9-b8ab-7441f71d4728"). InnerVolumeSpecName "kube-api-access-p8hx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.407107 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5937d61b-9735-4fa9-b8ab-7441f71d4728-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5937d61b-9735-4fa9-b8ab-7441f71d4728" (UID: "5937d61b-9735-4fa9-b8ab-7441f71d4728"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.447442 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de75d382-99ba-4a94-8ab6-036d9fa19281","Type":"ContainerStarted","Data":"9b3bcd14d71c8cea56a1be6015dfa14a18fc68d9695f004bb8a223cd908e9c46"} Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.452801 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kbtt\" (UniqueName: \"kubernetes.io/projected/183c319d-de18-4198-bc81-7deedc9e9f35-kube-api-access-5kbtt\") pod \"183c319d-de18-4198-bc81-7deedc9e9f35\" (UID: \"183c319d-de18-4198-bc81-7deedc9e9f35\") " Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.453196 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/183c319d-de18-4198-bc81-7deedc9e9f35-combined-ca-bundle\") pod \"183c319d-de18-4198-bc81-7deedc9e9f35\" (UID: \"183c319d-de18-4198-bc81-7deedc9e9f35\") " Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.453231 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/183c319d-de18-4198-bc81-7deedc9e9f35-config-data-custom\") pod \"183c319d-de18-4198-bc81-7deedc9e9f35\" (UID: \"183c319d-de18-4198-bc81-7deedc9e9f35\") " Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.453310 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/183c319d-de18-4198-bc81-7deedc9e9f35-config-data\") pod \"183c319d-de18-4198-bc81-7deedc9e9f35\" (UID: \"183c319d-de18-4198-bc81-7deedc9e9f35\") " Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.454114 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8hx5\" (UniqueName: 
\"kubernetes.io/projected/5937d61b-9735-4fa9-b8ab-7441f71d4728-kube-api-access-p8hx5\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.454133 4874 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5937d61b-9735-4fa9-b8ab-7441f71d4728-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.454145 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5937d61b-9735-4fa9-b8ab-7441f71d4728-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.454522 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-b8d6fcf6-4n78j" event={"ID":"5937d61b-9735-4fa9-b8ab-7441f71d4728","Type":"ContainerDied","Data":"968ec99c325f3c41cbdd00e6de27b96572339c10c5c5e5ce9b7b44770db2a852"} Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.454571 4874 scope.go:117] "RemoveContainer" containerID="58a99f2a14bf7c7d1028c5894c4bdd6fedb57ad94aba6b728d996341d690f3d3" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.454687 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-b8d6fcf6-4n78j" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.462448 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6998947cb9-hr7zv" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.463482 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6998947cb9-hr7zv" event={"ID":"183c319d-de18-4198-bc81-7deedc9e9f35","Type":"ContainerDied","Data":"33b3282997a4cb6f81c45368f7833c6bc3e0a3008fe4990c4a5b2b2895e4bb99"} Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.465866 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/183c319d-de18-4198-bc81-7deedc9e9f35-kube-api-access-5kbtt" (OuterVolumeSpecName: "kube-api-access-5kbtt") pod "183c319d-de18-4198-bc81-7deedc9e9f35" (UID: "183c319d-de18-4198-bc81-7deedc9e9f35"). InnerVolumeSpecName "kube-api-access-5kbtt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.479303 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/183c319d-de18-4198-bc81-7deedc9e9f35-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "183c319d-de18-4198-bc81-7deedc9e9f35" (UID: "183c319d-de18-4198-bc81-7deedc9e9f35"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.512306 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5937d61b-9735-4fa9-b8ab-7441f71d4728-config-data" (OuterVolumeSpecName: "config-data") pod "5937d61b-9735-4fa9-b8ab-7441f71d4728" (UID: "5937d61b-9735-4fa9-b8ab-7441f71d4728"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.554868 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/183c319d-de18-4198-bc81-7deedc9e9f35-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "183c319d-de18-4198-bc81-7deedc9e9f35" (UID: "183c319d-de18-4198-bc81-7deedc9e9f35"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.558659 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5937d61b-9735-4fa9-b8ab-7441f71d4728-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.558695 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kbtt\" (UniqueName: \"kubernetes.io/projected/183c319d-de18-4198-bc81-7deedc9e9f35-kube-api-access-5kbtt\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.558706 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/183c319d-de18-4198-bc81-7deedc9e9f35-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.558722 4874 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/183c319d-de18-4198-bc81-7deedc9e9f35-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.619320 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/183c319d-de18-4198-bc81-7deedc9e9f35-config-data" (OuterVolumeSpecName: "config-data") pod "183c319d-de18-4198-bc81-7deedc9e9f35" (UID: "183c319d-de18-4198-bc81-7deedc9e9f35"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.661418 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/183c319d-de18-4198-bc81-7deedc9e9f35-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.725061 4874 scope.go:117] "RemoveContainer" containerID="e0a8ce786f6aa34a725dfd16864fdd4a01819d7f2955760100c57db379b90bca" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.811709 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-b8d6fcf6-4n78j"] Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.822746 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-b8d6fcf6-4n78j"] Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.833140 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6998947cb9-hr7zv"] Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.840705 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6998947cb9-hr7zv"] Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.899821 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ml2rb"] Feb 17 16:25:53 crc kubenswrapper[4874]: E0217 16:25:53.907340 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d55ab4e-9dab-4fad-8eb6-d2685a59f417" containerName="mariadb-database-create" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.907373 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d55ab4e-9dab-4fad-8eb6-d2685a59f417" containerName="mariadb-database-create" Feb 17 16:25:53 crc kubenswrapper[4874]: E0217 16:25:53.907386 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="183c319d-de18-4198-bc81-7deedc9e9f35" containerName="heat-api" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.907393 4874 
state_mem.go:107] "Deleted CPUSet assignment" podUID="183c319d-de18-4198-bc81-7deedc9e9f35" containerName="heat-api" Feb 17 16:25:53 crc kubenswrapper[4874]: E0217 16:25:53.907403 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a309e0fa-75e0-4d58-92cc-09a4dbf446d4" containerName="mariadb-account-create-update" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.907409 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="a309e0fa-75e0-4d58-92cc-09a4dbf446d4" containerName="mariadb-account-create-update" Feb 17 16:25:53 crc kubenswrapper[4874]: E0217 16:25:53.907424 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="369b8b1e-f1a3-423d-ac03-03855b2ec5d1" containerName="mariadb-account-create-update" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.907431 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="369b8b1e-f1a3-423d-ac03-03855b2ec5d1" containerName="mariadb-account-create-update" Feb 17 16:25:53 crc kubenswrapper[4874]: E0217 16:25:53.907459 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5937d61b-9735-4fa9-b8ab-7441f71d4728" containerName="heat-cfnapi" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.907465 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="5937d61b-9735-4fa9-b8ab-7441f71d4728" containerName="heat-cfnapi" Feb 17 16:25:53 crc kubenswrapper[4874]: E0217 16:25:53.907480 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="183c319d-de18-4198-bc81-7deedc9e9f35" containerName="heat-api" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.907486 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="183c319d-de18-4198-bc81-7deedc9e9f35" containerName="heat-api" Feb 17 16:25:53 crc kubenswrapper[4874]: E0217 16:25:53.907500 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a0d22ee-e6f1-4bf5-ae39-70bcc8c62204" containerName="mariadb-account-create-update" Feb 17 16:25:53 crc kubenswrapper[4874]: 
I0217 16:25:53.907506 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a0d22ee-e6f1-4bf5-ae39-70bcc8c62204" containerName="mariadb-account-create-update" Feb 17 16:25:53 crc kubenswrapper[4874]: E0217 16:25:53.907516 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eca34a46-69f7-4e13-8392-04acc4ea650e" containerName="mariadb-database-create" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.907521 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="eca34a46-69f7-4e13-8392-04acc4ea650e" containerName="mariadb-database-create" Feb 17 16:25:53 crc kubenswrapper[4874]: E0217 16:25:53.907536 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cadec02-ee87-4bed-a039-d46a59f7e25f" containerName="mariadb-database-create" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.907542 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cadec02-ee87-4bed-a039-d46a59f7e25f" containerName="mariadb-database-create" Feb 17 16:25:53 crc kubenswrapper[4874]: E0217 16:25:53.907555 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5937d61b-9735-4fa9-b8ab-7441f71d4728" containerName="heat-cfnapi" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.907561 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="5937d61b-9735-4fa9-b8ab-7441f71d4728" containerName="heat-cfnapi" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.908249 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="a309e0fa-75e0-4d58-92cc-09a4dbf446d4" containerName="mariadb-account-create-update" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.908272 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="eca34a46-69f7-4e13-8392-04acc4ea650e" containerName="mariadb-database-create" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.908290 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="369b8b1e-f1a3-423d-ac03-03855b2ec5d1" 
containerName="mariadb-account-create-update" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.908312 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cadec02-ee87-4bed-a039-d46a59f7e25f" containerName="mariadb-database-create" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.908328 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="183c319d-de18-4198-bc81-7deedc9e9f35" containerName="heat-api" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.908344 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a0d22ee-e6f1-4bf5-ae39-70bcc8c62204" containerName="mariadb-account-create-update" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.908356 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="183c319d-de18-4198-bc81-7deedc9e9f35" containerName="heat-api" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.908374 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="5937d61b-9735-4fa9-b8ab-7441f71d4728" containerName="heat-cfnapi" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.908384 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d55ab4e-9dab-4fad-8eb6-d2685a59f417" containerName="mariadb-database-create" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.908397 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="5937d61b-9735-4fa9-b8ab-7441f71d4728" containerName="heat-cfnapi" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.909548 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ml2rb" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.914355 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.914399 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-t5tn4" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.915036 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 17 16:25:53 crc kubenswrapper[4874]: I0217 16:25:53.917582 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ml2rb"] Feb 17 16:25:54 crc kubenswrapper[4874]: I0217 16:25:54.078607 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4327f121-2ddc-4367-9055-17c7fe4d855e-scripts\") pod \"nova-cell0-conductor-db-sync-ml2rb\" (UID: \"4327f121-2ddc-4367-9055-17c7fe4d855e\") " pod="openstack/nova-cell0-conductor-db-sync-ml2rb" Feb 17 16:25:54 crc kubenswrapper[4874]: I0217 16:25:54.078677 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4327f121-2ddc-4367-9055-17c7fe4d855e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ml2rb\" (UID: \"4327f121-2ddc-4367-9055-17c7fe4d855e\") " pod="openstack/nova-cell0-conductor-db-sync-ml2rb" Feb 17 16:25:54 crc kubenswrapper[4874]: I0217 16:25:54.078767 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jfdm\" (UniqueName: \"kubernetes.io/projected/4327f121-2ddc-4367-9055-17c7fe4d855e-kube-api-access-6jfdm\") pod \"nova-cell0-conductor-db-sync-ml2rb\" (UID: \"4327f121-2ddc-4367-9055-17c7fe4d855e\") " 
pod="openstack/nova-cell0-conductor-db-sync-ml2rb" Feb 17 16:25:54 crc kubenswrapper[4874]: I0217 16:25:54.078844 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4327f121-2ddc-4367-9055-17c7fe4d855e-config-data\") pod \"nova-cell0-conductor-db-sync-ml2rb\" (UID: \"4327f121-2ddc-4367-9055-17c7fe4d855e\") " pod="openstack/nova-cell0-conductor-db-sync-ml2rb" Feb 17 16:25:54 crc kubenswrapper[4874]: I0217 16:25:54.181302 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4327f121-2ddc-4367-9055-17c7fe4d855e-scripts\") pod \"nova-cell0-conductor-db-sync-ml2rb\" (UID: \"4327f121-2ddc-4367-9055-17c7fe4d855e\") " pod="openstack/nova-cell0-conductor-db-sync-ml2rb" Feb 17 16:25:54 crc kubenswrapper[4874]: I0217 16:25:54.181371 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4327f121-2ddc-4367-9055-17c7fe4d855e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ml2rb\" (UID: \"4327f121-2ddc-4367-9055-17c7fe4d855e\") " pod="openstack/nova-cell0-conductor-db-sync-ml2rb" Feb 17 16:25:54 crc kubenswrapper[4874]: I0217 16:25:54.181456 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jfdm\" (UniqueName: \"kubernetes.io/projected/4327f121-2ddc-4367-9055-17c7fe4d855e-kube-api-access-6jfdm\") pod \"nova-cell0-conductor-db-sync-ml2rb\" (UID: \"4327f121-2ddc-4367-9055-17c7fe4d855e\") " pod="openstack/nova-cell0-conductor-db-sync-ml2rb" Feb 17 16:25:54 crc kubenswrapper[4874]: I0217 16:25:54.181525 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4327f121-2ddc-4367-9055-17c7fe4d855e-config-data\") pod \"nova-cell0-conductor-db-sync-ml2rb\" (UID: 
\"4327f121-2ddc-4367-9055-17c7fe4d855e\") " pod="openstack/nova-cell0-conductor-db-sync-ml2rb" Feb 17 16:25:54 crc kubenswrapper[4874]: I0217 16:25:54.188877 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4327f121-2ddc-4367-9055-17c7fe4d855e-config-data\") pod \"nova-cell0-conductor-db-sync-ml2rb\" (UID: \"4327f121-2ddc-4367-9055-17c7fe4d855e\") " pod="openstack/nova-cell0-conductor-db-sync-ml2rb" Feb 17 16:25:54 crc kubenswrapper[4874]: I0217 16:25:54.189479 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4327f121-2ddc-4367-9055-17c7fe4d855e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ml2rb\" (UID: \"4327f121-2ddc-4367-9055-17c7fe4d855e\") " pod="openstack/nova-cell0-conductor-db-sync-ml2rb" Feb 17 16:25:54 crc kubenswrapper[4874]: I0217 16:25:54.194539 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4327f121-2ddc-4367-9055-17c7fe4d855e-scripts\") pod \"nova-cell0-conductor-db-sync-ml2rb\" (UID: \"4327f121-2ddc-4367-9055-17c7fe4d855e\") " pod="openstack/nova-cell0-conductor-db-sync-ml2rb" Feb 17 16:25:54 crc kubenswrapper[4874]: I0217 16:25:54.209207 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jfdm\" (UniqueName: \"kubernetes.io/projected/4327f121-2ddc-4367-9055-17c7fe4d855e-kube-api-access-6jfdm\") pod \"nova-cell0-conductor-db-sync-ml2rb\" (UID: \"4327f121-2ddc-4367-9055-17c7fe4d855e\") " pod="openstack/nova-cell0-conductor-db-sync-ml2rb" Feb 17 16:25:54 crc kubenswrapper[4874]: I0217 16:25:54.246141 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ml2rb" Feb 17 16:25:54 crc kubenswrapper[4874]: I0217 16:25:54.480367 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="183c319d-de18-4198-bc81-7deedc9e9f35" path="/var/lib/kubelet/pods/183c319d-de18-4198-bc81-7deedc9e9f35/volumes" Feb 17 16:25:54 crc kubenswrapper[4874]: I0217 16:25:54.481040 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5937d61b-9735-4fa9-b8ab-7441f71d4728" path="/var/lib/kubelet/pods/5937d61b-9735-4fa9-b8ab-7441f71d4728/volumes" Feb 17 16:25:54 crc kubenswrapper[4874]: I0217 16:25:54.481690 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aa0847fc-7f03-4cfe-a655-7abf45945a22","Type":"ContainerStarted","Data":"35def669918dfed8e7bccb7c2c6ce047bef5a23f585c41dfce2da0fd711d6a3b"} Feb 17 16:25:54 crc kubenswrapper[4874]: I0217 16:25:54.523876 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.523828177 podStartE2EDuration="6.523828177s" podCreationTimestamp="2026-02-17 16:25:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:25:54.500334283 +0000 UTC m=+1364.794722844" watchObservedRunningTime="2026-02-17 16:25:54.523828177 +0000 UTC m=+1364.818216738" Feb 17 16:25:54 crc kubenswrapper[4874]: I0217 16:25:54.989157 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ml2rb"] Feb 17 16:25:55 crc kubenswrapper[4874]: W0217 16:25:55.071497 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4327f121_2ddc_4367_9055_17c7fe4d855e.slice/crio-4c01c80ff75f545c7ad24c26f26d427f62b8c88c7db6a4f7544ae7b749530ed3 WatchSource:0}: Error finding container 
4c01c80ff75f545c7ad24c26f26d427f62b8c88c7db6a4f7544ae7b749530ed3: Status 404 returned error can't find the container with id 4c01c80ff75f545c7ad24c26f26d427f62b8c88c7db6a4f7544ae7b749530ed3 Feb 17 16:25:55 crc kubenswrapper[4874]: I0217 16:25:55.497795 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ml2rb" event={"ID":"4327f121-2ddc-4367-9055-17c7fe4d855e","Type":"ContainerStarted","Data":"4c01c80ff75f545c7ad24c26f26d427f62b8c88c7db6a4f7544ae7b749530ed3"} Feb 17 16:25:56 crc kubenswrapper[4874]: I0217 16:25:56.185151 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-59c46f7ffb-7jfhs" Feb 17 16:25:56 crc kubenswrapper[4874]: I0217 16:25:56.242450 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-77965974bf-qbtfj"] Feb 17 16:25:56 crc kubenswrapper[4874]: I0217 16:25:56.242663 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-77965974bf-qbtfj" podUID="ce22ccd7-e053-4795-bf35-e1021cfeff9d" containerName="heat-engine" containerID="cri-o://daf74c42e1fccb33337d34de03dc06c9d719522352425f3599d462b1db7b5a78" gracePeriod=60 Feb 17 16:25:57 crc kubenswrapper[4874]: I0217 16:25:57.014969 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 17 16:25:57 crc kubenswrapper[4874]: I0217 16:25:57.015314 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 17 16:25:57 crc kubenswrapper[4874]: I0217 16:25:57.056882 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 17 16:25:57 crc kubenswrapper[4874]: I0217 16:25:57.068745 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 17 16:25:57 crc kubenswrapper[4874]: I0217 
16:25:57.525508 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de75d382-99ba-4a94-8ab6-036d9fa19281","Type":"ContainerStarted","Data":"3ac6821f489386cf3c59e1a3d28a18ae8a9b9407ebc27f9f3e4246df3ca3f1f0"} Feb 17 16:25:57 crc kubenswrapper[4874]: I0217 16:25:57.525988 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 17 16:25:57 crc kubenswrapper[4874]: I0217 16:25:57.526493 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 17 16:25:57 crc kubenswrapper[4874]: I0217 16:25:57.547716 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.734708923 podStartE2EDuration="12.547699646s" podCreationTimestamp="2026-02-17 16:25:45 +0000 UTC" firstStartedPulling="2026-02-17 16:25:46.776796126 +0000 UTC m=+1357.071184687" lastFinishedPulling="2026-02-17 16:25:56.589786849 +0000 UTC m=+1366.884175410" observedRunningTime="2026-02-17 16:25:57.54459059 +0000 UTC m=+1367.838979161" watchObservedRunningTime="2026-02-17 16:25:57.547699646 +0000 UTC m=+1367.842088197" Feb 17 16:25:58 crc kubenswrapper[4874]: E0217 16:25:58.161715 4874 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="daf74c42e1fccb33337d34de03dc06c9d719522352425f3599d462b1db7b5a78" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 17 16:25:58 crc kubenswrapper[4874]: E0217 16:25:58.167903 4874 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="daf74c42e1fccb33337d34de03dc06c9d719522352425f3599d462b1db7b5a78" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 
17 16:25:58 crc kubenswrapper[4874]: E0217 16:25:58.169736 4874 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="daf74c42e1fccb33337d34de03dc06c9d719522352425f3599d462b1db7b5a78" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 17 16:25:58 crc kubenswrapper[4874]: E0217 16:25:58.169818 4874 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-77965974bf-qbtfj" podUID="ce22ccd7-e053-4795-bf35-e1021cfeff9d" containerName="heat-engine" Feb 17 16:25:58 crc kubenswrapper[4874]: I0217 16:25:58.542426 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 16:25:58 crc kubenswrapper[4874]: I0217 16:25:58.979960 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 17 16:25:58 crc kubenswrapper[4874]: I0217 16:25:58.980100 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 17 16:25:59 crc kubenswrapper[4874]: I0217 16:25:59.035210 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 17 16:25:59 crc kubenswrapper[4874]: I0217 16:25:59.049928 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 17 16:25:59 crc kubenswrapper[4874]: I0217 16:25:59.577841 4874 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 16:25:59 crc kubenswrapper[4874]: I0217 16:25:59.578199 4874 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 16:25:59 crc kubenswrapper[4874]: I0217 
16:25:59.578835 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 17 16:25:59 crc kubenswrapper[4874]: I0217 16:25:59.578993 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 17 16:26:00 crc kubenswrapper[4874]: I0217 16:26:00.997120 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 17 16:26:00 crc kubenswrapper[4874]: I0217 16:26:00.997595 4874 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 16:26:01 crc kubenswrapper[4874]: I0217 16:26:01.292617 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 17 16:26:02 crc kubenswrapper[4874]: I0217 16:26:02.126348 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 17 16:26:02 crc kubenswrapper[4874]: I0217 16:26:02.127449 4874 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 16:26:02 crc kubenswrapper[4874]: I0217 16:26:02.489488 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 17 16:26:03 crc kubenswrapper[4874]: I0217 16:26:03.362354 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:26:03 crc kubenswrapper[4874]: I0217 16:26:03.362942 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="de75d382-99ba-4a94-8ab6-036d9fa19281" containerName="ceilometer-central-agent" containerID="cri-o://b6ae2fb0bf5c7138f685b51ac6ee559d651c3c8fba3a7ce0f6a0fa916e7081fb" gracePeriod=30 Feb 17 16:26:03 crc kubenswrapper[4874]: I0217 16:26:03.363420 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="de75d382-99ba-4a94-8ab6-036d9fa19281" containerName="proxy-httpd" containerID="cri-o://3ac6821f489386cf3c59e1a3d28a18ae8a9b9407ebc27f9f3e4246df3ca3f1f0" gracePeriod=30 Feb 17 16:26:03 crc kubenswrapper[4874]: I0217 16:26:03.363464 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="de75d382-99ba-4a94-8ab6-036d9fa19281" containerName="sg-core" containerID="cri-o://9b3bcd14d71c8cea56a1be6015dfa14a18fc68d9695f004bb8a223cd908e9c46" gracePeriod=30 Feb 17 16:26:03 crc kubenswrapper[4874]: I0217 16:26:03.363431 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="de75d382-99ba-4a94-8ab6-036d9fa19281" containerName="ceilometer-notification-agent" containerID="cri-o://280c70b4d8422938f6652a20e417cc362bbf94fe7d9feedd7e433ca2ff98917e" gracePeriod=30 Feb 17 16:26:03 crc kubenswrapper[4874]: I0217 16:26:03.634993 4874 generic.go:334] "Generic (PLEG): container finished" podID="ce22ccd7-e053-4795-bf35-e1021cfeff9d" containerID="daf74c42e1fccb33337d34de03dc06c9d719522352425f3599d462b1db7b5a78" exitCode=0 Feb 17 16:26:03 crc kubenswrapper[4874]: I0217 16:26:03.635390 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-77965974bf-qbtfj" event={"ID":"ce22ccd7-e053-4795-bf35-e1021cfeff9d","Type":"ContainerDied","Data":"daf74c42e1fccb33337d34de03dc06c9d719522352425f3599d462b1db7b5a78"} Feb 17 16:26:03 crc kubenswrapper[4874]: I0217 16:26:03.639169 4874 generic.go:334] "Generic (PLEG): container finished" podID="de75d382-99ba-4a94-8ab6-036d9fa19281" containerID="3ac6821f489386cf3c59e1a3d28a18ae8a9b9407ebc27f9f3e4246df3ca3f1f0" exitCode=0 Feb 17 16:26:03 crc kubenswrapper[4874]: I0217 16:26:03.639202 4874 generic.go:334] "Generic (PLEG): container finished" podID="de75d382-99ba-4a94-8ab6-036d9fa19281" containerID="9b3bcd14d71c8cea56a1be6015dfa14a18fc68d9695f004bb8a223cd908e9c46" exitCode=2 Feb 17 16:26:03 crc kubenswrapper[4874]: 
I0217 16:26:03.639223 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de75d382-99ba-4a94-8ab6-036d9fa19281","Type":"ContainerDied","Data":"3ac6821f489386cf3c59e1a3d28a18ae8a9b9407ebc27f9f3e4246df3ca3f1f0"} Feb 17 16:26:03 crc kubenswrapper[4874]: I0217 16:26:03.639246 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de75d382-99ba-4a94-8ab6-036d9fa19281","Type":"ContainerDied","Data":"9b3bcd14d71c8cea56a1be6015dfa14a18fc68d9695f004bb8a223cd908e9c46"} Feb 17 16:26:04 crc kubenswrapper[4874]: I0217 16:26:04.653471 4874 generic.go:334] "Generic (PLEG): container finished" podID="de75d382-99ba-4a94-8ab6-036d9fa19281" containerID="280c70b4d8422938f6652a20e417cc362bbf94fe7d9feedd7e433ca2ff98917e" exitCode=0 Feb 17 16:26:04 crc kubenswrapper[4874]: I0217 16:26:04.653730 4874 generic.go:334] "Generic (PLEG): container finished" podID="de75d382-99ba-4a94-8ab6-036d9fa19281" containerID="b6ae2fb0bf5c7138f685b51ac6ee559d651c3c8fba3a7ce0f6a0fa916e7081fb" exitCode=0 Feb 17 16:26:04 crc kubenswrapper[4874]: I0217 16:26:04.653751 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de75d382-99ba-4a94-8ab6-036d9fa19281","Type":"ContainerDied","Data":"280c70b4d8422938f6652a20e417cc362bbf94fe7d9feedd7e433ca2ff98917e"} Feb 17 16:26:04 crc kubenswrapper[4874]: I0217 16:26:04.653777 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de75d382-99ba-4a94-8ab6-036d9fa19281","Type":"ContainerDied","Data":"b6ae2fb0bf5c7138f685b51ac6ee559d651c3c8fba3a7ce0f6a0fa916e7081fb"} Feb 17 16:26:08 crc kubenswrapper[4874]: E0217 16:26:08.162419 4874 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of daf74c42e1fccb33337d34de03dc06c9d719522352425f3599d462b1db7b5a78 is running failed: container process not found" 
containerID="daf74c42e1fccb33337d34de03dc06c9d719522352425f3599d462b1db7b5a78" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 17 16:26:08 crc kubenswrapper[4874]: E0217 16:26:08.163344 4874 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of daf74c42e1fccb33337d34de03dc06c9d719522352425f3599d462b1db7b5a78 is running failed: container process not found" containerID="daf74c42e1fccb33337d34de03dc06c9d719522352425f3599d462b1db7b5a78" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 17 16:26:08 crc kubenswrapper[4874]: E0217 16:26:08.163971 4874 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of daf74c42e1fccb33337d34de03dc06c9d719522352425f3599d462b1db7b5a78 is running failed: container process not found" containerID="daf74c42e1fccb33337d34de03dc06c9d719522352425f3599d462b1db7b5a78" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 17 16:26:08 crc kubenswrapper[4874]: E0217 16:26:08.164058 4874 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of daf74c42e1fccb33337d34de03dc06c9d719522352425f3599d462b1db7b5a78 is running failed: container process not found" probeType="Readiness" pod="openstack/heat-engine-77965974bf-qbtfj" podUID="ce22ccd7-e053-4795-bf35-e1021cfeff9d" containerName="heat-engine" Feb 17 16:26:08 crc kubenswrapper[4874]: I0217 16:26:08.726636 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ml2rb" event={"ID":"4327f121-2ddc-4367-9055-17c7fe4d855e","Type":"ContainerStarted","Data":"c631b19a947ff54ab119434427ae2f287f61f94f7785ee3ed680e0357a464f44"} Feb 17 16:26:08 crc kubenswrapper[4874]: I0217 16:26:08.762288 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-ml2rb" 
podStartSLOduration=2.502200563 podStartE2EDuration="15.762267812s" podCreationTimestamp="2026-02-17 16:25:53 +0000 UTC" firstStartedPulling="2026-02-17 16:25:55.0914077 +0000 UTC m=+1365.385796261" lastFinishedPulling="2026-02-17 16:26:08.351474949 +0000 UTC m=+1378.645863510" observedRunningTime="2026-02-17 16:26:08.748627489 +0000 UTC m=+1379.043016060" watchObservedRunningTime="2026-02-17 16:26:08.762267812 +0000 UTC m=+1379.056656373" Feb 17 16:26:08 crc kubenswrapper[4874]: I0217 16:26:08.821821 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-77965974bf-qbtfj" Feb 17 16:26:08 crc kubenswrapper[4874]: I0217 16:26:08.904852 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:26:08 crc kubenswrapper[4874]: I0217 16:26:08.966436 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce22ccd7-e053-4795-bf35-e1021cfeff9d-config-data\") pod \"ce22ccd7-e053-4795-bf35-e1021cfeff9d\" (UID: \"ce22ccd7-e053-4795-bf35-e1021cfeff9d\") " Feb 17 16:26:08 crc kubenswrapper[4874]: I0217 16:26:08.966511 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gdsh\" (UniqueName: \"kubernetes.io/projected/ce22ccd7-e053-4795-bf35-e1021cfeff9d-kube-api-access-2gdsh\") pod \"ce22ccd7-e053-4795-bf35-e1021cfeff9d\" (UID: \"ce22ccd7-e053-4795-bf35-e1021cfeff9d\") " Feb 17 16:26:08 crc kubenswrapper[4874]: I0217 16:26:08.966598 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce22ccd7-e053-4795-bf35-e1021cfeff9d-combined-ca-bundle\") pod \"ce22ccd7-e053-4795-bf35-e1021cfeff9d\" (UID: \"ce22ccd7-e053-4795-bf35-e1021cfeff9d\") " Feb 17 16:26:08 crc kubenswrapper[4874]: I0217 16:26:08.966766 4874 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce22ccd7-e053-4795-bf35-e1021cfeff9d-config-data-custom\") pod \"ce22ccd7-e053-4795-bf35-e1021cfeff9d\" (UID: \"ce22ccd7-e053-4795-bf35-e1021cfeff9d\") "
Feb 17 16:26:08 crc kubenswrapper[4874]: I0217 16:26:08.978907 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce22ccd7-e053-4795-bf35-e1021cfeff9d-kube-api-access-2gdsh" (OuterVolumeSpecName: "kube-api-access-2gdsh") pod "ce22ccd7-e053-4795-bf35-e1021cfeff9d" (UID: "ce22ccd7-e053-4795-bf35-e1021cfeff9d"). InnerVolumeSpecName "kube-api-access-2gdsh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:26:08 crc kubenswrapper[4874]: I0217 16:26:08.981348 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce22ccd7-e053-4795-bf35-e1021cfeff9d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ce22ccd7-e053-4795-bf35-e1021cfeff9d" (UID: "ce22ccd7-e053-4795-bf35-e1021cfeff9d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.021196 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce22ccd7-e053-4795-bf35-e1021cfeff9d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce22ccd7-e053-4795-bf35-e1021cfeff9d" (UID: "ce22ccd7-e053-4795-bf35-e1021cfeff9d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.071022 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de75d382-99ba-4a94-8ab6-036d9fa19281-scripts\") pod \"de75d382-99ba-4a94-8ab6-036d9fa19281\" (UID: \"de75d382-99ba-4a94-8ab6-036d9fa19281\") "
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.071393 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de75d382-99ba-4a94-8ab6-036d9fa19281-combined-ca-bundle\") pod \"de75d382-99ba-4a94-8ab6-036d9fa19281\" (UID: \"de75d382-99ba-4a94-8ab6-036d9fa19281\") "
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.071559 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de75d382-99ba-4a94-8ab6-036d9fa19281-log-httpd\") pod \"de75d382-99ba-4a94-8ab6-036d9fa19281\" (UID: \"de75d382-99ba-4a94-8ab6-036d9fa19281\") "
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.071747 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de75d382-99ba-4a94-8ab6-036d9fa19281-run-httpd\") pod \"de75d382-99ba-4a94-8ab6-036d9fa19281\" (UID: \"de75d382-99ba-4a94-8ab6-036d9fa19281\") "
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.071833 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snpmd\" (UniqueName: \"kubernetes.io/projected/de75d382-99ba-4a94-8ab6-036d9fa19281-kube-api-access-snpmd\") pod \"de75d382-99ba-4a94-8ab6-036d9fa19281\" (UID: \"de75d382-99ba-4a94-8ab6-036d9fa19281\") "
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.071922 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/de75d382-99ba-4a94-8ab6-036d9fa19281-sg-core-conf-yaml\") pod \"de75d382-99ba-4a94-8ab6-036d9fa19281\" (UID: \"de75d382-99ba-4a94-8ab6-036d9fa19281\") "
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.072161 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de75d382-99ba-4a94-8ab6-036d9fa19281-config-data\") pod \"de75d382-99ba-4a94-8ab6-036d9fa19281\" (UID: \"de75d382-99ba-4a94-8ab6-036d9fa19281\") "
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.072749 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gdsh\" (UniqueName: \"kubernetes.io/projected/ce22ccd7-e053-4795-bf35-e1021cfeff9d-kube-api-access-2gdsh\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.072824 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce22ccd7-e053-4795-bf35-e1021cfeff9d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.072891 4874 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce22ccd7-e053-4795-bf35-e1021cfeff9d-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.073939 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de75d382-99ba-4a94-8ab6-036d9fa19281-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "de75d382-99ba-4a94-8ab6-036d9fa19281" (UID: "de75d382-99ba-4a94-8ab6-036d9fa19281"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.074109 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de75d382-99ba-4a94-8ab6-036d9fa19281-scripts" (OuterVolumeSpecName: "scripts") pod "de75d382-99ba-4a94-8ab6-036d9fa19281" (UID: "de75d382-99ba-4a94-8ab6-036d9fa19281"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.074951 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de75d382-99ba-4a94-8ab6-036d9fa19281-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "de75d382-99ba-4a94-8ab6-036d9fa19281" (UID: "de75d382-99ba-4a94-8ab6-036d9fa19281"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.082231 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce22ccd7-e053-4795-bf35-e1021cfeff9d-config-data" (OuterVolumeSpecName: "config-data") pod "ce22ccd7-e053-4795-bf35-e1021cfeff9d" (UID: "ce22ccd7-e053-4795-bf35-e1021cfeff9d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.084657 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de75d382-99ba-4a94-8ab6-036d9fa19281-kube-api-access-snpmd" (OuterVolumeSpecName: "kube-api-access-snpmd") pod "de75d382-99ba-4a94-8ab6-036d9fa19281" (UID: "de75d382-99ba-4a94-8ab6-036d9fa19281"). InnerVolumeSpecName "kube-api-access-snpmd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.122791 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de75d382-99ba-4a94-8ab6-036d9fa19281-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "de75d382-99ba-4a94-8ab6-036d9fa19281" (UID: "de75d382-99ba-4a94-8ab6-036d9fa19281"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.172745 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de75d382-99ba-4a94-8ab6-036d9fa19281-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de75d382-99ba-4a94-8ab6-036d9fa19281" (UID: "de75d382-99ba-4a94-8ab6-036d9fa19281"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.174790 4874 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de75d382-99ba-4a94-8ab6-036d9fa19281-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.174824 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de75d382-99ba-4a94-8ab6-036d9fa19281-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.174838 4874 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de75d382-99ba-4a94-8ab6-036d9fa19281-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.174849 4874 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de75d382-99ba-4a94-8ab6-036d9fa19281-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.174862 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snpmd\" (UniqueName: \"kubernetes.io/projected/de75d382-99ba-4a94-8ab6-036d9fa19281-kube-api-access-snpmd\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.174873 4874 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/de75d382-99ba-4a94-8ab6-036d9fa19281-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.174888 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce22ccd7-e053-4795-bf35-e1021cfeff9d-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.227213 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de75d382-99ba-4a94-8ab6-036d9fa19281-config-data" (OuterVolumeSpecName: "config-data") pod "de75d382-99ba-4a94-8ab6-036d9fa19281" (UID: "de75d382-99ba-4a94-8ab6-036d9fa19281"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.277389 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de75d382-99ba-4a94-8ab6-036d9fa19281-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.738733 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-77965974bf-qbtfj" event={"ID":"ce22ccd7-e053-4795-bf35-e1021cfeff9d","Type":"ContainerDied","Data":"df2d5f788694121fec6ef1421d7e574b94691296f648d1bffae0f910d6c9700c"}
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.738798 4874 scope.go:117] "RemoveContainer" containerID="daf74c42e1fccb33337d34de03dc06c9d719522352425f3599d462b1db7b5a78"
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.738966 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-77965974bf-qbtfj"
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.746428 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.748548 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de75d382-99ba-4a94-8ab6-036d9fa19281","Type":"ContainerDied","Data":"17c01737a97a23d3da4f5e39fdaf5c9f416cd59521eacfcb5c1f14bd8eb4c76f"}
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.771856 4874 scope.go:117] "RemoveContainer" containerID="3ac6821f489386cf3c59e1a3d28a18ae8a9b9407ebc27f9f3e4246df3ca3f1f0"
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.795965 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.805408 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.809835 4874 scope.go:117] "RemoveContainer" containerID="9b3bcd14d71c8cea56a1be6015dfa14a18fc68d9695f004bb8a223cd908e9c46"
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.815254 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-77965974bf-qbtfj"]
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.826340 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-77965974bf-qbtfj"]
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.833758 4874 scope.go:117] "RemoveContainer" containerID="280c70b4d8422938f6652a20e417cc362bbf94fe7d9feedd7e433ca2ff98917e"
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.850022 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:26:09 crc kubenswrapper[4874]: E0217 16:26:09.850540 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de75d382-99ba-4a94-8ab6-036d9fa19281" containerName="ceilometer-central-agent"
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.850556 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="de75d382-99ba-4a94-8ab6-036d9fa19281" containerName="ceilometer-central-agent"
Feb 17 16:26:09 crc kubenswrapper[4874]: E0217 16:26:09.850585 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de75d382-99ba-4a94-8ab6-036d9fa19281" containerName="sg-core"
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.850592 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="de75d382-99ba-4a94-8ab6-036d9fa19281" containerName="sg-core"
Feb 17 16:26:09 crc kubenswrapper[4874]: E0217 16:26:09.850604 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce22ccd7-e053-4795-bf35-e1021cfeff9d" containerName="heat-engine"
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.850610 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce22ccd7-e053-4795-bf35-e1021cfeff9d" containerName="heat-engine"
Feb 17 16:26:09 crc kubenswrapper[4874]: E0217 16:26:09.850631 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de75d382-99ba-4a94-8ab6-036d9fa19281" containerName="proxy-httpd"
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.850637 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="de75d382-99ba-4a94-8ab6-036d9fa19281" containerName="proxy-httpd"
Feb 17 16:26:09 crc kubenswrapper[4874]: E0217 16:26:09.850652 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de75d382-99ba-4a94-8ab6-036d9fa19281" containerName="ceilometer-notification-agent"
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.850658 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="de75d382-99ba-4a94-8ab6-036d9fa19281" containerName="ceilometer-notification-agent"
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.850859 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce22ccd7-e053-4795-bf35-e1021cfeff9d" containerName="heat-engine"
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.850875 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="de75d382-99ba-4a94-8ab6-036d9fa19281" containerName="proxy-httpd"
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.850888 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="de75d382-99ba-4a94-8ab6-036d9fa19281" containerName="sg-core"
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.850904 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="de75d382-99ba-4a94-8ab6-036d9fa19281" containerName="ceilometer-central-agent"
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.850918 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="de75d382-99ba-4a94-8ab6-036d9fa19281" containerName="ceilometer-notification-agent"
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.852855 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.853454 4874 scope.go:117] "RemoveContainer" containerID="b6ae2fb0bf5c7138f685b51ac6ee559d651c3c8fba3a7ce0f6a0fa916e7081fb"
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.855509 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.855678 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.867846 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.996783 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq4vc\" (UniqueName: \"kubernetes.io/projected/161ae496-353b-44b9-b228-febefa07e67f-kube-api-access-kq4vc\") pod \"ceilometer-0\" (UID: \"161ae496-353b-44b9-b228-febefa07e67f\") " pod="openstack/ceilometer-0"
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.996847 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/161ae496-353b-44b9-b228-febefa07e67f-run-httpd\") pod \"ceilometer-0\" (UID: \"161ae496-353b-44b9-b228-febefa07e67f\") " pod="openstack/ceilometer-0"
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.996947 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/161ae496-353b-44b9-b228-febefa07e67f-config-data\") pod \"ceilometer-0\" (UID: \"161ae496-353b-44b9-b228-febefa07e67f\") " pod="openstack/ceilometer-0"
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.997031 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/161ae496-353b-44b9-b228-febefa07e67f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"161ae496-353b-44b9-b228-febefa07e67f\") " pod="openstack/ceilometer-0"
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.997053 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/161ae496-353b-44b9-b228-febefa07e67f-log-httpd\") pod \"ceilometer-0\" (UID: \"161ae496-353b-44b9-b228-febefa07e67f\") " pod="openstack/ceilometer-0"
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.997132 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/161ae496-353b-44b9-b228-febefa07e67f-scripts\") pod \"ceilometer-0\" (UID: \"161ae496-353b-44b9-b228-febefa07e67f\") " pod="openstack/ceilometer-0"
Feb 17 16:26:09 crc kubenswrapper[4874]: I0217 16:26:09.997160 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/161ae496-353b-44b9-b228-febefa07e67f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"161ae496-353b-44b9-b228-febefa07e67f\") " pod="openstack/ceilometer-0"
Feb 17 16:26:10 crc kubenswrapper[4874]: I0217 16:26:10.099008 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/161ae496-353b-44b9-b228-febefa07e67f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"161ae496-353b-44b9-b228-febefa07e67f\") " pod="openstack/ceilometer-0"
Feb 17 16:26:10 crc kubenswrapper[4874]: I0217 16:26:10.099051 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/161ae496-353b-44b9-b228-febefa07e67f-log-httpd\") pod \"ceilometer-0\" (UID: \"161ae496-353b-44b9-b228-febefa07e67f\") " pod="openstack/ceilometer-0"
Feb 17 16:26:10 crc kubenswrapper[4874]: I0217 16:26:10.099127 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/161ae496-353b-44b9-b228-febefa07e67f-scripts\") pod \"ceilometer-0\" (UID: \"161ae496-353b-44b9-b228-febefa07e67f\") " pod="openstack/ceilometer-0"
Feb 17 16:26:10 crc kubenswrapper[4874]: I0217 16:26:10.099152 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/161ae496-353b-44b9-b228-febefa07e67f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"161ae496-353b-44b9-b228-febefa07e67f\") " pod="openstack/ceilometer-0"
Feb 17 16:26:10 crc kubenswrapper[4874]: I0217 16:26:10.099221 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq4vc\" (UniqueName: \"kubernetes.io/projected/161ae496-353b-44b9-b228-febefa07e67f-kube-api-access-kq4vc\") pod \"ceilometer-0\" (UID: \"161ae496-353b-44b9-b228-febefa07e67f\") " pod="openstack/ceilometer-0"
Feb 17 16:26:10 crc kubenswrapper[4874]: I0217 16:26:10.099251 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/161ae496-353b-44b9-b228-febefa07e67f-run-httpd\") pod \"ceilometer-0\" (UID: \"161ae496-353b-44b9-b228-febefa07e67f\") " pod="openstack/ceilometer-0"
Feb 17 16:26:10 crc kubenswrapper[4874]: I0217 16:26:10.099298 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/161ae496-353b-44b9-b228-febefa07e67f-config-data\") pod \"ceilometer-0\" (UID: \"161ae496-353b-44b9-b228-febefa07e67f\") " pod="openstack/ceilometer-0"
Feb 17 16:26:10 crc kubenswrapper[4874]: I0217 16:26:10.100061 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/161ae496-353b-44b9-b228-febefa07e67f-run-httpd\") pod \"ceilometer-0\" (UID: \"161ae496-353b-44b9-b228-febefa07e67f\") " pod="openstack/ceilometer-0"
Feb 17 16:26:10 crc kubenswrapper[4874]: I0217 16:26:10.100317 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/161ae496-353b-44b9-b228-febefa07e67f-log-httpd\") pod \"ceilometer-0\" (UID: \"161ae496-353b-44b9-b228-febefa07e67f\") " pod="openstack/ceilometer-0"
Feb 17 16:26:10 crc kubenswrapper[4874]: I0217 16:26:10.105765 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/161ae496-353b-44b9-b228-febefa07e67f-config-data\") pod \"ceilometer-0\" (UID: \"161ae496-353b-44b9-b228-febefa07e67f\") " pod="openstack/ceilometer-0"
Feb 17 16:26:10 crc kubenswrapper[4874]: I0217 16:26:10.108481 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/161ae496-353b-44b9-b228-febefa07e67f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"161ae496-353b-44b9-b228-febefa07e67f\") " pod="openstack/ceilometer-0"
Feb 17 16:26:10 crc kubenswrapper[4874]: I0217 16:26:10.108992 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/161ae496-353b-44b9-b228-febefa07e67f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"161ae496-353b-44b9-b228-febefa07e67f\") " pod="openstack/ceilometer-0"
Feb 17 16:26:10 crc kubenswrapper[4874]: I0217 16:26:10.117949 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/161ae496-353b-44b9-b228-febefa07e67f-scripts\") pod \"ceilometer-0\" (UID: \"161ae496-353b-44b9-b228-febefa07e67f\") " pod="openstack/ceilometer-0"
Feb 17 16:26:10 crc kubenswrapper[4874]: I0217 16:26:10.120234 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq4vc\" (UniqueName: \"kubernetes.io/projected/161ae496-353b-44b9-b228-febefa07e67f-kube-api-access-kq4vc\") pod \"ceilometer-0\" (UID: \"161ae496-353b-44b9-b228-febefa07e67f\") " pod="openstack/ceilometer-0"
Feb 17 16:26:10 crc kubenswrapper[4874]: I0217 16:26:10.173582 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 16:26:10 crc kubenswrapper[4874]: I0217 16:26:10.473460 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce22ccd7-e053-4795-bf35-e1021cfeff9d" path="/var/lib/kubelet/pods/ce22ccd7-e053-4795-bf35-e1021cfeff9d/volumes"
Feb 17 16:26:10 crc kubenswrapper[4874]: I0217 16:26:10.474012 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de75d382-99ba-4a94-8ab6-036d9fa19281" path="/var/lib/kubelet/pods/de75d382-99ba-4a94-8ab6-036d9fa19281/volumes"
Feb 17 16:26:11 crc kubenswrapper[4874]: I0217 16:26:11.140606 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:26:11 crc kubenswrapper[4874]: I0217 16:26:11.806820 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"161ae496-353b-44b9-b228-febefa07e67f","Type":"ContainerStarted","Data":"b8b378e6cd461907217a47618910247a6ef84598f4351ddafa4e5a58d3228c6f"}
Feb 17 16:26:12 crc kubenswrapper[4874]: I0217 16:26:12.825095 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"161ae496-353b-44b9-b228-febefa07e67f","Type":"ContainerStarted","Data":"9856cd7e07aa86502d5ecd445968211f7bad8c3e3c4debc17748ddb307a8652f"}
Feb 17 16:26:13 crc kubenswrapper[4874]: I0217 16:26:13.838756 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"161ae496-353b-44b9-b228-febefa07e67f","Type":"ContainerStarted","Data":"fc807cb60829a555e050af1312e1124b0df7db14a4fa3470a4a1a827c343f7cc"}
Feb 17 16:26:13 crc kubenswrapper[4874]: I0217 16:26:13.839113 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"161ae496-353b-44b9-b228-febefa07e67f","Type":"ContainerStarted","Data":"678a2bfc6c26cdd8f203ed35745cc70e81b957e085f3b983d2431d7693eb0406"}
Feb 17 16:26:15 crc kubenswrapper[4874]: I0217 16:26:15.621132 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:26:15 crc kubenswrapper[4874]: I0217 16:26:15.861880 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"161ae496-353b-44b9-b228-febefa07e67f","Type":"ContainerStarted","Data":"63bf6ad09bcd1ea6065f62619bb59ff73d73539980c9ce69e5a6d06a8612ec59"}
Feb 17 16:26:15 crc kubenswrapper[4874]: I0217 16:26:15.862278 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 17 16:26:15 crc kubenswrapper[4874]: I0217 16:26:15.890237 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.9889849440000003 podStartE2EDuration="6.890220702s" podCreationTimestamp="2026-02-17 16:26:09 +0000 UTC" firstStartedPulling="2026-02-17 16:26:11.152043823 +0000 UTC m=+1381.446432384" lastFinishedPulling="2026-02-17 16:26:15.053279581 +0000 UTC m=+1385.347668142" observedRunningTime="2026-02-17 16:26:15.884867672 +0000 UTC m=+1386.179256243" watchObservedRunningTime="2026-02-17 16:26:15.890220702 +0000 UTC m=+1386.184609263"
Feb 17 16:26:16 crc kubenswrapper[4874]: I0217 16:26:16.872007 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="161ae496-353b-44b9-b228-febefa07e67f" containerName="ceilometer-central-agent" containerID="cri-o://9856cd7e07aa86502d5ecd445968211f7bad8c3e3c4debc17748ddb307a8652f" gracePeriod=30
Feb 17 16:26:16 crc kubenswrapper[4874]: I0217 16:26:16.872108 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="161ae496-353b-44b9-b228-febefa07e67f" containerName="ceilometer-notification-agent" containerID="cri-o://fc807cb60829a555e050af1312e1124b0df7db14a4fa3470a4a1a827c343f7cc" gracePeriod=30
Feb 17 16:26:16 crc kubenswrapper[4874]: I0217 16:26:16.872108 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="161ae496-353b-44b9-b228-febefa07e67f" containerName="sg-core" containerID="cri-o://678a2bfc6c26cdd8f203ed35745cc70e81b957e085f3b983d2431d7693eb0406" gracePeriod=30
Feb 17 16:26:16 crc kubenswrapper[4874]: I0217 16:26:16.872152 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="161ae496-353b-44b9-b228-febefa07e67f" containerName="proxy-httpd" containerID="cri-o://63bf6ad09bcd1ea6065f62619bb59ff73d73539980c9ce69e5a6d06a8612ec59" gracePeriod=30
Feb 17 16:26:17 crc kubenswrapper[4874]: I0217 16:26:17.885668 4874 generic.go:334] "Generic (PLEG): container finished" podID="161ae496-353b-44b9-b228-febefa07e67f" containerID="63bf6ad09bcd1ea6065f62619bb59ff73d73539980c9ce69e5a6d06a8612ec59" exitCode=0
Feb 17 16:26:17 crc kubenswrapper[4874]: I0217 16:26:17.885951 4874 generic.go:334] "Generic (PLEG): container finished" podID="161ae496-353b-44b9-b228-febefa07e67f" containerID="678a2bfc6c26cdd8f203ed35745cc70e81b957e085f3b983d2431d7693eb0406" exitCode=2
Feb 17 16:26:17 crc kubenswrapper[4874]: I0217 16:26:17.885962 4874 generic.go:334] "Generic (PLEG): container finished" podID="161ae496-353b-44b9-b228-febefa07e67f" containerID="fc807cb60829a555e050af1312e1124b0df7db14a4fa3470a4a1a827c343f7cc" exitCode=0
Feb 17 16:26:17 crc kubenswrapper[4874]: I0217 16:26:17.885764 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"161ae496-353b-44b9-b228-febefa07e67f","Type":"ContainerDied","Data":"63bf6ad09bcd1ea6065f62619bb59ff73d73539980c9ce69e5a6d06a8612ec59"}
Feb 17 16:26:17 crc kubenswrapper[4874]: I0217 16:26:17.886012 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"161ae496-353b-44b9-b228-febefa07e67f","Type":"ContainerDied","Data":"678a2bfc6c26cdd8f203ed35745cc70e81b957e085f3b983d2431d7693eb0406"}
Feb 17 16:26:17 crc kubenswrapper[4874]: I0217 16:26:17.886036 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"161ae496-353b-44b9-b228-febefa07e67f","Type":"ContainerDied","Data":"fc807cb60829a555e050af1312e1124b0df7db14a4fa3470a4a1a827c343f7cc"}
Feb 17 16:26:23 crc kubenswrapper[4874]: I0217 16:26:23.952150 4874 generic.go:334] "Generic (PLEG): container finished" podID="4327f121-2ddc-4367-9055-17c7fe4d855e" containerID="c631b19a947ff54ab119434427ae2f287f61f94f7785ee3ed680e0357a464f44" exitCode=0
Feb 17 16:26:23 crc kubenswrapper[4874]: I0217 16:26:23.952233 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ml2rb" event={"ID":"4327f121-2ddc-4367-9055-17c7fe4d855e","Type":"ContainerDied","Data":"c631b19a947ff54ab119434427ae2f287f61f94f7785ee3ed680e0357a464f44"}
Feb 17 16:26:25 crc kubenswrapper[4874]: I0217 16:26:25.437912 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ml2rb"
Feb 17 16:26:25 crc kubenswrapper[4874]: I0217 16:26:25.475188 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jfdm\" (UniqueName: \"kubernetes.io/projected/4327f121-2ddc-4367-9055-17c7fe4d855e-kube-api-access-6jfdm\") pod \"4327f121-2ddc-4367-9055-17c7fe4d855e\" (UID: \"4327f121-2ddc-4367-9055-17c7fe4d855e\") "
Feb 17 16:26:25 crc kubenswrapper[4874]: I0217 16:26:25.475560 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4327f121-2ddc-4367-9055-17c7fe4d855e-scripts\") pod \"4327f121-2ddc-4367-9055-17c7fe4d855e\" (UID: \"4327f121-2ddc-4367-9055-17c7fe4d855e\") "
Feb 17 16:26:25 crc kubenswrapper[4874]: I0217 16:26:25.475631 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4327f121-2ddc-4367-9055-17c7fe4d855e-combined-ca-bundle\") pod \"4327f121-2ddc-4367-9055-17c7fe4d855e\" (UID: \"4327f121-2ddc-4367-9055-17c7fe4d855e\") "
Feb 17 16:26:25 crc kubenswrapper[4874]: I0217 16:26:25.475696 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4327f121-2ddc-4367-9055-17c7fe4d855e-config-data\") pod \"4327f121-2ddc-4367-9055-17c7fe4d855e\" (UID: \"4327f121-2ddc-4367-9055-17c7fe4d855e\") "
Feb 17 16:26:25 crc kubenswrapper[4874]: I0217 16:26:25.481218 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4327f121-2ddc-4367-9055-17c7fe4d855e-kube-api-access-6jfdm" (OuterVolumeSpecName: "kube-api-access-6jfdm") pod "4327f121-2ddc-4367-9055-17c7fe4d855e" (UID: "4327f121-2ddc-4367-9055-17c7fe4d855e"). InnerVolumeSpecName "kube-api-access-6jfdm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:26:25 crc kubenswrapper[4874]: I0217 16:26:25.481660 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4327f121-2ddc-4367-9055-17c7fe4d855e-scripts" (OuterVolumeSpecName: "scripts") pod "4327f121-2ddc-4367-9055-17c7fe4d855e" (UID: "4327f121-2ddc-4367-9055-17c7fe4d855e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:26:25 crc kubenswrapper[4874]: I0217 16:26:25.508431 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4327f121-2ddc-4367-9055-17c7fe4d855e-config-data" (OuterVolumeSpecName: "config-data") pod "4327f121-2ddc-4367-9055-17c7fe4d855e" (UID: "4327f121-2ddc-4367-9055-17c7fe4d855e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:26:25 crc kubenswrapper[4874]: I0217 16:26:25.518455 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4327f121-2ddc-4367-9055-17c7fe4d855e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4327f121-2ddc-4367-9055-17c7fe4d855e" (UID: "4327f121-2ddc-4367-9055-17c7fe4d855e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:26:25 crc kubenswrapper[4874]: I0217 16:26:25.579556 4874 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4327f121-2ddc-4367-9055-17c7fe4d855e-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:25 crc kubenswrapper[4874]: I0217 16:26:25.579589 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4327f121-2ddc-4367-9055-17c7fe4d855e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:25 crc kubenswrapper[4874]: I0217 16:26:25.579601 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4327f121-2ddc-4367-9055-17c7fe4d855e-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:25 crc kubenswrapper[4874]: I0217 16:26:25.579611 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jfdm\" (UniqueName: \"kubernetes.io/projected/4327f121-2ddc-4367-9055-17c7fe4d855e-kube-api-access-6jfdm\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:25 crc kubenswrapper[4874]: I0217 16:26:25.988207 4874 generic.go:334] "Generic (PLEG): container finished" podID="161ae496-353b-44b9-b228-febefa07e67f" containerID="9856cd7e07aa86502d5ecd445968211f7bad8c3e3c4debc17748ddb307a8652f" exitCode=0
Feb 17 16:26:25 crc kubenswrapper[4874]: I0217 16:26:25.988301 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"161ae496-353b-44b9-b228-febefa07e67f","Type":"ContainerDied","Data":"9856cd7e07aa86502d5ecd445968211f7bad8c3e3c4debc17748ddb307a8652f"}
Feb 17 16:26:25 crc kubenswrapper[4874]: I0217 16:26:25.992441 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ml2rb" event={"ID":"4327f121-2ddc-4367-9055-17c7fe4d855e","Type":"ContainerDied","Data":"4c01c80ff75f545c7ad24c26f26d427f62b8c88c7db6a4f7544ae7b749530ed3"}
Feb 17 16:26:25 crc kubenswrapper[4874]: I0217 16:26:25.992481 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c01c80ff75f545c7ad24c26f26d427f62b8c88c7db6a4f7544ae7b749530ed3"
Feb 17 16:26:25 crc kubenswrapper[4874]: I0217 16:26:25.992525 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ml2rb"
Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.017380 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.214807 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/161ae496-353b-44b9-b228-febefa07e67f-log-httpd\") pod \"161ae496-353b-44b9-b228-febefa07e67f\" (UID: \"161ae496-353b-44b9-b228-febefa07e67f\") "
Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.214853 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/161ae496-353b-44b9-b228-febefa07e67f-run-httpd\") pod \"161ae496-353b-44b9-b228-febefa07e67f\" (UID: \"161ae496-353b-44b9-b228-febefa07e67f\") "
Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.214907 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/161ae496-353b-44b9-b228-febefa07e67f-config-data\") pod \"161ae496-353b-44b9-b228-febefa07e67f\" (UID: \"161ae496-353b-44b9-b228-febefa07e67f\") "
Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.214945 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/161ae496-353b-44b9-b228-febefa07e67f-scripts\") pod \"161ae496-353b-44b9-b228-febefa07e67f\" (UID: \"161ae496-353b-44b9-b228-febefa07e67f\") "
Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.214968 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/161ae496-353b-44b9-b228-febefa07e67f-sg-core-conf-yaml\") pod \"161ae496-353b-44b9-b228-febefa07e67f\" (UID: \"161ae496-353b-44b9-b228-febefa07e67f\") "
Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.215005 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq4vc\" (UniqueName: \"kubernetes.io/projected/161ae496-353b-44b9-b228-febefa07e67f-kube-api-access-kq4vc\") pod \"161ae496-353b-44b9-b228-febefa07e67f\" (UID: \"161ae496-353b-44b9-b228-febefa07e67f\") "
Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.215057 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/161ae496-353b-44b9-b228-febefa07e67f-combined-ca-bundle\") pod \"161ae496-353b-44b9-b228-febefa07e67f\" (UID: \"161ae496-353b-44b9-b228-febefa07e67f\") "
Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.215472 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/161ae496-353b-44b9-b228-febefa07e67f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "161ae496-353b-44b9-b228-febefa07e67f" (UID: "161ae496-353b-44b9-b228-febefa07e67f"). InnerVolumeSpecName "log-httpd".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.215494 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/161ae496-353b-44b9-b228-febefa07e67f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "161ae496-353b-44b9-b228-febefa07e67f" (UID: "161ae496-353b-44b9-b228-febefa07e67f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.216261 4874 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/161ae496-353b-44b9-b228-febefa07e67f-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.216285 4874 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/161ae496-353b-44b9-b228-febefa07e67f-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.219507 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/161ae496-353b-44b9-b228-febefa07e67f-kube-api-access-kq4vc" (OuterVolumeSpecName: "kube-api-access-kq4vc") pod "161ae496-353b-44b9-b228-febefa07e67f" (UID: "161ae496-353b-44b9-b228-febefa07e67f"). InnerVolumeSpecName "kube-api-access-kq4vc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.224062 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.228186 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/161ae496-353b-44b9-b228-febefa07e67f-scripts" (OuterVolumeSpecName: "scripts") pod "161ae496-353b-44b9-b228-febefa07e67f" (UID: "161ae496-353b-44b9-b228-febefa07e67f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:26:26 crc kubenswrapper[4874]: E0217 16:26:26.240390 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="161ae496-353b-44b9-b228-febefa07e67f" containerName="ceilometer-notification-agent" Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.240433 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="161ae496-353b-44b9-b228-febefa07e67f" containerName="ceilometer-notification-agent" Feb 17 16:26:26 crc kubenswrapper[4874]: E0217 16:26:26.240463 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4327f121-2ddc-4367-9055-17c7fe4d855e" containerName="nova-cell0-conductor-db-sync" Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.240471 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="4327f121-2ddc-4367-9055-17c7fe4d855e" containerName="nova-cell0-conductor-db-sync" Feb 17 16:26:26 crc kubenswrapper[4874]: E0217 16:26:26.240502 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="161ae496-353b-44b9-b228-febefa07e67f" containerName="ceilometer-central-agent" Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.240510 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="161ae496-353b-44b9-b228-febefa07e67f" containerName="ceilometer-central-agent" Feb 17 16:26:26 crc kubenswrapper[4874]: E0217 16:26:26.240547 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="161ae496-353b-44b9-b228-febefa07e67f" containerName="sg-core" Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.240555 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="161ae496-353b-44b9-b228-febefa07e67f" containerName="sg-core" Feb 17 16:26:26 crc kubenswrapper[4874]: E0217 16:26:26.240583 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="161ae496-353b-44b9-b228-febefa07e67f" containerName="proxy-httpd" Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.240590 4874 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="161ae496-353b-44b9-b228-febefa07e67f" containerName="proxy-httpd" Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.241009 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="161ae496-353b-44b9-b228-febefa07e67f" containerName="sg-core" Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.241042 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="161ae496-353b-44b9-b228-febefa07e67f" containerName="ceilometer-central-agent" Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.241061 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="161ae496-353b-44b9-b228-febefa07e67f" containerName="ceilometer-notification-agent" Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.241088 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="4327f121-2ddc-4367-9055-17c7fe4d855e" containerName="nova-cell0-conductor-db-sync" Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.241106 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="161ae496-353b-44b9-b228-febefa07e67f" containerName="proxy-httpd" Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.242173 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.250331 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-t5tn4" Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.250702 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.251908 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.284040 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/161ae496-353b-44b9-b228-febefa07e67f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "161ae496-353b-44b9-b228-febefa07e67f" (UID: "161ae496-353b-44b9-b228-febefa07e67f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.318638 4874 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/161ae496-353b-44b9-b228-febefa07e67f-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.319378 4874 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/161ae496-353b-44b9-b228-febefa07e67f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.319405 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kq4vc\" (UniqueName: \"kubernetes.io/projected/161ae496-353b-44b9-b228-febefa07e67f-kube-api-access-kq4vc\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.340885 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/161ae496-353b-44b9-b228-febefa07e67f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "161ae496-353b-44b9-b228-febefa07e67f" (UID: "161ae496-353b-44b9-b228-febefa07e67f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.384183 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/161ae496-353b-44b9-b228-febefa07e67f-config-data" (OuterVolumeSpecName: "config-data") pod "161ae496-353b-44b9-b228-febefa07e67f" (UID: "161ae496-353b-44b9-b228-febefa07e67f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.421454 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de9261d2-3f0c-40dc-bd1f-07c6216ea317-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"de9261d2-3f0c-40dc-bd1f-07c6216ea317\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.421548 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de9261d2-3f0c-40dc-bd1f-07c6216ea317-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"de9261d2-3f0c-40dc-bd1f-07c6216ea317\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.421748 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86mhl\" (UniqueName: \"kubernetes.io/projected/de9261d2-3f0c-40dc-bd1f-07c6216ea317-kube-api-access-86mhl\") pod \"nova-cell0-conductor-0\" (UID: \"de9261d2-3f0c-40dc-bd1f-07c6216ea317\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.421884 
4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/161ae496-353b-44b9-b228-febefa07e67f-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.421909 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/161ae496-353b-44b9-b228-febefa07e67f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.524012 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86mhl\" (UniqueName: \"kubernetes.io/projected/de9261d2-3f0c-40dc-bd1f-07c6216ea317-kube-api-access-86mhl\") pod \"nova-cell0-conductor-0\" (UID: \"de9261d2-3f0c-40dc-bd1f-07c6216ea317\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.525200 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de9261d2-3f0c-40dc-bd1f-07c6216ea317-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"de9261d2-3f0c-40dc-bd1f-07c6216ea317\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.525282 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de9261d2-3f0c-40dc-bd1f-07c6216ea317-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"de9261d2-3f0c-40dc-bd1f-07c6216ea317\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.531968 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de9261d2-3f0c-40dc-bd1f-07c6216ea317-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"de9261d2-3f0c-40dc-bd1f-07c6216ea317\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:26:26 crc 
kubenswrapper[4874]: I0217 16:26:26.532246 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de9261d2-3f0c-40dc-bd1f-07c6216ea317-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"de9261d2-3f0c-40dc-bd1f-07c6216ea317\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.541707 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86mhl\" (UniqueName: \"kubernetes.io/projected/de9261d2-3f0c-40dc-bd1f-07c6216ea317-kube-api-access-86mhl\") pod \"nova-cell0-conductor-0\" (UID: \"de9261d2-3f0c-40dc-bd1f-07c6216ea317\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:26:26 crc kubenswrapper[4874]: I0217 16:26:26.561307 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 17 16:26:27 crc kubenswrapper[4874]: I0217 16:26:27.004153 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"161ae496-353b-44b9-b228-febefa07e67f","Type":"ContainerDied","Data":"b8b378e6cd461907217a47618910247a6ef84598f4351ddafa4e5a58d3228c6f"} Feb 17 16:26:27 crc kubenswrapper[4874]: I0217 16:26:27.004203 4874 scope.go:117] "RemoveContainer" containerID="63bf6ad09bcd1ea6065f62619bb59ff73d73539980c9ce69e5a6d06a8612ec59" Feb 17 16:26:27 crc kubenswrapper[4874]: I0217 16:26:27.004206 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:26:27 crc kubenswrapper[4874]: I0217 16:26:27.032790 4874 scope.go:117] "RemoveContainer" containerID="678a2bfc6c26cdd8f203ed35745cc70e81b957e085f3b983d2431d7693eb0406" Feb 17 16:26:27 crc kubenswrapper[4874]: I0217 16:26:27.051517 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:26:27 crc kubenswrapper[4874]: I0217 16:26:27.054649 4874 scope.go:117] "RemoveContainer" containerID="fc807cb60829a555e050af1312e1124b0df7db14a4fa3470a4a1a827c343f7cc" Feb 17 16:26:27 crc kubenswrapper[4874]: I0217 16:26:27.086071 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:26:27 crc kubenswrapper[4874]: I0217 16:26:27.098187 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 16:26:27 crc kubenswrapper[4874]: I0217 16:26:27.105430 4874 scope.go:117] "RemoveContainer" containerID="9856cd7e07aa86502d5ecd445968211f7bad8c3e3c4debc17748ddb307a8652f" Feb 17 16:26:27 crc kubenswrapper[4874]: I0217 16:26:27.127226 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:26:27 crc kubenswrapper[4874]: I0217 16:26:27.129781 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:26:27 crc kubenswrapper[4874]: I0217 16:26:27.133811 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:26:27 crc kubenswrapper[4874]: I0217 16:26:27.134212 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:26:27 crc kubenswrapper[4874]: I0217 16:26:27.139518 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:26:27 crc kubenswrapper[4874]: I0217 16:26:27.241571 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/671e28c7-fd06-4eae-9d3a-c7c6e8624590-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"671e28c7-fd06-4eae-9d3a-c7c6e8624590\") " pod="openstack/ceilometer-0" Feb 17 16:26:27 crc kubenswrapper[4874]: I0217 16:26:27.243057 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/671e28c7-fd06-4eae-9d3a-c7c6e8624590-scripts\") pod \"ceilometer-0\" (UID: \"671e28c7-fd06-4eae-9d3a-c7c6e8624590\") " pod="openstack/ceilometer-0" Feb 17 16:26:27 crc kubenswrapper[4874]: I0217 16:26:27.243273 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/671e28c7-fd06-4eae-9d3a-c7c6e8624590-run-httpd\") pod \"ceilometer-0\" (UID: \"671e28c7-fd06-4eae-9d3a-c7c6e8624590\") " pod="openstack/ceilometer-0" Feb 17 16:26:27 crc kubenswrapper[4874]: I0217 16:26:27.243310 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/671e28c7-fd06-4eae-9d3a-c7c6e8624590-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"671e28c7-fd06-4eae-9d3a-c7c6e8624590\") " 
pod="openstack/ceilometer-0" Feb 17 16:26:27 crc kubenswrapper[4874]: I0217 16:26:27.243360 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/671e28c7-fd06-4eae-9d3a-c7c6e8624590-log-httpd\") pod \"ceilometer-0\" (UID: \"671e28c7-fd06-4eae-9d3a-c7c6e8624590\") " pod="openstack/ceilometer-0" Feb 17 16:26:27 crc kubenswrapper[4874]: I0217 16:26:27.243408 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v6b7\" (UniqueName: \"kubernetes.io/projected/671e28c7-fd06-4eae-9d3a-c7c6e8624590-kube-api-access-7v6b7\") pod \"ceilometer-0\" (UID: \"671e28c7-fd06-4eae-9d3a-c7c6e8624590\") " pod="openstack/ceilometer-0" Feb 17 16:26:27 crc kubenswrapper[4874]: I0217 16:26:27.243608 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/671e28c7-fd06-4eae-9d3a-c7c6e8624590-config-data\") pod \"ceilometer-0\" (UID: \"671e28c7-fd06-4eae-9d3a-c7c6e8624590\") " pod="openstack/ceilometer-0" Feb 17 16:26:27 crc kubenswrapper[4874]: I0217 16:26:27.346123 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v6b7\" (UniqueName: \"kubernetes.io/projected/671e28c7-fd06-4eae-9d3a-c7c6e8624590-kube-api-access-7v6b7\") pod \"ceilometer-0\" (UID: \"671e28c7-fd06-4eae-9d3a-c7c6e8624590\") " pod="openstack/ceilometer-0" Feb 17 16:26:27 crc kubenswrapper[4874]: I0217 16:26:27.346211 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/671e28c7-fd06-4eae-9d3a-c7c6e8624590-config-data\") pod \"ceilometer-0\" (UID: \"671e28c7-fd06-4eae-9d3a-c7c6e8624590\") " pod="openstack/ceilometer-0" Feb 17 16:26:27 crc kubenswrapper[4874]: I0217 16:26:27.346264 4874 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/671e28c7-fd06-4eae-9d3a-c7c6e8624590-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"671e28c7-fd06-4eae-9d3a-c7c6e8624590\") " pod="openstack/ceilometer-0" Feb 17 16:26:27 crc kubenswrapper[4874]: I0217 16:26:27.346312 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/671e28c7-fd06-4eae-9d3a-c7c6e8624590-scripts\") pod \"ceilometer-0\" (UID: \"671e28c7-fd06-4eae-9d3a-c7c6e8624590\") " pod="openstack/ceilometer-0" Feb 17 16:26:27 crc kubenswrapper[4874]: I0217 16:26:27.346418 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/671e28c7-fd06-4eae-9d3a-c7c6e8624590-run-httpd\") pod \"ceilometer-0\" (UID: \"671e28c7-fd06-4eae-9d3a-c7c6e8624590\") " pod="openstack/ceilometer-0" Feb 17 16:26:27 crc kubenswrapper[4874]: I0217 16:26:27.346447 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/671e28c7-fd06-4eae-9d3a-c7c6e8624590-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"671e28c7-fd06-4eae-9d3a-c7c6e8624590\") " pod="openstack/ceilometer-0" Feb 17 16:26:27 crc kubenswrapper[4874]: I0217 16:26:27.346493 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/671e28c7-fd06-4eae-9d3a-c7c6e8624590-log-httpd\") pod \"ceilometer-0\" (UID: \"671e28c7-fd06-4eae-9d3a-c7c6e8624590\") " pod="openstack/ceilometer-0" Feb 17 16:26:27 crc kubenswrapper[4874]: I0217 16:26:27.347008 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/671e28c7-fd06-4eae-9d3a-c7c6e8624590-log-httpd\") pod \"ceilometer-0\" (UID: \"671e28c7-fd06-4eae-9d3a-c7c6e8624590\") " pod="openstack/ceilometer-0" Feb 17 16:26:27 crc 
kubenswrapper[4874]: I0217 16:26:27.348128 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/671e28c7-fd06-4eae-9d3a-c7c6e8624590-run-httpd\") pod \"ceilometer-0\" (UID: \"671e28c7-fd06-4eae-9d3a-c7c6e8624590\") " pod="openstack/ceilometer-0" Feb 17 16:26:27 crc kubenswrapper[4874]: I0217 16:26:27.352339 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/671e28c7-fd06-4eae-9d3a-c7c6e8624590-scripts\") pod \"ceilometer-0\" (UID: \"671e28c7-fd06-4eae-9d3a-c7c6e8624590\") " pod="openstack/ceilometer-0" Feb 17 16:26:27 crc kubenswrapper[4874]: I0217 16:26:27.353259 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/671e28c7-fd06-4eae-9d3a-c7c6e8624590-config-data\") pod \"ceilometer-0\" (UID: \"671e28c7-fd06-4eae-9d3a-c7c6e8624590\") " pod="openstack/ceilometer-0" Feb 17 16:26:27 crc kubenswrapper[4874]: I0217 16:26:27.353431 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/671e28c7-fd06-4eae-9d3a-c7c6e8624590-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"671e28c7-fd06-4eae-9d3a-c7c6e8624590\") " pod="openstack/ceilometer-0" Feb 17 16:26:27 crc kubenswrapper[4874]: I0217 16:26:27.355602 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/671e28c7-fd06-4eae-9d3a-c7c6e8624590-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"671e28c7-fd06-4eae-9d3a-c7c6e8624590\") " pod="openstack/ceilometer-0" Feb 17 16:26:27 crc kubenswrapper[4874]: I0217 16:26:27.364884 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v6b7\" (UniqueName: \"kubernetes.io/projected/671e28c7-fd06-4eae-9d3a-c7c6e8624590-kube-api-access-7v6b7\") pod \"ceilometer-0\" (UID: 
\"671e28c7-fd06-4eae-9d3a-c7c6e8624590\") " pod="openstack/ceilometer-0" Feb 17 16:26:27 crc kubenswrapper[4874]: I0217 16:26:27.449790 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:26:27 crc kubenswrapper[4874]: I0217 16:26:27.724431 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:26:27 crc kubenswrapper[4874]: I0217 16:26:27.724683 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:26:28 crc kubenswrapper[4874]: I0217 16:26:28.029568 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"de9261d2-3f0c-40dc-bd1f-07c6216ea317","Type":"ContainerStarted","Data":"1dcc2cf84e10a94a91cc5192ff0ccbf396ec753b7862914e411e7797f50d7905"} Feb 17 16:26:28 crc kubenswrapper[4874]: I0217 16:26:28.029631 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"de9261d2-3f0c-40dc-bd1f-07c6216ea317","Type":"ContainerStarted","Data":"6238fd445c55262241ec1aa9a6dcce7d914d958531207d785d76e34d1930c39a"} Feb 17 16:26:28 crc kubenswrapper[4874]: I0217 16:26:28.031701 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 17 16:26:28 crc kubenswrapper[4874]: I0217 16:26:28.111759 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.111739164 
podStartE2EDuration="2.111739164s" podCreationTimestamp="2026-02-17 16:26:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:26:28.050862627 +0000 UTC m=+1398.345251208" watchObservedRunningTime="2026-02-17 16:26:28.111739164 +0000 UTC m=+1398.406127725" Feb 17 16:26:28 crc kubenswrapper[4874]: I0217 16:26:28.129069 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:26:28 crc kubenswrapper[4874]: I0217 16:26:28.472579 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="161ae496-353b-44b9-b228-febefa07e67f" path="/var/lib/kubelet/pods/161ae496-353b-44b9-b228-febefa07e67f/volumes" Feb 17 16:26:29 crc kubenswrapper[4874]: I0217 16:26:29.045595 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"671e28c7-fd06-4eae-9d3a-c7c6e8624590","Type":"ContainerStarted","Data":"83ca433d25659299bdce7976ea1c719a86fbd96d36ed344b50785d9166741bb5"} Feb 17 16:26:30 crc kubenswrapper[4874]: I0217 16:26:30.056639 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"671e28c7-fd06-4eae-9d3a-c7c6e8624590","Type":"ContainerStarted","Data":"aa59b39ce6d6c844a2b8a05bd4ddf5e948f6b421e69d90ec4e1c575aa1219f5d"} Feb 17 16:26:31 crc kubenswrapper[4874]: I0217 16:26:31.069349 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"671e28c7-fd06-4eae-9d3a-c7c6e8624590","Type":"ContainerStarted","Data":"57cb2f53058159d12d1cfcce7d1c676ff8c0b7f616f92c1894b3d26ca21f3676"} Feb 17 16:26:32 crc kubenswrapper[4874]: I0217 16:26:32.083300 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"671e28c7-fd06-4eae-9d3a-c7c6e8624590","Type":"ContainerStarted","Data":"b18a8f41331f61565e9bea7e65269d3784eb845674aef93e1ff2dd27681a8efd"} Feb 17 16:26:35 crc kubenswrapper[4874]: I0217 
16:26:35.114344 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"671e28c7-fd06-4eae-9d3a-c7c6e8624590","Type":"ContainerStarted","Data":"ce2e9f72a9bdc6a1fed9d1bc887d649df2b12ff2bda4e8808e73c6f53e9187ca"} Feb 17 16:26:35 crc kubenswrapper[4874]: I0217 16:26:35.116361 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 16:26:35 crc kubenswrapper[4874]: I0217 16:26:35.140279 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.126735386 podStartE2EDuration="8.140262626s" podCreationTimestamp="2026-02-17 16:26:27 +0000 UTC" firstStartedPulling="2026-02-17 16:26:28.121925163 +0000 UTC m=+1398.416313724" lastFinishedPulling="2026-02-17 16:26:34.135452403 +0000 UTC m=+1404.429840964" observedRunningTime="2026-02-17 16:26:35.136233697 +0000 UTC m=+1405.430622258" watchObservedRunningTime="2026-02-17 16:26:35.140262626 +0000 UTC m=+1405.434651187" Feb 17 16:26:36 crc kubenswrapper[4874]: I0217 16:26:36.592638 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.351593 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-78mdm"] Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.353317 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-78mdm" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.355298 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.355508 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.362889 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-78mdm"] Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.518259 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74d95d6d-ef3c-4154-a40d-5bee661b7d56-scripts\") pod \"nova-cell0-cell-mapping-78mdm\" (UID: \"74d95d6d-ef3c-4154-a40d-5bee661b7d56\") " pod="openstack/nova-cell0-cell-mapping-78mdm" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.518542 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74d95d6d-ef3c-4154-a40d-5bee661b7d56-config-data\") pod \"nova-cell0-cell-mapping-78mdm\" (UID: \"74d95d6d-ef3c-4154-a40d-5bee661b7d56\") " pod="openstack/nova-cell0-cell-mapping-78mdm" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.518773 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74d95d6d-ef3c-4154-a40d-5bee661b7d56-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-78mdm\" (UID: \"74d95d6d-ef3c-4154-a40d-5bee661b7d56\") " pod="openstack/nova-cell0-cell-mapping-78mdm" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.518960 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c26w\" (UniqueName: 
\"kubernetes.io/projected/74d95d6d-ef3c-4154-a40d-5bee661b7d56-kube-api-access-6c26w\") pod \"nova-cell0-cell-mapping-78mdm\" (UID: \"74d95d6d-ef3c-4154-a40d-5bee661b7d56\") " pod="openstack/nova-cell0-cell-mapping-78mdm" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.537315 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.539243 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.544670 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.554607 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.619660 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.621780 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.628680 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74d95d6d-ef3c-4154-a40d-5bee661b7d56-config-data\") pod \"nova-cell0-cell-mapping-78mdm\" (UID: \"74d95d6d-ef3c-4154-a40d-5bee661b7d56\") " pod="openstack/nova-cell0-cell-mapping-78mdm" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.629070 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74d95d6d-ef3c-4154-a40d-5bee661b7d56-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-78mdm\" (UID: \"74d95d6d-ef3c-4154-a40d-5bee661b7d56\") " pod="openstack/nova-cell0-cell-mapping-78mdm" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.653761 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c26w\" (UniqueName: \"kubernetes.io/projected/74d95d6d-ef3c-4154-a40d-5bee661b7d56-kube-api-access-6c26w\") pod \"nova-cell0-cell-mapping-78mdm\" (UID: \"74d95d6d-ef3c-4154-a40d-5bee661b7d56\") " pod="openstack/nova-cell0-cell-mapping-78mdm" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.654104 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74d95d6d-ef3c-4154-a40d-5bee661b7d56-scripts\") pod \"nova-cell0-cell-mapping-78mdm\" (UID: \"74d95d6d-ef3c-4154-a40d-5bee661b7d56\") " pod="openstack/nova-cell0-cell-mapping-78mdm" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.646827 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.629982 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 
16:26:37.647868 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74d95d6d-ef3c-4154-a40d-5bee661b7d56-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-78mdm\" (UID: \"74d95d6d-ef3c-4154-a40d-5bee661b7d56\") " pod="openstack/nova-cell0-cell-mapping-78mdm" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.662424 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74d95d6d-ef3c-4154-a40d-5bee661b7d56-scripts\") pod \"nova-cell0-cell-mapping-78mdm\" (UID: \"74d95d6d-ef3c-4154-a40d-5bee661b7d56\") " pod="openstack/nova-cell0-cell-mapping-78mdm" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.676367 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74d95d6d-ef3c-4154-a40d-5bee661b7d56-config-data\") pod \"nova-cell0-cell-mapping-78mdm\" (UID: \"74d95d6d-ef3c-4154-a40d-5bee661b7d56\") " pod="openstack/nova-cell0-cell-mapping-78mdm" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.684773 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c26w\" (UniqueName: \"kubernetes.io/projected/74d95d6d-ef3c-4154-a40d-5bee661b7d56-kube-api-access-6c26w\") pod \"nova-cell0-cell-mapping-78mdm\" (UID: \"74d95d6d-ef3c-4154-a40d-5bee661b7d56\") " pod="openstack/nova-cell0-cell-mapping-78mdm" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.765162 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz8lf\" (UniqueName: \"kubernetes.io/projected/f40c4b93-cd36-443a-b3c2-b2afa825606b-kube-api-access-gz8lf\") pod \"nova-api-0\" (UID: \"f40c4b93-cd36-443a-b3c2-b2afa825606b\") " pod="openstack/nova-api-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.765236 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f40c4b93-cd36-443a-b3c2-b2afa825606b-logs\") pod \"nova-api-0\" (UID: \"f40c4b93-cd36-443a-b3c2-b2afa825606b\") " pod="openstack/nova-api-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.765265 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b3d858a-3158-4d4b-81d3-ef898bb8695f-config-data\") pod \"nova-scheduler-0\" (UID: \"3b3d858a-3158-4d4b-81d3-ef898bb8695f\") " pod="openstack/nova-scheduler-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.765307 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b3d858a-3158-4d4b-81d3-ef898bb8695f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3b3d858a-3158-4d4b-81d3-ef898bb8695f\") " pod="openstack/nova-scheduler-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.765405 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f40c4b93-cd36-443a-b3c2-b2afa825606b-config-data\") pod \"nova-api-0\" (UID: \"f40c4b93-cd36-443a-b3c2-b2afa825606b\") " pod="openstack/nova-api-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.765505 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k759\" (UniqueName: \"kubernetes.io/projected/3b3d858a-3158-4d4b-81d3-ef898bb8695f-kube-api-access-6k759\") pod \"nova-scheduler-0\" (UID: \"3b3d858a-3158-4d4b-81d3-ef898bb8695f\") " pod="openstack/nova-scheduler-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.765532 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f40c4b93-cd36-443a-b3c2-b2afa825606b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f40c4b93-cd36-443a-b3c2-b2afa825606b\") " pod="openstack/nova-api-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.787516 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-pd7kk"] Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.790228 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-pd7kk" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.800635 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.805408 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.808668 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.830501 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-pd7kk"] Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.845662 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.865595 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.867200 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.870133 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k759\" (UniqueName: \"kubernetes.io/projected/3b3d858a-3158-4d4b-81d3-ef898bb8695f-kube-api-access-6k759\") pod \"nova-scheduler-0\" (UID: \"3b3d858a-3158-4d4b-81d3-ef898bb8695f\") " pod="openstack/nova-scheduler-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.870163 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f40c4b93-cd36-443a-b3c2-b2afa825606b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f40c4b93-cd36-443a-b3c2-b2afa825606b\") " pod="openstack/nova-api-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.870224 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3e465d4-50df-419e-b724-3e6b957613e5-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-pd7kk\" (UID: \"f3e465d4-50df-419e-b724-3e6b957613e5\") " pod="openstack/dnsmasq-dns-568d7fd7cf-pd7kk" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.870241 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4e595bf-badf-492a-beb3-10d3bc5562b9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d4e595bf-badf-492a-beb3-10d3bc5562b9\") " pod="openstack/nova-metadata-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.870260 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3e465d4-50df-419e-b724-3e6b957613e5-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-pd7kk\" (UID: \"f3e465d4-50df-419e-b724-3e6b957613e5\") " 
pod="openstack/dnsmasq-dns-568d7fd7cf-pd7kk" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.870301 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4e595bf-badf-492a-beb3-10d3bc5562b9-logs\") pod \"nova-metadata-0\" (UID: \"d4e595bf-badf-492a-beb3-10d3bc5562b9\") " pod="openstack/nova-metadata-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.870346 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3e465d4-50df-419e-b724-3e6b957613e5-config\") pod \"dnsmasq-dns-568d7fd7cf-pd7kk\" (UID: \"f3e465d4-50df-419e-b724-3e6b957613e5\") " pod="openstack/dnsmasq-dns-568d7fd7cf-pd7kk" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.870369 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz8lf\" (UniqueName: \"kubernetes.io/projected/f40c4b93-cd36-443a-b3c2-b2afa825606b-kube-api-access-gz8lf\") pod \"nova-api-0\" (UID: \"f40c4b93-cd36-443a-b3c2-b2afa825606b\") " pod="openstack/nova-api-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.870403 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bbcm\" (UniqueName: \"kubernetes.io/projected/d4e595bf-badf-492a-beb3-10d3bc5562b9-kube-api-access-4bbcm\") pod \"nova-metadata-0\" (UID: \"d4e595bf-badf-492a-beb3-10d3bc5562b9\") " pod="openstack/nova-metadata-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.870427 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f40c4b93-cd36-443a-b3c2-b2afa825606b-logs\") pod \"nova-api-0\" (UID: \"f40c4b93-cd36-443a-b3c2-b2afa825606b\") " pod="openstack/nova-api-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.870453 4874 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b3d858a-3158-4d4b-81d3-ef898bb8695f-config-data\") pod \"nova-scheduler-0\" (UID: \"3b3d858a-3158-4d4b-81d3-ef898bb8695f\") " pod="openstack/nova-scheduler-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.870480 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b3d858a-3158-4d4b-81d3-ef898bb8695f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3b3d858a-3158-4d4b-81d3-ef898bb8695f\") " pod="openstack/nova-scheduler-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.870520 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gcnw\" (UniqueName: \"kubernetes.io/projected/f3e465d4-50df-419e-b724-3e6b957613e5-kube-api-access-8gcnw\") pod \"dnsmasq-dns-568d7fd7cf-pd7kk\" (UID: \"f3e465d4-50df-419e-b724-3e6b957613e5\") " pod="openstack/dnsmasq-dns-568d7fd7cf-pd7kk" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.870541 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3e465d4-50df-419e-b724-3e6b957613e5-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-pd7kk\" (UID: \"f3e465d4-50df-419e-b724-3e6b957613e5\") " pod="openstack/dnsmasq-dns-568d7fd7cf-pd7kk" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.870546 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.872201 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f40c4b93-cd36-443a-b3c2-b2afa825606b-logs\") pod \"nova-api-0\" (UID: \"f40c4b93-cd36-443a-b3c2-b2afa825606b\") " pod="openstack/nova-api-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 
16:26:37.872959 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3e465d4-50df-419e-b724-3e6b957613e5-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-pd7kk\" (UID: \"f3e465d4-50df-419e-b724-3e6b957613e5\") " pod="openstack/dnsmasq-dns-568d7fd7cf-pd7kk" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.873022 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f40c4b93-cd36-443a-b3c2-b2afa825606b-config-data\") pod \"nova-api-0\" (UID: \"f40c4b93-cd36-443a-b3c2-b2afa825606b\") " pod="openstack/nova-api-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.873103 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4e595bf-badf-492a-beb3-10d3bc5562b9-config-data\") pod \"nova-metadata-0\" (UID: \"d4e595bf-badf-492a-beb3-10d3bc5562b9\") " pod="openstack/nova-metadata-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.876042 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.878523 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b3d858a-3158-4d4b-81d3-ef898bb8695f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3b3d858a-3158-4d4b-81d3-ef898bb8695f\") " pod="openstack/nova-scheduler-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.878626 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b3d858a-3158-4d4b-81d3-ef898bb8695f-config-data\") pod \"nova-scheduler-0\" (UID: \"3b3d858a-3158-4d4b-81d3-ef898bb8695f\") " pod="openstack/nova-scheduler-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 
16:26:37.881898 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f40c4b93-cd36-443a-b3c2-b2afa825606b-config-data\") pod \"nova-api-0\" (UID: \"f40c4b93-cd36-443a-b3c2-b2afa825606b\") " pod="openstack/nova-api-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.887221 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f40c4b93-cd36-443a-b3c2-b2afa825606b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f40c4b93-cd36-443a-b3c2-b2afa825606b\") " pod="openstack/nova-api-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.901865 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz8lf\" (UniqueName: \"kubernetes.io/projected/f40c4b93-cd36-443a-b3c2-b2afa825606b-kube-api-access-gz8lf\") pod \"nova-api-0\" (UID: \"f40c4b93-cd36-443a-b3c2-b2afa825606b\") " pod="openstack/nova-api-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.904840 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k759\" (UniqueName: \"kubernetes.io/projected/3b3d858a-3158-4d4b-81d3-ef898bb8695f-kube-api-access-6k759\") pod \"nova-scheduler-0\" (UID: \"3b3d858a-3158-4d4b-81d3-ef898bb8695f\") " pod="openstack/nova-scheduler-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.974924 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3ed2c18-8df0-435d-a3b1-056be5a94c20-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3ed2c18-8df0-435d-a3b1-056be5a94c20\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.974966 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d3ed2c18-8df0-435d-a3b1-056be5a94c20-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3ed2c18-8df0-435d-a3b1-056be5a94c20\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.975011 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4e595bf-badf-492a-beb3-10d3bc5562b9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d4e595bf-badf-492a-beb3-10d3bc5562b9\") " pod="openstack/nova-metadata-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.975028 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3e465d4-50df-419e-b724-3e6b957613e5-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-pd7kk\" (UID: \"f3e465d4-50df-419e-b724-3e6b957613e5\") " pod="openstack/dnsmasq-dns-568d7fd7cf-pd7kk" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.975050 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3e465d4-50df-419e-b724-3e6b957613e5-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-pd7kk\" (UID: \"f3e465d4-50df-419e-b724-3e6b957613e5\") " pod="openstack/dnsmasq-dns-568d7fd7cf-pd7kk" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.975109 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4e595bf-badf-492a-beb3-10d3bc5562b9-logs\") pod \"nova-metadata-0\" (UID: \"d4e595bf-badf-492a-beb3-10d3bc5562b9\") " pod="openstack/nova-metadata-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.975157 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3e465d4-50df-419e-b724-3e6b957613e5-config\") pod \"dnsmasq-dns-568d7fd7cf-pd7kk\" (UID: 
\"f3e465d4-50df-419e-b724-3e6b957613e5\") " pod="openstack/dnsmasq-dns-568d7fd7cf-pd7kk" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.975191 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bbcm\" (UniqueName: \"kubernetes.io/projected/d4e595bf-badf-492a-beb3-10d3bc5562b9-kube-api-access-4bbcm\") pod \"nova-metadata-0\" (UID: \"d4e595bf-badf-492a-beb3-10d3bc5562b9\") " pod="openstack/nova-metadata-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.975425 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gcnw\" (UniqueName: \"kubernetes.io/projected/f3e465d4-50df-419e-b724-3e6b957613e5-kube-api-access-8gcnw\") pod \"dnsmasq-dns-568d7fd7cf-pd7kk\" (UID: \"f3e465d4-50df-419e-b724-3e6b957613e5\") " pod="openstack/dnsmasq-dns-568d7fd7cf-pd7kk" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.975448 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3e465d4-50df-419e-b724-3e6b957613e5-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-pd7kk\" (UID: \"f3e465d4-50df-419e-b724-3e6b957613e5\") " pod="openstack/dnsmasq-dns-568d7fd7cf-pd7kk" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.975465 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3e465d4-50df-419e-b724-3e6b957613e5-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-pd7kk\" (UID: \"f3e465d4-50df-419e-b724-3e6b957613e5\") " pod="openstack/dnsmasq-dns-568d7fd7cf-pd7kk" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.975487 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx64c\" (UniqueName: \"kubernetes.io/projected/d3ed2c18-8df0-435d-a3b1-056be5a94c20-kube-api-access-wx64c\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"d3ed2c18-8df0-435d-a3b1-056be5a94c20\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.975540 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4e595bf-badf-492a-beb3-10d3bc5562b9-config-data\") pod \"nova-metadata-0\" (UID: \"d4e595bf-badf-492a-beb3-10d3bc5562b9\") " pod="openstack/nova-metadata-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.977766 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4e595bf-badf-492a-beb3-10d3bc5562b9-logs\") pod \"nova-metadata-0\" (UID: \"d4e595bf-badf-492a-beb3-10d3bc5562b9\") " pod="openstack/nova-metadata-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.978430 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3e465d4-50df-419e-b724-3e6b957613e5-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-pd7kk\" (UID: \"f3e465d4-50df-419e-b724-3e6b957613e5\") " pod="openstack/dnsmasq-dns-568d7fd7cf-pd7kk" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.979011 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3e465d4-50df-419e-b724-3e6b957613e5-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-pd7kk\" (UID: \"f3e465d4-50df-419e-b724-3e6b957613e5\") " pod="openstack/dnsmasq-dns-568d7fd7cf-pd7kk" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.979558 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3e465d4-50df-419e-b724-3e6b957613e5-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-pd7kk\" (UID: \"f3e465d4-50df-419e-b724-3e6b957613e5\") " pod="openstack/dnsmasq-dns-568d7fd7cf-pd7kk" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.979636 4874 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3e465d4-50df-419e-b724-3e6b957613e5-config\") pod \"dnsmasq-dns-568d7fd7cf-pd7kk\" (UID: \"f3e465d4-50df-419e-b724-3e6b957613e5\") " pod="openstack/dnsmasq-dns-568d7fd7cf-pd7kk" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.980225 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-78mdm" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.981041 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3e465d4-50df-419e-b724-3e6b957613e5-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-pd7kk\" (UID: \"f3e465d4-50df-419e-b724-3e6b957613e5\") " pod="openstack/dnsmasq-dns-568d7fd7cf-pd7kk" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.982751 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4e595bf-badf-492a-beb3-10d3bc5562b9-config-data\") pod \"nova-metadata-0\" (UID: \"d4e595bf-badf-492a-beb3-10d3bc5562b9\") " pod="openstack/nova-metadata-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.984798 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4e595bf-badf-492a-beb3-10d3bc5562b9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d4e595bf-badf-492a-beb3-10d3bc5562b9\") " pod="openstack/nova-metadata-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.998722 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bbcm\" (UniqueName: \"kubernetes.io/projected/d4e595bf-badf-492a-beb3-10d3bc5562b9-kube-api-access-4bbcm\") pod \"nova-metadata-0\" (UID: \"d4e595bf-badf-492a-beb3-10d3bc5562b9\") " pod="openstack/nova-metadata-0" Feb 17 16:26:37 crc kubenswrapper[4874]: I0217 16:26:37.999873 4874 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gcnw\" (UniqueName: \"kubernetes.io/projected/f3e465d4-50df-419e-b724-3e6b957613e5-kube-api-access-8gcnw\") pod \"dnsmasq-dns-568d7fd7cf-pd7kk\" (UID: \"f3e465d4-50df-419e-b724-3e6b957613e5\") " pod="openstack/dnsmasq-dns-568d7fd7cf-pd7kk" Feb 17 16:26:38 crc kubenswrapper[4874]: I0217 16:26:38.069755 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:26:38 crc kubenswrapper[4874]: I0217 16:26:38.077756 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3ed2c18-8df0-435d-a3b1-056be5a94c20-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3ed2c18-8df0-435d-a3b1-056be5a94c20\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:26:38 crc kubenswrapper[4874]: I0217 16:26:38.078171 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ed2c18-8df0-435d-a3b1-056be5a94c20-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3ed2c18-8df0-435d-a3b1-056be5a94c20\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:26:38 crc kubenswrapper[4874]: I0217 16:26:38.078534 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx64c\" (UniqueName: \"kubernetes.io/projected/d3ed2c18-8df0-435d-a3b1-056be5a94c20-kube-api-access-wx64c\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3ed2c18-8df0-435d-a3b1-056be5a94c20\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:26:38 crc kubenswrapper[4874]: I0217 16:26:38.083732 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ed2c18-8df0-435d-a3b1-056be5a94c20-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3ed2c18-8df0-435d-a3b1-056be5a94c20\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:26:38 crc kubenswrapper[4874]: I0217 16:26:38.085747 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3ed2c18-8df0-435d-a3b1-056be5a94c20-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3ed2c18-8df0-435d-a3b1-056be5a94c20\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:26:38 crc kubenswrapper[4874]: I0217 16:26:38.104664 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx64c\" (UniqueName: \"kubernetes.io/projected/d3ed2c18-8df0-435d-a3b1-056be5a94c20-kube-api-access-wx64c\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3ed2c18-8df0-435d-a3b1-056be5a94c20\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:26:38 crc kubenswrapper[4874]: I0217 16:26:38.144972 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-pd7kk" Feb 17 16:26:38 crc kubenswrapper[4874]: I0217 16:26:38.169418 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:26:38 crc kubenswrapper[4874]: I0217 16:26:38.171380 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:26:39 crc kubenswrapper[4874]: I0217 16:26:38.358288 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:26:39 crc kubenswrapper[4874]: I0217 16:26:39.241280 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lnpsd"] Feb 17 16:26:39 crc kubenswrapper[4874]: I0217 16:26:39.243752 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lnpsd" Feb 17 16:26:39 crc kubenswrapper[4874]: I0217 16:26:39.246284 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 17 16:26:39 crc kubenswrapper[4874]: I0217 16:26:39.246413 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 17 16:26:39 crc kubenswrapper[4874]: I0217 16:26:39.254862 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lnpsd"] Feb 17 16:26:39 crc kubenswrapper[4874]: I0217 16:26:39.430222 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/820dffc3-fb0f-4dd2-b9bc-a680d02a84d9-scripts\") pod \"nova-cell1-conductor-db-sync-lnpsd\" (UID: \"820dffc3-fb0f-4dd2-b9bc-a680d02a84d9\") " pod="openstack/nova-cell1-conductor-db-sync-lnpsd" Feb 17 16:26:39 crc kubenswrapper[4874]: I0217 16:26:39.430483 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2rk5\" (UniqueName: \"kubernetes.io/projected/820dffc3-fb0f-4dd2-b9bc-a680d02a84d9-kube-api-access-f2rk5\") pod \"nova-cell1-conductor-db-sync-lnpsd\" (UID: \"820dffc3-fb0f-4dd2-b9bc-a680d02a84d9\") " pod="openstack/nova-cell1-conductor-db-sync-lnpsd" Feb 17 16:26:39 crc kubenswrapper[4874]: I0217 16:26:39.430594 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/820dffc3-fb0f-4dd2-b9bc-a680d02a84d9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-lnpsd\" (UID: \"820dffc3-fb0f-4dd2-b9bc-a680d02a84d9\") " pod="openstack/nova-cell1-conductor-db-sync-lnpsd" Feb 17 16:26:39 crc kubenswrapper[4874]: I0217 16:26:39.430918 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/820dffc3-fb0f-4dd2-b9bc-a680d02a84d9-config-data\") pod \"nova-cell1-conductor-db-sync-lnpsd\" (UID: \"820dffc3-fb0f-4dd2-b9bc-a680d02a84d9\") " pod="openstack/nova-cell1-conductor-db-sync-lnpsd" Feb 17 16:26:39 crc kubenswrapper[4874]: I0217 16:26:39.536044 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/820dffc3-fb0f-4dd2-b9bc-a680d02a84d9-scripts\") pod \"nova-cell1-conductor-db-sync-lnpsd\" (UID: \"820dffc3-fb0f-4dd2-b9bc-a680d02a84d9\") " pod="openstack/nova-cell1-conductor-db-sync-lnpsd" Feb 17 16:26:39 crc kubenswrapper[4874]: I0217 16:26:39.536299 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2rk5\" (UniqueName: \"kubernetes.io/projected/820dffc3-fb0f-4dd2-b9bc-a680d02a84d9-kube-api-access-f2rk5\") pod \"nova-cell1-conductor-db-sync-lnpsd\" (UID: \"820dffc3-fb0f-4dd2-b9bc-a680d02a84d9\") " pod="openstack/nova-cell1-conductor-db-sync-lnpsd" Feb 17 16:26:39 crc kubenswrapper[4874]: I0217 16:26:39.536367 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/820dffc3-fb0f-4dd2-b9bc-a680d02a84d9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-lnpsd\" (UID: \"820dffc3-fb0f-4dd2-b9bc-a680d02a84d9\") " pod="openstack/nova-cell1-conductor-db-sync-lnpsd" Feb 17 16:26:39 crc kubenswrapper[4874]: I0217 16:26:39.536466 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/820dffc3-fb0f-4dd2-b9bc-a680d02a84d9-config-data\") pod \"nova-cell1-conductor-db-sync-lnpsd\" (UID: \"820dffc3-fb0f-4dd2-b9bc-a680d02a84d9\") " pod="openstack/nova-cell1-conductor-db-sync-lnpsd" Feb 17 16:26:39 crc kubenswrapper[4874]: I0217 16:26:39.542482 4874 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/820dffc3-fb0f-4dd2-b9bc-a680d02a84d9-config-data\") pod \"nova-cell1-conductor-db-sync-lnpsd\" (UID: \"820dffc3-fb0f-4dd2-b9bc-a680d02a84d9\") " pod="openstack/nova-cell1-conductor-db-sync-lnpsd" Feb 17 16:26:39 crc kubenswrapper[4874]: I0217 16:26:39.550174 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/820dffc3-fb0f-4dd2-b9bc-a680d02a84d9-scripts\") pod \"nova-cell1-conductor-db-sync-lnpsd\" (UID: \"820dffc3-fb0f-4dd2-b9bc-a680d02a84d9\") " pod="openstack/nova-cell1-conductor-db-sync-lnpsd" Feb 17 16:26:39 crc kubenswrapper[4874]: I0217 16:26:39.554669 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/820dffc3-fb0f-4dd2-b9bc-a680d02a84d9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-lnpsd\" (UID: \"820dffc3-fb0f-4dd2-b9bc-a680d02a84d9\") " pod="openstack/nova-cell1-conductor-db-sync-lnpsd" Feb 17 16:26:39 crc kubenswrapper[4874]: I0217 16:26:39.559775 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2rk5\" (UniqueName: \"kubernetes.io/projected/820dffc3-fb0f-4dd2-b9bc-a680d02a84d9-kube-api-access-f2rk5\") pod \"nova-cell1-conductor-db-sync-lnpsd\" (UID: \"820dffc3-fb0f-4dd2-b9bc-a680d02a84d9\") " pod="openstack/nova-cell1-conductor-db-sync-lnpsd" Feb 17 16:26:39 crc kubenswrapper[4874]: I0217 16:26:39.567220 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lnpsd" Feb 17 16:26:39 crc kubenswrapper[4874]: I0217 16:26:39.936806 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-78mdm"] Feb 17 16:26:40 crc kubenswrapper[4874]: I0217 16:26:40.197246 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-78mdm" event={"ID":"74d95d6d-ef3c-4154-a40d-5bee661b7d56","Type":"ContainerStarted","Data":"b0ab965ed1978d1418a7656d4083749c2323934c4220055530cc29c26671284b"} Feb 17 16:26:40 crc kubenswrapper[4874]: I0217 16:26:40.513384 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:26:40 crc kubenswrapper[4874]: I0217 16:26:40.513663 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:26:40 crc kubenswrapper[4874]: I0217 16:26:40.513674 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:26:40 crc kubenswrapper[4874]: I0217 16:26:40.521842 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-pd7kk"] Feb 17 16:26:40 crc kubenswrapper[4874]: I0217 16:26:40.538436 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:26:40 crc kubenswrapper[4874]: I0217 16:26:40.685991 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lnpsd"] Feb 17 16:26:41 crc kubenswrapper[4874]: I0217 16:26:41.228348 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f40c4b93-cd36-443a-b3c2-b2afa825606b","Type":"ContainerStarted","Data":"683f05c5698ab7c05f4fe015315c8389647932fa9f0c9c5df09fc4b97b9af51a"} Feb 17 16:26:41 crc kubenswrapper[4874]: I0217 16:26:41.233421 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lnpsd" 
event={"ID":"820dffc3-fb0f-4dd2-b9bc-a680d02a84d9","Type":"ContainerStarted","Data":"4febb0c9c517a213914dfb27dd0c6bc087f3a254c5aeb2ed1ffcf741ab199284"} Feb 17 16:26:41 crc kubenswrapper[4874]: I0217 16:26:41.233465 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lnpsd" event={"ID":"820dffc3-fb0f-4dd2-b9bc-a680d02a84d9","Type":"ContainerStarted","Data":"e0d48b903cbfdf332bced3f5438f1f9fa287781d648620b9b7515daae988b91e"} Feb 17 16:26:41 crc kubenswrapper[4874]: I0217 16:26:41.256364 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d4e595bf-badf-492a-beb3-10d3bc5562b9","Type":"ContainerStarted","Data":"8e62a7bde7c0aa9fd16b8ef00e6499e90d5bfe4362efa66c62fc55e377e10a68"} Feb 17 16:26:41 crc kubenswrapper[4874]: I0217 16:26:41.260054 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-lnpsd" podStartSLOduration=2.260036571 podStartE2EDuration="2.260036571s" podCreationTimestamp="2026-02-17 16:26:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:26:41.246591053 +0000 UTC m=+1411.540979614" watchObservedRunningTime="2026-02-17 16:26:41.260036571 +0000 UTC m=+1411.554425132" Feb 17 16:26:41 crc kubenswrapper[4874]: I0217 16:26:41.262386 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3b3d858a-3158-4d4b-81d3-ef898bb8695f","Type":"ContainerStarted","Data":"661150ada7d0b9434be7f12dede2c1db7b7154e54353b3976d76c16f1ab230fd"} Feb 17 16:26:41 crc kubenswrapper[4874]: I0217 16:26:41.266724 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-78mdm" event={"ID":"74d95d6d-ef3c-4154-a40d-5bee661b7d56","Type":"ContainerStarted","Data":"bdb53e0a5adb7c4624709ae418e3349df97de66d9507cf6ea08e45046bb785e0"} Feb 17 16:26:41 crc 
kubenswrapper[4874]: I0217 16:26:41.268926 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d3ed2c18-8df0-435d-a3b1-056be5a94c20","Type":"ContainerStarted","Data":"ce600c6c2334249eacb239c1004b41ca4fb4534da85a1d84c487aa256419ae68"} Feb 17 16:26:41 crc kubenswrapper[4874]: I0217 16:26:41.272555 4874 generic.go:334] "Generic (PLEG): container finished" podID="f3e465d4-50df-419e-b724-3e6b957613e5" containerID="98e8ee24c36aa88aac7301c5560bf90638af7061793244e04fc5395ccb1fa82d" exitCode=0 Feb 17 16:26:41 crc kubenswrapper[4874]: I0217 16:26:41.272598 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-pd7kk" event={"ID":"f3e465d4-50df-419e-b724-3e6b957613e5","Type":"ContainerDied","Data":"98e8ee24c36aa88aac7301c5560bf90638af7061793244e04fc5395ccb1fa82d"} Feb 17 16:26:41 crc kubenswrapper[4874]: I0217 16:26:41.272622 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-pd7kk" event={"ID":"f3e465d4-50df-419e-b724-3e6b957613e5","Type":"ContainerStarted","Data":"01dd8cdac32bc87cfc23d3f814fa51e027d542d9691aeeceb30a83bc556cd509"} Feb 17 16:26:41 crc kubenswrapper[4874]: I0217 16:26:41.295209 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-78mdm" podStartSLOduration=4.29519343 podStartE2EDuration="4.29519343s" podCreationTimestamp="2026-02-17 16:26:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:26:41.289985663 +0000 UTC m=+1411.584374224" watchObservedRunningTime="2026-02-17 16:26:41.29519343 +0000 UTC m=+1411.589581981" Feb 17 16:26:41 crc kubenswrapper[4874]: I0217 16:26:41.646387 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:26:41 crc kubenswrapper[4874]: I0217 16:26:41.678562 4874 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:26:42 crc kubenswrapper[4874]: I0217 16:26:42.309888 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-pd7kk" event={"ID":"f3e465d4-50df-419e-b724-3e6b957613e5","Type":"ContainerStarted","Data":"073a06bf9d5eee431c2516dc49a8fcde6070a48fe43b4707401407d6c95cd9cf"} Feb 17 16:26:42 crc kubenswrapper[4874]: W0217 16:26:42.336307 4874 container.go:586] Failed to update stats for container "/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb7283b1_4828_4a90_bdd2_6861b7d6475b.slice/crio-4c64f40ace814a1fae214d9fcdd51ae917e3756400e709f08eabd6305d732a6c": error while statting cgroup v2: [unable to parse /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb7283b1_4828_4a90_bdd2_6861b7d6475b.slice/crio-4c64f40ace814a1fae214d9fcdd51ae917e3756400e709f08eabd6305d732a6c/memory.stat: read /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb7283b1_4828_4a90_bdd2_6861b7d6475b.slice/crio-4c64f40ace814a1fae214d9fcdd51ae917e3756400e709f08eabd6305d732a6c/memory.stat: no such device], continuing to push stats Feb 17 16:26:42 crc kubenswrapper[4874]: W0217 16:26:42.340543 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod161ae496_353b_44b9_b228_febefa07e67f.slice/crio-9856cd7e07aa86502d5ecd445968211f7bad8c3e3c4debc17748ddb307a8652f.scope WatchSource:0}: Error finding container 9856cd7e07aa86502d5ecd445968211f7bad8c3e3c4debc17748ddb307a8652f: Status 404 returned error can't find the container with id 9856cd7e07aa86502d5ecd445968211f7bad8c3e3c4debc17748ddb307a8652f Feb 17 16:26:42 crc kubenswrapper[4874]: I0217 16:26:42.344402 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-568d7fd7cf-pd7kk" podStartSLOduration=5.344387077 podStartE2EDuration="5.344387077s" 
podCreationTimestamp="2026-02-17 16:26:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:26:42.337370316 +0000 UTC m=+1412.631758887" watchObservedRunningTime="2026-02-17 16:26:42.344387077 +0000 UTC m=+1412.638775628" Feb 17 16:26:42 crc kubenswrapper[4874]: W0217 16:26:42.353573 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod161ae496_353b_44b9_b228_febefa07e67f.slice/crio-fc807cb60829a555e050af1312e1124b0df7db14a4fa3470a4a1a827c343f7cc.scope WatchSource:0}: Error finding container fc807cb60829a555e050af1312e1124b0df7db14a4fa3470a4a1a827c343f7cc: Status 404 returned error can't find the container with id fc807cb60829a555e050af1312e1124b0df7db14a4fa3470a4a1a827c343f7cc Feb 17 16:26:42 crc kubenswrapper[4874]: W0217 16:26:42.360378 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod161ae496_353b_44b9_b228_febefa07e67f.slice/crio-678a2bfc6c26cdd8f203ed35745cc70e81b957e085f3b983d2431d7693eb0406.scope WatchSource:0}: Error finding container 678a2bfc6c26cdd8f203ed35745cc70e81b957e085f3b983d2431d7693eb0406: Status 404 returned error can't find the container with id 678a2bfc6c26cdd8f203ed35745cc70e81b957e085f3b983d2431d7693eb0406 Feb 17 16:26:42 crc kubenswrapper[4874]: W0217 16:26:42.362194 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod161ae496_353b_44b9_b228_febefa07e67f.slice/crio-63bf6ad09bcd1ea6065f62619bb59ff73d73539980c9ce69e5a6d06a8612ec59.scope WatchSource:0}: Error finding container 63bf6ad09bcd1ea6065f62619bb59ff73d73539980c9ce69e5a6d06a8612ec59: Status 404 returned error can't find the container with id 63bf6ad09bcd1ea6065f62619bb59ff73d73539980c9ce69e5a6d06a8612ec59 Feb 17 16:26:42 crc 
kubenswrapper[4874]: E0217 16:26:42.531806 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4327f121_2ddc_4367_9055_17c7fe4d855e.slice/crio-conmon-c631b19a947ff54ab119434427ae2f287f61f94f7785ee3ed680e0357a464f44.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4327f121_2ddc_4367_9055_17c7fe4d855e.slice/crio-c631b19a947ff54ab119434427ae2f287f61f94f7785ee3ed680e0357a464f44.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb7283b1_4828_4a90_bdd2_6861b7d6475b.slice/crio-4c64f40ace814a1fae214d9fcdd51ae917e3756400e709f08eabd6305d732a6c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod161ae496_353b_44b9_b228_febefa07e67f.slice/crio-b8b378e6cd461907217a47618910247a6ef84598f4351ddafa4e5a58d3228c6f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4327f121_2ddc_4367_9055_17c7fe4d855e.slice\": RecentStats: unable to find data in memory cache]" Feb 17 16:26:42 crc kubenswrapper[4874]: E0217 16:26:42.531864 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb7283b1_4828_4a90_bdd2_6861b7d6475b.slice/crio-4c64f40ace814a1fae214d9fcdd51ae917e3756400e709f08eabd6305d732a6c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4327f121_2ddc_4367_9055_17c7fe4d855e.slice/crio-conmon-c631b19a947ff54ab119434427ae2f287f61f94f7785ee3ed680e0357a464f44.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:26:43 crc kubenswrapper[4874]: I0217 16:26:43.145829 
4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-568d7fd7cf-pd7kk" Feb 17 16:26:43 crc kubenswrapper[4874]: I0217 16:26:43.327528 4874 generic.go:334] "Generic (PLEG): container finished" podID="fb7283b1-4828-4a90-bdd2-6861b7d6475b" containerID="a012ef2d85a425cd08b332f4ed4e1a9bad275e69a3962cce70123a76ed8faf78" exitCode=137 Feb 17 16:26:43 crc kubenswrapper[4874]: I0217 16:26:43.328577 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-78b7864799-6ls5l" event={"ID":"fb7283b1-4828-4a90-bdd2-6861b7d6475b","Type":"ContainerDied","Data":"a012ef2d85a425cd08b332f4ed4e1a9bad275e69a3962cce70123a76ed8faf78"} Feb 17 16:26:43 crc kubenswrapper[4874]: E0217 16:26:43.376534 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod161ae496_353b_44b9_b228_febefa07e67f.slice/crio-b8b378e6cd461907217a47618910247a6ef84598f4351ddafa4e5a58d3228c6f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb7283b1_4828_4a90_bdd2_6861b7d6475b.slice/crio-4c64f40ace814a1fae214d9fcdd51ae917e3756400e709f08eabd6305d732a6c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99a67b9d_37fa_411f_bfbe_321623f5d8fb.slice/crio-05b93c917baeaf4523227abb92e498aa5096e13d9849b9787dcee2dbd114be23\": RecentStats: unable to find data in memory cache]" Feb 17 16:26:43 crc kubenswrapper[4874]: E0217 16:26:43.567964 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4327f121_2ddc_4367_9055_17c7fe4d855e.slice/crio-4c01c80ff75f545c7ad24c26f26d427f62b8c88c7db6a4f7544ae7b749530ed3\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb7283b1_4828_4a90_bdd2_6861b7d6475b.slice/crio-4c64f40ace814a1fae214d9fcdd51ae917e3756400e709f08eabd6305d732a6c\": RecentStats: unable to find data in memory cache]" Feb 17 16:26:43 crc kubenswrapper[4874]: I0217 16:26:43.721860 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-698669dc7f-2q88l" podUID="99a67b9d-37fa-411f-bfbe-321623f5d8fb" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.214:8000/healthcheck\": dial tcp 10.217.0.214:8000: connect: connection refused" Feb 17 16:26:43 crc kubenswrapper[4874]: I0217 16:26:43.792392 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:26:43 crc kubenswrapper[4874]: I0217 16:26:43.792664 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="671e28c7-fd06-4eae-9d3a-c7c6e8624590" containerName="ceilometer-central-agent" containerID="cri-o://aa59b39ce6d6c844a2b8a05bd4ddf5e948f6b421e69d90ec4e1c575aa1219f5d" gracePeriod=30 Feb 17 16:26:43 crc kubenswrapper[4874]: I0217 16:26:43.792754 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="671e28c7-fd06-4eae-9d3a-c7c6e8624590" containerName="sg-core" containerID="cri-o://b18a8f41331f61565e9bea7e65269d3784eb845674aef93e1ff2dd27681a8efd" gracePeriod=30 Feb 17 16:26:43 crc kubenswrapper[4874]: I0217 16:26:43.792839 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="671e28c7-fd06-4eae-9d3a-c7c6e8624590" containerName="ceilometer-notification-agent" containerID="cri-o://57cb2f53058159d12d1cfcce7d1c676ff8c0b7f616f92c1894b3d26ca21f3676" gracePeriod=30 Feb 17 16:26:43 crc kubenswrapper[4874]: I0217 16:26:43.792757 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="671e28c7-fd06-4eae-9d3a-c7c6e8624590" containerName="proxy-httpd" containerID="cri-o://ce2e9f72a9bdc6a1fed9d1bc887d649df2b12ff2bda4e8808e73c6f53e9187ca" gracePeriod=30 Feb 17 16:26:44 crc kubenswrapper[4874]: I0217 16:26:44.342752 4874 generic.go:334] "Generic (PLEG): container finished" podID="99a67b9d-37fa-411f-bfbe-321623f5d8fb" containerID="205a70de0672725bf7638520f4240e801449e97038002d0756b185dd39d41736" exitCode=137 Feb 17 16:26:44 crc kubenswrapper[4874]: I0217 16:26:44.342924 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-698669dc7f-2q88l" event={"ID":"99a67b9d-37fa-411f-bfbe-321623f5d8fb","Type":"ContainerDied","Data":"205a70de0672725bf7638520f4240e801449e97038002d0756b185dd39d41736"} Feb 17 16:26:44 crc kubenswrapper[4874]: I0217 16:26:44.346385 4874 generic.go:334] "Generic (PLEG): container finished" podID="671e28c7-fd06-4eae-9d3a-c7c6e8624590" containerID="ce2e9f72a9bdc6a1fed9d1bc887d649df2b12ff2bda4e8808e73c6f53e9187ca" exitCode=0 Feb 17 16:26:44 crc kubenswrapper[4874]: I0217 16:26:44.346416 4874 generic.go:334] "Generic (PLEG): container finished" podID="671e28c7-fd06-4eae-9d3a-c7c6e8624590" containerID="b18a8f41331f61565e9bea7e65269d3784eb845674aef93e1ff2dd27681a8efd" exitCode=2 Feb 17 16:26:44 crc kubenswrapper[4874]: I0217 16:26:44.346425 4874 generic.go:334] "Generic (PLEG): container finished" podID="671e28c7-fd06-4eae-9d3a-c7c6e8624590" containerID="aa59b39ce6d6c844a2b8a05bd4ddf5e948f6b421e69d90ec4e1c575aa1219f5d" exitCode=0 Feb 17 16:26:44 crc kubenswrapper[4874]: I0217 16:26:44.347569 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"671e28c7-fd06-4eae-9d3a-c7c6e8624590","Type":"ContainerDied","Data":"ce2e9f72a9bdc6a1fed9d1bc887d649df2b12ff2bda4e8808e73c6f53e9187ca"} Feb 17 16:26:44 crc kubenswrapper[4874]: I0217 16:26:44.347604 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"671e28c7-fd06-4eae-9d3a-c7c6e8624590","Type":"ContainerDied","Data":"b18a8f41331f61565e9bea7e65269d3784eb845674aef93e1ff2dd27681a8efd"} Feb 17 16:26:44 crc kubenswrapper[4874]: I0217 16:26:44.347617 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"671e28c7-fd06-4eae-9d3a-c7c6e8624590","Type":"ContainerDied","Data":"aa59b39ce6d6c844a2b8a05bd4ddf5e948f6b421e69d90ec4e1c575aa1219f5d"} Feb 17 16:26:44 crc kubenswrapper[4874]: I0217 16:26:44.884137 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-78b7864799-6ls5l" Feb 17 16:26:45 crc kubenswrapper[4874]: I0217 16:26:45.007913 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb7283b1-4828-4a90-bdd2-6861b7d6475b-config-data-custom\") pod \"fb7283b1-4828-4a90-bdd2-6861b7d6475b\" (UID: \"fb7283b1-4828-4a90-bdd2-6861b7d6475b\") " Feb 17 16:26:45 crc kubenswrapper[4874]: I0217 16:26:45.007999 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb7283b1-4828-4a90-bdd2-6861b7d6475b-combined-ca-bundle\") pod \"fb7283b1-4828-4a90-bdd2-6861b7d6475b\" (UID: \"fb7283b1-4828-4a90-bdd2-6861b7d6475b\") " Feb 17 16:26:45 crc kubenswrapper[4874]: I0217 16:26:45.008220 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb7283b1-4828-4a90-bdd2-6861b7d6475b-config-data\") pod \"fb7283b1-4828-4a90-bdd2-6861b7d6475b\" (UID: \"fb7283b1-4828-4a90-bdd2-6861b7d6475b\") " Feb 17 16:26:45 crc kubenswrapper[4874]: I0217 16:26:45.008319 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwhhd\" (UniqueName: \"kubernetes.io/projected/fb7283b1-4828-4a90-bdd2-6861b7d6475b-kube-api-access-fwhhd\") pod 
\"fb7283b1-4828-4a90-bdd2-6861b7d6475b\" (UID: \"fb7283b1-4828-4a90-bdd2-6861b7d6475b\") " Feb 17 16:26:45 crc kubenswrapper[4874]: I0217 16:26:45.015582 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb7283b1-4828-4a90-bdd2-6861b7d6475b-kube-api-access-fwhhd" (OuterVolumeSpecName: "kube-api-access-fwhhd") pod "fb7283b1-4828-4a90-bdd2-6861b7d6475b" (UID: "fb7283b1-4828-4a90-bdd2-6861b7d6475b"). InnerVolumeSpecName "kube-api-access-fwhhd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:26:45 crc kubenswrapper[4874]: I0217 16:26:45.019768 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb7283b1-4828-4a90-bdd2-6861b7d6475b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fb7283b1-4828-4a90-bdd2-6861b7d6475b" (UID: "fb7283b1-4828-4a90-bdd2-6861b7d6475b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:26:45 crc kubenswrapper[4874]: I0217 16:26:45.074378 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb7283b1-4828-4a90-bdd2-6861b7d6475b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb7283b1-4828-4a90-bdd2-6861b7d6475b" (UID: "fb7283b1-4828-4a90-bdd2-6861b7d6475b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:26:45 crc kubenswrapper[4874]: I0217 16:26:45.111277 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwhhd\" (UniqueName: \"kubernetes.io/projected/fb7283b1-4828-4a90-bdd2-6861b7d6475b-kube-api-access-fwhhd\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:45 crc kubenswrapper[4874]: I0217 16:26:45.111312 4874 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb7283b1-4828-4a90-bdd2-6861b7d6475b-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:45 crc kubenswrapper[4874]: I0217 16:26:45.111322 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb7283b1-4828-4a90-bdd2-6861b7d6475b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:45 crc kubenswrapper[4874]: I0217 16:26:45.158316 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb7283b1-4828-4a90-bdd2-6861b7d6475b-config-data" (OuterVolumeSpecName: "config-data") pod "fb7283b1-4828-4a90-bdd2-6861b7d6475b" (UID: "fb7283b1-4828-4a90-bdd2-6861b7d6475b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:26:45 crc kubenswrapper[4874]: I0217 16:26:45.205977 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-698669dc7f-2q88l" Feb 17 16:26:45 crc kubenswrapper[4874]: I0217 16:26:45.213837 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb7283b1-4828-4a90-bdd2-6861b7d6475b-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:45 crc kubenswrapper[4874]: I0217 16:26:45.314882 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xbdq\" (UniqueName: \"kubernetes.io/projected/99a67b9d-37fa-411f-bfbe-321623f5d8fb-kube-api-access-6xbdq\") pod \"99a67b9d-37fa-411f-bfbe-321623f5d8fb\" (UID: \"99a67b9d-37fa-411f-bfbe-321623f5d8fb\") " Feb 17 16:26:45 crc kubenswrapper[4874]: I0217 16:26:45.315287 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99a67b9d-37fa-411f-bfbe-321623f5d8fb-combined-ca-bundle\") pod \"99a67b9d-37fa-411f-bfbe-321623f5d8fb\" (UID: \"99a67b9d-37fa-411f-bfbe-321623f5d8fb\") " Feb 17 16:26:45 crc kubenswrapper[4874]: I0217 16:26:45.315494 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99a67b9d-37fa-411f-bfbe-321623f5d8fb-config-data\") pod \"99a67b9d-37fa-411f-bfbe-321623f5d8fb\" (UID: \"99a67b9d-37fa-411f-bfbe-321623f5d8fb\") " Feb 17 16:26:45 crc kubenswrapper[4874]: I0217 16:26:45.315583 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/99a67b9d-37fa-411f-bfbe-321623f5d8fb-config-data-custom\") pod \"99a67b9d-37fa-411f-bfbe-321623f5d8fb\" (UID: \"99a67b9d-37fa-411f-bfbe-321623f5d8fb\") " Feb 17 16:26:45 crc kubenswrapper[4874]: I0217 16:26:45.388842 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-698669dc7f-2q88l" 
event={"ID":"99a67b9d-37fa-411f-bfbe-321623f5d8fb","Type":"ContainerDied","Data":"05b93c917baeaf4523227abb92e498aa5096e13d9849b9787dcee2dbd114be23"} Feb 17 16:26:45 crc kubenswrapper[4874]: I0217 16:26:45.388898 4874 scope.go:117] "RemoveContainer" containerID="205a70de0672725bf7638520f4240e801449e97038002d0756b185dd39d41736" Feb 17 16:26:45 crc kubenswrapper[4874]: I0217 16:26:45.389011 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-698669dc7f-2q88l" Feb 17 16:26:45 crc kubenswrapper[4874]: I0217 16:26:45.395578 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-78b7864799-6ls5l" event={"ID":"fb7283b1-4828-4a90-bdd2-6861b7d6475b","Type":"ContainerDied","Data":"4c64f40ace814a1fae214d9fcdd51ae917e3756400e709f08eabd6305d732a6c"} Feb 17 16:26:45 crc kubenswrapper[4874]: I0217 16:26:45.395608 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-78b7864799-6ls5l" Feb 17 16:26:45 crc kubenswrapper[4874]: I0217 16:26:45.406770 4874 generic.go:334] "Generic (PLEG): container finished" podID="671e28c7-fd06-4eae-9d3a-c7c6e8624590" containerID="57cb2f53058159d12d1cfcce7d1c676ff8c0b7f616f92c1894b3d26ca21f3676" exitCode=0 Feb 17 16:26:45 crc kubenswrapper[4874]: I0217 16:26:45.406815 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"671e28c7-fd06-4eae-9d3a-c7c6e8624590","Type":"ContainerDied","Data":"57cb2f53058159d12d1cfcce7d1c676ff8c0b7f616f92c1894b3d26ca21f3676"} Feb 17 16:26:45 crc kubenswrapper[4874]: I0217 16:26:45.429754 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99a67b9d-37fa-411f-bfbe-321623f5d8fb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "99a67b9d-37fa-411f-bfbe-321623f5d8fb" (UID: "99a67b9d-37fa-411f-bfbe-321623f5d8fb"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:26:45 crc kubenswrapper[4874]: I0217 16:26:45.441118 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99a67b9d-37fa-411f-bfbe-321623f5d8fb-kube-api-access-6xbdq" (OuterVolumeSpecName: "kube-api-access-6xbdq") pod "99a67b9d-37fa-411f-bfbe-321623f5d8fb" (UID: "99a67b9d-37fa-411f-bfbe-321623f5d8fb"). InnerVolumeSpecName "kube-api-access-6xbdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:26:45 crc kubenswrapper[4874]: I0217 16:26:45.446577 4874 scope.go:117] "RemoveContainer" containerID="a012ef2d85a425cd08b332f4ed4e1a9bad275e69a3962cce70123a76ed8faf78" Feb 17 16:26:45 crc kubenswrapper[4874]: I0217 16:26:45.452638 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-78b7864799-6ls5l"] Feb 17 16:26:45 crc kubenswrapper[4874]: I0217 16:26:45.467036 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-78b7864799-6ls5l"] Feb 17 16:26:45 crc kubenswrapper[4874]: I0217 16:26:45.475335 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99a67b9d-37fa-411f-bfbe-321623f5d8fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "99a67b9d-37fa-411f-bfbe-321623f5d8fb" (UID: "99a67b9d-37fa-411f-bfbe-321623f5d8fb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:26:45 crc kubenswrapper[4874]: I0217 16:26:45.520691 4874 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/99a67b9d-37fa-411f-bfbe-321623f5d8fb-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:45 crc kubenswrapper[4874]: I0217 16:26:45.520727 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xbdq\" (UniqueName: \"kubernetes.io/projected/99a67b9d-37fa-411f-bfbe-321623f5d8fb-kube-api-access-6xbdq\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:45 crc kubenswrapper[4874]: I0217 16:26:45.520742 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99a67b9d-37fa-411f-bfbe-321623f5d8fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:45 crc kubenswrapper[4874]: I0217 16:26:45.685211 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99a67b9d-37fa-411f-bfbe-321623f5d8fb-config-data" (OuterVolumeSpecName: "config-data") pod "99a67b9d-37fa-411f-bfbe-321623f5d8fb" (UID: "99a67b9d-37fa-411f-bfbe-321623f5d8fb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:26:45 crc kubenswrapper[4874]: I0217 16:26:45.747619 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99a67b9d-37fa-411f-bfbe-321623f5d8fb-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:45 crc kubenswrapper[4874]: I0217 16:26:45.905050 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:26:45 crc kubenswrapper[4874]: I0217 16:26:45.919835 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-698669dc7f-2q88l"] Feb 17 16:26:45 crc kubenswrapper[4874]: I0217 16:26:45.945495 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-698669dc7f-2q88l"] Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.053260 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/671e28c7-fd06-4eae-9d3a-c7c6e8624590-config-data\") pod \"671e28c7-fd06-4eae-9d3a-c7c6e8624590\" (UID: \"671e28c7-fd06-4eae-9d3a-c7c6e8624590\") " Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.053300 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7v6b7\" (UniqueName: \"kubernetes.io/projected/671e28c7-fd06-4eae-9d3a-c7c6e8624590-kube-api-access-7v6b7\") pod \"671e28c7-fd06-4eae-9d3a-c7c6e8624590\" (UID: \"671e28c7-fd06-4eae-9d3a-c7c6e8624590\") " Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.053412 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/671e28c7-fd06-4eae-9d3a-c7c6e8624590-run-httpd\") pod \"671e28c7-fd06-4eae-9d3a-c7c6e8624590\" (UID: \"671e28c7-fd06-4eae-9d3a-c7c6e8624590\") " Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.053453 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/671e28c7-fd06-4eae-9d3a-c7c6e8624590-scripts\") pod \"671e28c7-fd06-4eae-9d3a-c7c6e8624590\" (UID: \"671e28c7-fd06-4eae-9d3a-c7c6e8624590\") " Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.053489 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/671e28c7-fd06-4eae-9d3a-c7c6e8624590-combined-ca-bundle\") pod \"671e28c7-fd06-4eae-9d3a-c7c6e8624590\" (UID: \"671e28c7-fd06-4eae-9d3a-c7c6e8624590\") " Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.053562 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/671e28c7-fd06-4eae-9d3a-c7c6e8624590-log-httpd\") pod \"671e28c7-fd06-4eae-9d3a-c7c6e8624590\" (UID: \"671e28c7-fd06-4eae-9d3a-c7c6e8624590\") " Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.053592 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/671e28c7-fd06-4eae-9d3a-c7c6e8624590-sg-core-conf-yaml\") pod \"671e28c7-fd06-4eae-9d3a-c7c6e8624590\" (UID: \"671e28c7-fd06-4eae-9d3a-c7c6e8624590\") " Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.053855 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/671e28c7-fd06-4eae-9d3a-c7c6e8624590-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "671e28c7-fd06-4eae-9d3a-c7c6e8624590" (UID: "671e28c7-fd06-4eae-9d3a-c7c6e8624590"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.054225 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/671e28c7-fd06-4eae-9d3a-c7c6e8624590-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "671e28c7-fd06-4eae-9d3a-c7c6e8624590" (UID: "671e28c7-fd06-4eae-9d3a-c7c6e8624590"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.054904 4874 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/671e28c7-fd06-4eae-9d3a-c7c6e8624590-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.054976 4874 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/671e28c7-fd06-4eae-9d3a-c7c6e8624590-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.058418 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/671e28c7-fd06-4eae-9d3a-c7c6e8624590-kube-api-access-7v6b7" (OuterVolumeSpecName: "kube-api-access-7v6b7") pod "671e28c7-fd06-4eae-9d3a-c7c6e8624590" (UID: "671e28c7-fd06-4eae-9d3a-c7c6e8624590"). InnerVolumeSpecName "kube-api-access-7v6b7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.063193 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/671e28c7-fd06-4eae-9d3a-c7c6e8624590-scripts" (OuterVolumeSpecName: "scripts") pod "671e28c7-fd06-4eae-9d3a-c7c6e8624590" (UID: "671e28c7-fd06-4eae-9d3a-c7c6e8624590"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.085200 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/671e28c7-fd06-4eae-9d3a-c7c6e8624590-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "671e28c7-fd06-4eae-9d3a-c7c6e8624590" (UID: "671e28c7-fd06-4eae-9d3a-c7c6e8624590"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.154196 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/671e28c7-fd06-4eae-9d3a-c7c6e8624590-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "671e28c7-fd06-4eae-9d3a-c7c6e8624590" (UID: "671e28c7-fd06-4eae-9d3a-c7c6e8624590"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.157586 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7v6b7\" (UniqueName: \"kubernetes.io/projected/671e28c7-fd06-4eae-9d3a-c7c6e8624590-kube-api-access-7v6b7\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.157616 4874 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/671e28c7-fd06-4eae-9d3a-c7c6e8624590-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.157627 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/671e28c7-fd06-4eae-9d3a-c7c6e8624590-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.157658 4874 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/671e28c7-fd06-4eae-9d3a-c7c6e8624590-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.202285 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/671e28c7-fd06-4eae-9d3a-c7c6e8624590-config-data" (OuterVolumeSpecName: "config-data") pod "671e28c7-fd06-4eae-9d3a-c7c6e8624590" (UID: "671e28c7-fd06-4eae-9d3a-c7c6e8624590"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.259808 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/671e28c7-fd06-4eae-9d3a-c7c6e8624590-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.420730 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"671e28c7-fd06-4eae-9d3a-c7c6e8624590","Type":"ContainerDied","Data":"83ca433d25659299bdce7976ea1c719a86fbd96d36ed344b50785d9166741bb5"} Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.420786 4874 scope.go:117] "RemoveContainer" containerID="ce2e9f72a9bdc6a1fed9d1bc887d649df2b12ff2bda4e8808e73c6f53e9187ca" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.420784 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.427129 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d3ed2c18-8df0-435d-a3b1-056be5a94c20","Type":"ContainerStarted","Data":"3d95d14f03cef6025f06a50158f8e5f6a5d37b81f1d31e493617f85715d3a3fa"} Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.427198 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="d3ed2c18-8df0-435d-a3b1-056be5a94c20" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://3d95d14f03cef6025f06a50158f8e5f6a5d37b81f1d31e493617f85715d3a3fa" gracePeriod=30 Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.433566 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f40c4b93-cd36-443a-b3c2-b2afa825606b","Type":"ContainerStarted","Data":"dd0fdb457bf3439f4d0e408555c5fa18ec566da9c02e191a160d8b75d39f9f5e"} Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 
16:26:46.433615 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f40c4b93-cd36-443a-b3c2-b2afa825606b","Type":"ContainerStarted","Data":"01dbfb380a663e912cea32acd3a71266d1db60c324f046f5b2ebe4b555c3ac67"} Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.438106 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3b3d858a-3158-4d4b-81d3-ef898bb8695f","Type":"ContainerStarted","Data":"5b8265f3dfc4c05582c971c09dfe48cb8d55d16bb6da0dcc4e839f7c516a2d35"} Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.441856 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d4e595bf-badf-492a-beb3-10d3bc5562b9","Type":"ContainerStarted","Data":"653e2afec506e3b134a77709d4ee431fa27e987c86df33ca5d25d936bcc8cb47"} Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.441902 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d4e595bf-badf-492a-beb3-10d3bc5562b9","Type":"ContainerStarted","Data":"051510110b6bb1fc7aff97b6783aa3a534b3ae217fb1c548f49af6df949fff6a"} Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.441986 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d4e595bf-badf-492a-beb3-10d3bc5562b9" containerName="nova-metadata-log" containerID="cri-o://051510110b6bb1fc7aff97b6783aa3a534b3ae217fb1c548f49af6df949fff6a" gracePeriod=30 Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.442001 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d4e595bf-badf-492a-beb3-10d3bc5562b9" containerName="nova-metadata-metadata" containerID="cri-o://653e2afec506e3b134a77709d4ee431fa27e987c86df33ca5d25d936bcc8cb47" gracePeriod=30 Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.450006 4874 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=5.212073906 podStartE2EDuration="9.449988287s" podCreationTimestamp="2026-02-17 16:26:37 +0000 UTC" firstStartedPulling="2026-02-17 16:26:40.494784641 +0000 UTC m=+1410.789173202" lastFinishedPulling="2026-02-17 16:26:44.732699022 +0000 UTC m=+1415.027087583" observedRunningTime="2026-02-17 16:26:46.447493246 +0000 UTC m=+1416.741881817" watchObservedRunningTime="2026-02-17 16:26:46.449988287 +0000 UTC m=+1416.744376848" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.450396 4874 scope.go:117] "RemoveContainer" containerID="b18a8f41331f61565e9bea7e65269d3784eb845674aef93e1ff2dd27681a8efd" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.479570 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=5.202256457 podStartE2EDuration="9.479546779s" podCreationTimestamp="2026-02-17 16:26:37 +0000 UTC" firstStartedPulling="2026-02-17 16:26:40.440891815 +0000 UTC m=+1410.735280376" lastFinishedPulling="2026-02-17 16:26:44.718182137 +0000 UTC m=+1415.012570698" observedRunningTime="2026-02-17 16:26:46.46117128 +0000 UTC m=+1416.755559861" watchObservedRunningTime="2026-02-17 16:26:46.479546779 +0000 UTC m=+1416.773935340" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.499819 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99a67b9d-37fa-411f-bfbe-321623f5d8fb" path="/var/lib/kubelet/pods/99a67b9d-37fa-411f-bfbe-321623f5d8fb/volumes" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.500873 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb7283b1-4828-4a90-bdd2-6861b7d6475b" path="/var/lib/kubelet/pods/fb7283b1-4828-4a90-bdd2-6861b7d6475b/volumes" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.501583 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.516362 
4874 scope.go:117] "RemoveContainer" containerID="57cb2f53058159d12d1cfcce7d1c676ff8c0b7f616f92c1894b3d26ca21f3676" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.526826 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.539441 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=5.274148522 podStartE2EDuration="9.539419901s" podCreationTimestamp="2026-02-17 16:26:37 +0000 UTC" firstStartedPulling="2026-02-17 16:26:40.525940372 +0000 UTC m=+1410.820328933" lastFinishedPulling="2026-02-17 16:26:44.791211751 +0000 UTC m=+1415.085600312" observedRunningTime="2026-02-17 16:26:46.512508234 +0000 UTC m=+1416.806896795" watchObservedRunningTime="2026-02-17 16:26:46.539419901 +0000 UTC m=+1416.833808462" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.551990 4874 scope.go:117] "RemoveContainer" containerID="aa59b39ce6d6c844a2b8a05bd4ddf5e948f6b421e69d90ec4e1c575aa1219f5d" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.577297 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:26:46 crc kubenswrapper[4874]: E0217 16:26:46.577919 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="671e28c7-fd06-4eae-9d3a-c7c6e8624590" containerName="ceilometer-notification-agent" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.577959 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="671e28c7-fd06-4eae-9d3a-c7c6e8624590" containerName="ceilometer-notification-agent" Feb 17 16:26:46 crc kubenswrapper[4874]: E0217 16:26:46.578098 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99a67b9d-37fa-411f-bfbe-321623f5d8fb" containerName="heat-cfnapi" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.578110 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="99a67b9d-37fa-411f-bfbe-321623f5d8fb" 
containerName="heat-cfnapi" Feb 17 16:26:46 crc kubenswrapper[4874]: E0217 16:26:46.578138 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="671e28c7-fd06-4eae-9d3a-c7c6e8624590" containerName="ceilometer-central-agent" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.578146 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="671e28c7-fd06-4eae-9d3a-c7c6e8624590" containerName="ceilometer-central-agent" Feb 17 16:26:46 crc kubenswrapper[4874]: E0217 16:26:46.578165 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb7283b1-4828-4a90-bdd2-6861b7d6475b" containerName="heat-api" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.578173 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb7283b1-4828-4a90-bdd2-6861b7d6475b" containerName="heat-api" Feb 17 16:26:46 crc kubenswrapper[4874]: E0217 16:26:46.578195 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="671e28c7-fd06-4eae-9d3a-c7c6e8624590" containerName="sg-core" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.578203 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="671e28c7-fd06-4eae-9d3a-c7c6e8624590" containerName="sg-core" Feb 17 16:26:46 crc kubenswrapper[4874]: E0217 16:26:46.578228 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="671e28c7-fd06-4eae-9d3a-c7c6e8624590" containerName="proxy-httpd" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.578236 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="671e28c7-fd06-4eae-9d3a-c7c6e8624590" containerName="proxy-httpd" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.578527 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="671e28c7-fd06-4eae-9d3a-c7c6e8624590" containerName="proxy-httpd" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.578550 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="671e28c7-fd06-4eae-9d3a-c7c6e8624590" 
containerName="ceilometer-notification-agent" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.578586 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="671e28c7-fd06-4eae-9d3a-c7c6e8624590" containerName="ceilometer-central-agent" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.578596 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb7283b1-4828-4a90-bdd2-6861b7d6475b" containerName="heat-api" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.578608 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="671e28c7-fd06-4eae-9d3a-c7c6e8624590" containerName="sg-core" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.578616 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="99a67b9d-37fa-411f-bfbe-321623f5d8fb" containerName="heat-cfnapi" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.581754 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.592752 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.593427 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.603507 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.607878 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=5.282449565 podStartE2EDuration="9.607862733s" podCreationTimestamp="2026-02-17 16:26:37 +0000 UTC" firstStartedPulling="2026-02-17 16:26:40.490775403 +0000 UTC m=+1410.785163984" lastFinishedPulling="2026-02-17 16:26:44.816188591 +0000 UTC m=+1415.110577152" observedRunningTime="2026-02-17 16:26:46.534888331 +0000 
UTC m=+1416.829276892" watchObservedRunningTime="2026-02-17 16:26:46.607862733 +0000 UTC m=+1416.902251304" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.674431 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzdqd\" (UniqueName: \"kubernetes.io/projected/fff55ae8-1688-4f99-859d-3497b3cf851f-kube-api-access-kzdqd\") pod \"ceilometer-0\" (UID: \"fff55ae8-1688-4f99-859d-3497b3cf851f\") " pod="openstack/ceilometer-0" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.674501 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fff55ae8-1688-4f99-859d-3497b3cf851f-config-data\") pod \"ceilometer-0\" (UID: \"fff55ae8-1688-4f99-859d-3497b3cf851f\") " pod="openstack/ceilometer-0" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.674531 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fff55ae8-1688-4f99-859d-3497b3cf851f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fff55ae8-1688-4f99-859d-3497b3cf851f\") " pod="openstack/ceilometer-0" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.674564 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fff55ae8-1688-4f99-859d-3497b3cf851f-log-httpd\") pod \"ceilometer-0\" (UID: \"fff55ae8-1688-4f99-859d-3497b3cf851f\") " pod="openstack/ceilometer-0" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.674589 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fff55ae8-1688-4f99-859d-3497b3cf851f-scripts\") pod \"ceilometer-0\" (UID: \"fff55ae8-1688-4f99-859d-3497b3cf851f\") " pod="openstack/ceilometer-0" Feb 17 16:26:46 crc 
kubenswrapper[4874]: I0217 16:26:46.674722 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fff55ae8-1688-4f99-859d-3497b3cf851f-run-httpd\") pod \"ceilometer-0\" (UID: \"fff55ae8-1688-4f99-859d-3497b3cf851f\") " pod="openstack/ceilometer-0" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.674768 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fff55ae8-1688-4f99-859d-3497b3cf851f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fff55ae8-1688-4f99-859d-3497b3cf851f\") " pod="openstack/ceilometer-0" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.778380 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzdqd\" (UniqueName: \"kubernetes.io/projected/fff55ae8-1688-4f99-859d-3497b3cf851f-kube-api-access-kzdqd\") pod \"ceilometer-0\" (UID: \"fff55ae8-1688-4f99-859d-3497b3cf851f\") " pod="openstack/ceilometer-0" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.779224 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fff55ae8-1688-4f99-859d-3497b3cf851f-config-data\") pod \"ceilometer-0\" (UID: \"fff55ae8-1688-4f99-859d-3497b3cf851f\") " pod="openstack/ceilometer-0" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.779277 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fff55ae8-1688-4f99-859d-3497b3cf851f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fff55ae8-1688-4f99-859d-3497b3cf851f\") " pod="openstack/ceilometer-0" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.780028 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/fff55ae8-1688-4f99-859d-3497b3cf851f-log-httpd\") pod \"ceilometer-0\" (UID: \"fff55ae8-1688-4f99-859d-3497b3cf851f\") " pod="openstack/ceilometer-0" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.795285 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fff55ae8-1688-4f99-859d-3497b3cf851f-scripts\") pod \"ceilometer-0\" (UID: \"fff55ae8-1688-4f99-859d-3497b3cf851f\") " pod="openstack/ceilometer-0" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.795690 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fff55ae8-1688-4f99-859d-3497b3cf851f-run-httpd\") pod \"ceilometer-0\" (UID: \"fff55ae8-1688-4f99-859d-3497b3cf851f\") " pod="openstack/ceilometer-0" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.795770 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fff55ae8-1688-4f99-859d-3497b3cf851f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fff55ae8-1688-4f99-859d-3497b3cf851f\") " pod="openstack/ceilometer-0" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.801512 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fff55ae8-1688-4f99-859d-3497b3cf851f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fff55ae8-1688-4f99-859d-3497b3cf851f\") " pod="openstack/ceilometer-0" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.801692 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fff55ae8-1688-4f99-859d-3497b3cf851f-config-data\") pod \"ceilometer-0\" (UID: \"fff55ae8-1688-4f99-859d-3497b3cf851f\") " pod="openstack/ceilometer-0" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.801918 4874 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fff55ae8-1688-4f99-859d-3497b3cf851f-run-httpd\") pod \"ceilometer-0\" (UID: \"fff55ae8-1688-4f99-859d-3497b3cf851f\") " pod="openstack/ceilometer-0" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.802309 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fff55ae8-1688-4f99-859d-3497b3cf851f-log-httpd\") pod \"ceilometer-0\" (UID: \"fff55ae8-1688-4f99-859d-3497b3cf851f\") " pod="openstack/ceilometer-0" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.803361 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fff55ae8-1688-4f99-859d-3497b3cf851f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fff55ae8-1688-4f99-859d-3497b3cf851f\") " pod="openstack/ceilometer-0" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.806316 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzdqd\" (UniqueName: \"kubernetes.io/projected/fff55ae8-1688-4f99-859d-3497b3cf851f-kube-api-access-kzdqd\") pod \"ceilometer-0\" (UID: \"fff55ae8-1688-4f99-859d-3497b3cf851f\") " pod="openstack/ceilometer-0" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.827994 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fff55ae8-1688-4f99-859d-3497b3cf851f-scripts\") pod \"ceilometer-0\" (UID: \"fff55ae8-1688-4f99-859d-3497b3cf851f\") " pod="openstack/ceilometer-0" Feb 17 16:26:46 crc kubenswrapper[4874]: I0217 16:26:46.996916 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.421421 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.465232 4874 generic.go:334] "Generic (PLEG): container finished" podID="d4e595bf-badf-492a-beb3-10d3bc5562b9" containerID="653e2afec506e3b134a77709d4ee431fa27e987c86df33ca5d25d936bcc8cb47" exitCode=0 Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.465259 4874 generic.go:334] "Generic (PLEG): container finished" podID="d4e595bf-badf-492a-beb3-10d3bc5562b9" containerID="051510110b6bb1fc7aff97b6783aa3a534b3ae217fb1c548f49af6df949fff6a" exitCode=143 Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.465296 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.465307 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d4e595bf-badf-492a-beb3-10d3bc5562b9","Type":"ContainerDied","Data":"653e2afec506e3b134a77709d4ee431fa27e987c86df33ca5d25d936bcc8cb47"} Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.465335 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d4e595bf-badf-492a-beb3-10d3bc5562b9","Type":"ContainerDied","Data":"051510110b6bb1fc7aff97b6783aa3a534b3ae217fb1c548f49af6df949fff6a"} Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.465347 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d4e595bf-badf-492a-beb3-10d3bc5562b9","Type":"ContainerDied","Data":"8e62a7bde7c0aa9fd16b8ef00e6499e90d5bfe4362efa66c62fc55e377e10a68"} Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.465368 4874 scope.go:117] "RemoveContainer" containerID="653e2afec506e3b134a77709d4ee431fa27e987c86df33ca5d25d936bcc8cb47" Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.489266 4874 scope.go:117] "RemoveContainer" 
containerID="051510110b6bb1fc7aff97b6783aa3a534b3ae217fb1c548f49af6df949fff6a" Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.509036 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4e595bf-badf-492a-beb3-10d3bc5562b9-logs\") pod \"d4e595bf-badf-492a-beb3-10d3bc5562b9\" (UID: \"d4e595bf-badf-492a-beb3-10d3bc5562b9\") " Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.509095 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bbcm\" (UniqueName: \"kubernetes.io/projected/d4e595bf-badf-492a-beb3-10d3bc5562b9-kube-api-access-4bbcm\") pod \"d4e595bf-badf-492a-beb3-10d3bc5562b9\" (UID: \"d4e595bf-badf-492a-beb3-10d3bc5562b9\") " Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.511326 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4e595bf-badf-492a-beb3-10d3bc5562b9-combined-ca-bundle\") pod \"d4e595bf-badf-492a-beb3-10d3bc5562b9\" (UID: \"d4e595bf-badf-492a-beb3-10d3bc5562b9\") " Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.511405 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4e595bf-badf-492a-beb3-10d3bc5562b9-config-data\") pod \"d4e595bf-badf-492a-beb3-10d3bc5562b9\" (UID: \"d4e595bf-badf-492a-beb3-10d3bc5562b9\") " Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.511793 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4e595bf-badf-492a-beb3-10d3bc5562b9-logs" (OuterVolumeSpecName: "logs") pod "d4e595bf-badf-492a-beb3-10d3bc5562b9" (UID: "d4e595bf-badf-492a-beb3-10d3bc5562b9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.512635 4874 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4e595bf-badf-492a-beb3-10d3bc5562b9-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.515651 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4e595bf-badf-492a-beb3-10d3bc5562b9-kube-api-access-4bbcm" (OuterVolumeSpecName: "kube-api-access-4bbcm") pod "d4e595bf-badf-492a-beb3-10d3bc5562b9" (UID: "d4e595bf-badf-492a-beb3-10d3bc5562b9"). InnerVolumeSpecName "kube-api-access-4bbcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.525753 4874 scope.go:117] "RemoveContainer" containerID="653e2afec506e3b134a77709d4ee431fa27e987c86df33ca5d25d936bcc8cb47" Feb 17 16:26:47 crc kubenswrapper[4874]: E0217 16:26:47.526273 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"653e2afec506e3b134a77709d4ee431fa27e987c86df33ca5d25d936bcc8cb47\": container with ID starting with 653e2afec506e3b134a77709d4ee431fa27e987c86df33ca5d25d936bcc8cb47 not found: ID does not exist" containerID="653e2afec506e3b134a77709d4ee431fa27e987c86df33ca5d25d936bcc8cb47" Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.526323 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"653e2afec506e3b134a77709d4ee431fa27e987c86df33ca5d25d936bcc8cb47"} err="failed to get container status \"653e2afec506e3b134a77709d4ee431fa27e987c86df33ca5d25d936bcc8cb47\": rpc error: code = NotFound desc = could not find container \"653e2afec506e3b134a77709d4ee431fa27e987c86df33ca5d25d936bcc8cb47\": container with ID starting with 653e2afec506e3b134a77709d4ee431fa27e987c86df33ca5d25d936bcc8cb47 not found: ID does not 
exist" Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.526343 4874 scope.go:117] "RemoveContainer" containerID="051510110b6bb1fc7aff97b6783aa3a534b3ae217fb1c548f49af6df949fff6a" Feb 17 16:26:47 crc kubenswrapper[4874]: E0217 16:26:47.526593 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"051510110b6bb1fc7aff97b6783aa3a534b3ae217fb1c548f49af6df949fff6a\": container with ID starting with 051510110b6bb1fc7aff97b6783aa3a534b3ae217fb1c548f49af6df949fff6a not found: ID does not exist" containerID="051510110b6bb1fc7aff97b6783aa3a534b3ae217fb1c548f49af6df949fff6a" Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.526634 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"051510110b6bb1fc7aff97b6783aa3a534b3ae217fb1c548f49af6df949fff6a"} err="failed to get container status \"051510110b6bb1fc7aff97b6783aa3a534b3ae217fb1c548f49af6df949fff6a\": rpc error: code = NotFound desc = could not find container \"051510110b6bb1fc7aff97b6783aa3a534b3ae217fb1c548f49af6df949fff6a\": container with ID starting with 051510110b6bb1fc7aff97b6783aa3a534b3ae217fb1c548f49af6df949fff6a not found: ID does not exist" Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.526648 4874 scope.go:117] "RemoveContainer" containerID="653e2afec506e3b134a77709d4ee431fa27e987c86df33ca5d25d936bcc8cb47" Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.526916 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"653e2afec506e3b134a77709d4ee431fa27e987c86df33ca5d25d936bcc8cb47"} err="failed to get container status \"653e2afec506e3b134a77709d4ee431fa27e987c86df33ca5d25d936bcc8cb47\": rpc error: code = NotFound desc = could not find container \"653e2afec506e3b134a77709d4ee431fa27e987c86df33ca5d25d936bcc8cb47\": container with ID starting with 653e2afec506e3b134a77709d4ee431fa27e987c86df33ca5d25d936bcc8cb47 not found: ID 
does not exist" Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.526947 4874 scope.go:117] "RemoveContainer" containerID="051510110b6bb1fc7aff97b6783aa3a534b3ae217fb1c548f49af6df949fff6a" Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.527233 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"051510110b6bb1fc7aff97b6783aa3a534b3ae217fb1c548f49af6df949fff6a"} err="failed to get container status \"051510110b6bb1fc7aff97b6783aa3a534b3ae217fb1c548f49af6df949fff6a\": rpc error: code = NotFound desc = could not find container \"051510110b6bb1fc7aff97b6783aa3a534b3ae217fb1c548f49af6df949fff6a\": container with ID starting with 051510110b6bb1fc7aff97b6783aa3a534b3ae217fb1c548f49af6df949fff6a not found: ID does not exist" Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.554125 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:26:47 crc kubenswrapper[4874]: W0217 16:26:47.566748 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfff55ae8_1688_4f99_859d_3497b3cf851f.slice/crio-197b719d94eb90e86def2d816e1f77b6afd6d3e15f8ba0ada39341fcd565e03e WatchSource:0}: Error finding container 197b719d94eb90e86def2d816e1f77b6afd6d3e15f8ba0ada39341fcd565e03e: Status 404 returned error can't find the container with id 197b719d94eb90e86def2d816e1f77b6afd6d3e15f8ba0ada39341fcd565e03e Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.569947 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4e595bf-badf-492a-beb3-10d3bc5562b9-config-data" (OuterVolumeSpecName: "config-data") pod "d4e595bf-badf-492a-beb3-10d3bc5562b9" (UID: "d4e595bf-badf-492a-beb3-10d3bc5562b9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.573821 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4e595bf-badf-492a-beb3-10d3bc5562b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4e595bf-badf-492a-beb3-10d3bc5562b9" (UID: "d4e595bf-badf-492a-beb3-10d3bc5562b9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.614500 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bbcm\" (UniqueName: \"kubernetes.io/projected/d4e595bf-badf-492a-beb3-10d3bc5562b9-kube-api-access-4bbcm\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.614529 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4e595bf-badf-492a-beb3-10d3bc5562b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.614540 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4e595bf-badf-492a-beb3-10d3bc5562b9-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.798604 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.808743 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.822373 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:26:47 crc kubenswrapper[4874]: E0217 16:26:47.822959 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4e595bf-badf-492a-beb3-10d3bc5562b9" containerName="nova-metadata-metadata" Feb 17 16:26:47 crc 
kubenswrapper[4874]: I0217 16:26:47.822984 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4e595bf-badf-492a-beb3-10d3bc5562b9" containerName="nova-metadata-metadata" Feb 17 16:26:47 crc kubenswrapper[4874]: E0217 16:26:47.823006 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4e595bf-badf-492a-beb3-10d3bc5562b9" containerName="nova-metadata-log" Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.823030 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4e595bf-badf-492a-beb3-10d3bc5562b9" containerName="nova-metadata-log" Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.823303 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4e595bf-badf-492a-beb3-10d3bc5562b9" containerName="nova-metadata-metadata" Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.823322 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4e595bf-badf-492a-beb3-10d3bc5562b9" containerName="nova-metadata-log" Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.825936 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.830705 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.830798 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.844667 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.921134 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8c2c683-b91a-4212-8857-4650c6d78bd3-config-data\") pod \"nova-metadata-0\" (UID: \"d8c2c683-b91a-4212-8857-4650c6d78bd3\") " pod="openstack/nova-metadata-0" Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.921194 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8c2c683-b91a-4212-8857-4650c6d78bd3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d8c2c683-b91a-4212-8857-4650c6d78bd3\") " pod="openstack/nova-metadata-0" Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.921268 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8c2c683-b91a-4212-8857-4650c6d78bd3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d8c2c683-b91a-4212-8857-4650c6d78bd3\") " pod="openstack/nova-metadata-0" Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.921518 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv6pk\" (UniqueName: 
\"kubernetes.io/projected/d8c2c683-b91a-4212-8857-4650c6d78bd3-kube-api-access-lv6pk\") pod \"nova-metadata-0\" (UID: \"d8c2c683-b91a-4212-8857-4650c6d78bd3\") " pod="openstack/nova-metadata-0" Feb 17 16:26:47 crc kubenswrapper[4874]: I0217 16:26:47.921574 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8c2c683-b91a-4212-8857-4650c6d78bd3-logs\") pod \"nova-metadata-0\" (UID: \"d8c2c683-b91a-4212-8857-4650c6d78bd3\") " pod="openstack/nova-metadata-0" Feb 17 16:26:48 crc kubenswrapper[4874]: I0217 16:26:48.023737 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8c2c683-b91a-4212-8857-4650c6d78bd3-config-data\") pod \"nova-metadata-0\" (UID: \"d8c2c683-b91a-4212-8857-4650c6d78bd3\") " pod="openstack/nova-metadata-0" Feb 17 16:26:48 crc kubenswrapper[4874]: I0217 16:26:48.023785 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8c2c683-b91a-4212-8857-4650c6d78bd3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d8c2c683-b91a-4212-8857-4650c6d78bd3\") " pod="openstack/nova-metadata-0" Feb 17 16:26:48 crc kubenswrapper[4874]: I0217 16:26:48.023825 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8c2c683-b91a-4212-8857-4650c6d78bd3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d8c2c683-b91a-4212-8857-4650c6d78bd3\") " pod="openstack/nova-metadata-0" Feb 17 16:26:48 crc kubenswrapper[4874]: I0217 16:26:48.023937 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv6pk\" (UniqueName: \"kubernetes.io/projected/d8c2c683-b91a-4212-8857-4650c6d78bd3-kube-api-access-lv6pk\") pod \"nova-metadata-0\" (UID: \"d8c2c683-b91a-4212-8857-4650c6d78bd3\") 
" pod="openstack/nova-metadata-0" Feb 17 16:26:48 crc kubenswrapper[4874]: I0217 16:26:48.023966 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8c2c683-b91a-4212-8857-4650c6d78bd3-logs\") pod \"nova-metadata-0\" (UID: \"d8c2c683-b91a-4212-8857-4650c6d78bd3\") " pod="openstack/nova-metadata-0" Feb 17 16:26:48 crc kubenswrapper[4874]: I0217 16:26:48.024440 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8c2c683-b91a-4212-8857-4650c6d78bd3-logs\") pod \"nova-metadata-0\" (UID: \"d8c2c683-b91a-4212-8857-4650c6d78bd3\") " pod="openstack/nova-metadata-0" Feb 17 16:26:48 crc kubenswrapper[4874]: I0217 16:26:48.028017 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8c2c683-b91a-4212-8857-4650c6d78bd3-config-data\") pod \"nova-metadata-0\" (UID: \"d8c2c683-b91a-4212-8857-4650c6d78bd3\") " pod="openstack/nova-metadata-0" Feb 17 16:26:48 crc kubenswrapper[4874]: I0217 16:26:48.037707 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8c2c683-b91a-4212-8857-4650c6d78bd3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d8c2c683-b91a-4212-8857-4650c6d78bd3\") " pod="openstack/nova-metadata-0" Feb 17 16:26:48 crc kubenswrapper[4874]: I0217 16:26:48.037861 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8c2c683-b91a-4212-8857-4650c6d78bd3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d8c2c683-b91a-4212-8857-4650c6d78bd3\") " pod="openstack/nova-metadata-0" Feb 17 16:26:48 crc kubenswrapper[4874]: I0217 16:26:48.041015 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv6pk\" (UniqueName: 
\"kubernetes.io/projected/d8c2c683-b91a-4212-8857-4650c6d78bd3-kube-api-access-lv6pk\") pod \"nova-metadata-0\" (UID: \"d8c2c683-b91a-4212-8857-4650c6d78bd3\") " pod="openstack/nova-metadata-0" Feb 17 16:26:48 crc kubenswrapper[4874]: I0217 16:26:48.071356 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 17 16:26:48 crc kubenswrapper[4874]: I0217 16:26:48.071413 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 17 16:26:48 crc kubenswrapper[4874]: I0217 16:26:48.102301 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 17 16:26:48 crc kubenswrapper[4874]: I0217 16:26:48.149437 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-568d7fd7cf-pd7kk" Feb 17 16:26:48 crc kubenswrapper[4874]: I0217 16:26:48.157417 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:26:48 crc kubenswrapper[4874]: I0217 16:26:48.172385 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 16:26:48 crc kubenswrapper[4874]: I0217 16:26:48.172461 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 16:26:48 crc kubenswrapper[4874]: I0217 16:26:48.244986 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-wjgkl"] Feb 17 16:26:48 crc kubenswrapper[4874]: I0217 16:26:48.245224 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688b9f5b49-wjgkl" podUID="d3283562-95fd-4595-932e-cf95b3bdd769" containerName="dnsmasq-dns" containerID="cri-o://03dfd50f100bf00d0cba04e3c2f0676d778b83aca8ce2b98420b133b2a336636" gracePeriod=10 Feb 17 16:26:48 crc kubenswrapper[4874]: I0217 16:26:48.359184 4874 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:26:48 crc kubenswrapper[4874]: E0217 16:26:48.403451 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4327f121_2ddc_4367_9055_17c7fe4d855e.slice/crio-4c01c80ff75f545c7ad24c26f26d427f62b8c88c7db6a4f7544ae7b749530ed3\": RecentStats: unable to find data in memory cache]" Feb 17 16:26:48 crc kubenswrapper[4874]: E0217 16:26:48.404127 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4327f121_2ddc_4367_9055_17c7fe4d855e.slice/crio-4c01c80ff75f545c7ad24c26f26d427f62b8c88c7db6a4f7544ae7b749530ed3\": RecentStats: unable to find data in memory cache]" Feb 17 16:26:48 crc kubenswrapper[4874]: I0217 16:26:48.445914 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-688b9f5b49-wjgkl" podUID="d3283562-95fd-4595-932e-cf95b3bdd769" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.212:5353: connect: connection refused" Feb 17 16:26:48 crc kubenswrapper[4874]: I0217 16:26:48.572669 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="671e28c7-fd06-4eae-9d3a-c7c6e8624590" path="/var/lib/kubelet/pods/671e28c7-fd06-4eae-9d3a-c7c6e8624590/volumes" Feb 17 16:26:48 crc kubenswrapper[4874]: I0217 16:26:48.573825 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4e595bf-badf-492a-beb3-10d3bc5562b9" path="/var/lib/kubelet/pods/d4e595bf-badf-492a-beb3-10d3bc5562b9/volumes" Feb 17 16:26:48 crc kubenswrapper[4874]: I0217 16:26:48.583186 4874 generic.go:334] "Generic (PLEG): container finished" podID="d3283562-95fd-4595-932e-cf95b3bdd769" containerID="03dfd50f100bf00d0cba04e3c2f0676d778b83aca8ce2b98420b133b2a336636" exitCode=0 Feb 17 
16:26:48 crc kubenswrapper[4874]: I0217 16:26:48.583250 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-wjgkl" event={"ID":"d3283562-95fd-4595-932e-cf95b3bdd769","Type":"ContainerDied","Data":"03dfd50f100bf00d0cba04e3c2f0676d778b83aca8ce2b98420b133b2a336636"} Feb 17 16:26:48 crc kubenswrapper[4874]: I0217 16:26:48.589167 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fff55ae8-1688-4f99-859d-3497b3cf851f","Type":"ContainerStarted","Data":"f4b43c49a73976e75dee12a7473d76f28c3e6250fbda3df929483354029c5ffc"} Feb 17 16:26:48 crc kubenswrapper[4874]: I0217 16:26:48.589239 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fff55ae8-1688-4f99-859d-3497b3cf851f","Type":"ContainerStarted","Data":"197b719d94eb90e86def2d816e1f77b6afd6d3e15f8ba0ada39341fcd565e03e"} Feb 17 16:26:48 crc kubenswrapper[4874]: I0217 16:26:48.721728 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 17 16:26:48 crc kubenswrapper[4874]: I0217 16:26:48.950623 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-2wzww"] Feb 17 16:26:48 crc kubenswrapper[4874]: I0217 16:26:48.953111 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-2wzww" Feb 17 16:26:48 crc kubenswrapper[4874]: I0217 16:26:48.981717 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-2wzww"] Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.054916 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-383e-account-create-update-f4p7m"] Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.057212 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-383e-account-create-update-f4p7m" Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.062435 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.066600 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-383e-account-create-update-f4p7m"] Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.102380 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc4fk\" (UniqueName: \"kubernetes.io/projected/181fc32a-cc08-4e8c-8f05-b532e505f0df-kube-api-access-gc4fk\") pod \"aodh-db-create-2wzww\" (UID: \"181fc32a-cc08-4e8c-8f05-b532e505f0df\") " pod="openstack/aodh-db-create-2wzww" Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.102471 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20df2a95-c9b4-4cee-95a5-9a7481aed963-operator-scripts\") pod \"aodh-383e-account-create-update-f4p7m\" (UID: \"20df2a95-c9b4-4cee-95a5-9a7481aed963\") " pod="openstack/aodh-383e-account-create-update-f4p7m" Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.102744 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/181fc32a-cc08-4e8c-8f05-b532e505f0df-operator-scripts\") pod \"aodh-db-create-2wzww\" (UID: \"181fc32a-cc08-4e8c-8f05-b532e505f0df\") " pod="openstack/aodh-db-create-2wzww" Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.102810 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg54w\" (UniqueName: \"kubernetes.io/projected/20df2a95-c9b4-4cee-95a5-9a7481aed963-kube-api-access-fg54w\") pod \"aodh-383e-account-create-update-f4p7m\" (UID: 
\"20df2a95-c9b4-4cee-95a5-9a7481aed963\") " pod="openstack/aodh-383e-account-create-update-f4p7m" Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.204631 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc4fk\" (UniqueName: \"kubernetes.io/projected/181fc32a-cc08-4e8c-8f05-b532e505f0df-kube-api-access-gc4fk\") pod \"aodh-db-create-2wzww\" (UID: \"181fc32a-cc08-4e8c-8f05-b532e505f0df\") " pod="openstack/aodh-db-create-2wzww" Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.204687 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20df2a95-c9b4-4cee-95a5-9a7481aed963-operator-scripts\") pod \"aodh-383e-account-create-update-f4p7m\" (UID: \"20df2a95-c9b4-4cee-95a5-9a7481aed963\") " pod="openstack/aodh-383e-account-create-update-f4p7m" Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.204820 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/181fc32a-cc08-4e8c-8f05-b532e505f0df-operator-scripts\") pod \"aodh-db-create-2wzww\" (UID: \"181fc32a-cc08-4e8c-8f05-b532e505f0df\") " pod="openstack/aodh-db-create-2wzww" Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.204859 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg54w\" (UniqueName: \"kubernetes.io/projected/20df2a95-c9b4-4cee-95a5-9a7481aed963-kube-api-access-fg54w\") pod \"aodh-383e-account-create-update-f4p7m\" (UID: \"20df2a95-c9b4-4cee-95a5-9a7481aed963\") " pod="openstack/aodh-383e-account-create-update-f4p7m" Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.205951 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20df2a95-c9b4-4cee-95a5-9a7481aed963-operator-scripts\") pod \"aodh-383e-account-create-update-f4p7m\" (UID: 
\"20df2a95-c9b4-4cee-95a5-9a7481aed963\") " pod="openstack/aodh-383e-account-create-update-f4p7m" Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.209799 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/181fc32a-cc08-4e8c-8f05-b532e505f0df-operator-scripts\") pod \"aodh-db-create-2wzww\" (UID: \"181fc32a-cc08-4e8c-8f05-b532e505f0df\") " pod="openstack/aodh-db-create-2wzww" Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.234704 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc4fk\" (UniqueName: \"kubernetes.io/projected/181fc32a-cc08-4e8c-8f05-b532e505f0df-kube-api-access-gc4fk\") pod \"aodh-db-create-2wzww\" (UID: \"181fc32a-cc08-4e8c-8f05-b532e505f0df\") " pod="openstack/aodh-db-create-2wzww" Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.237651 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg54w\" (UniqueName: \"kubernetes.io/projected/20df2a95-c9b4-4cee-95a5-9a7481aed963-kube-api-access-fg54w\") pod \"aodh-383e-account-create-update-f4p7m\" (UID: \"20df2a95-c9b4-4cee-95a5-9a7481aed963\") " pod="openstack/aodh-383e-account-create-update-f4p7m" Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.284553 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f40c4b93-cd36-443a-b3c2-b2afa825606b" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.235:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.285274 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f40c4b93-cd36-443a-b3c2-b2afa825606b" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.235:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 16:26:49 
crc kubenswrapper[4874]: I0217 16:26:49.304475 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.318444 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-2wzww" Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.341371 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-wjgkl" Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.369284 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-383e-account-create-update-f4p7m" Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.515708 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3283562-95fd-4595-932e-cf95b3bdd769-dns-svc\") pod \"d3283562-95fd-4595-932e-cf95b3bdd769\" (UID: \"d3283562-95fd-4595-932e-cf95b3bdd769\") " Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.517343 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh5cg\" (UniqueName: \"kubernetes.io/projected/d3283562-95fd-4595-932e-cf95b3bdd769-kube-api-access-wh5cg\") pod \"d3283562-95fd-4595-932e-cf95b3bdd769\" (UID: \"d3283562-95fd-4595-932e-cf95b3bdd769\") " Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.517471 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3283562-95fd-4595-932e-cf95b3bdd769-config\") pod \"d3283562-95fd-4595-932e-cf95b3bdd769\" (UID: \"d3283562-95fd-4595-932e-cf95b3bdd769\") " Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.517604 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3283562-95fd-4595-932e-cf95b3bdd769-ovsdbserver-nb\") 
pod \"d3283562-95fd-4595-932e-cf95b3bdd769\" (UID: \"d3283562-95fd-4595-932e-cf95b3bdd769\") " Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.517828 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3283562-95fd-4595-932e-cf95b3bdd769-dns-swift-storage-0\") pod \"d3283562-95fd-4595-932e-cf95b3bdd769\" (UID: \"d3283562-95fd-4595-932e-cf95b3bdd769\") " Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.518067 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3283562-95fd-4595-932e-cf95b3bdd769-ovsdbserver-sb\") pod \"d3283562-95fd-4595-932e-cf95b3bdd769\" (UID: \"d3283562-95fd-4595-932e-cf95b3bdd769\") " Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.545040 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3283562-95fd-4595-932e-cf95b3bdd769-kube-api-access-wh5cg" (OuterVolumeSpecName: "kube-api-access-wh5cg") pod "d3283562-95fd-4595-932e-cf95b3bdd769" (UID: "d3283562-95fd-4595-932e-cf95b3bdd769"). InnerVolumeSpecName "kube-api-access-wh5cg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.622759 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh5cg\" (UniqueName: \"kubernetes.io/projected/d3283562-95fd-4595-932e-cf95b3bdd769-kube-api-access-wh5cg\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.689357 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fff55ae8-1688-4f99-859d-3497b3cf851f","Type":"ContainerStarted","Data":"8c98912d53881a96c88f3c2602c1529a6f3d9332e520ce8822352b79a403929b"} Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.694362 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d8c2c683-b91a-4212-8857-4650c6d78bd3","Type":"ContainerStarted","Data":"f23e7bf0110bafa997c2bc788a03491fd2a622fb7d6fa674c8c02edaa52eb70e"} Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.701508 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3283562-95fd-4595-932e-cf95b3bdd769-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d3283562-95fd-4595-932e-cf95b3bdd769" (UID: "d3283562-95fd-4595-932e-cf95b3bdd769"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.719500 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3283562-95fd-4595-932e-cf95b3bdd769-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d3283562-95fd-4595-932e-cf95b3bdd769" (UID: "d3283562-95fd-4595-932e-cf95b3bdd769"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.721542 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-wjgkl"
Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.722159 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-wjgkl" event={"ID":"d3283562-95fd-4595-932e-cf95b3bdd769","Type":"ContainerDied","Data":"e7f62fc9b213c35ec9ec383e8579bb5a580ec9d174863ac67f98dccf21d2344f"}
Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.722217 4874 scope.go:117] "RemoveContainer" containerID="03dfd50f100bf00d0cba04e3c2f0676d778b83aca8ce2b98420b133b2a336636"
Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.723792 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3283562-95fd-4595-932e-cf95b3bdd769-config" (OuterVolumeSpecName: "config") pod "d3283562-95fd-4595-932e-cf95b3bdd769" (UID: "d3283562-95fd-4595-932e-cf95b3bdd769"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.728292 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3283562-95fd-4595-932e-cf95b3bdd769-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d3283562-95fd-4595-932e-cf95b3bdd769" (UID: "d3283562-95fd-4595-932e-cf95b3bdd769"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.731162 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3283562-95fd-4595-932e-cf95b3bdd769-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.731190 4874 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3283562-95fd-4595-932e-cf95b3bdd769-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.731201 4874 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3283562-95fd-4595-932e-cf95b3bdd769-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.731211 4874 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3283562-95fd-4595-932e-cf95b3bdd769-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.735134 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3283562-95fd-4595-932e-cf95b3bdd769-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d3283562-95fd-4595-932e-cf95b3bdd769" (UID: "d3283562-95fd-4595-932e-cf95b3bdd769"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.835627 4874 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3283562-95fd-4595-932e-cf95b3bdd769-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:49 crc kubenswrapper[4874]: I0217 16:26:49.906068 4874 scope.go:117] "RemoveContainer" containerID="cc446b11c68caa15068816519ca2b04d3ea13c42ef9fda2ec3706340878daca5"
Feb 17 16:26:50 crc kubenswrapper[4874]: I0217 16:26:50.073002 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-wjgkl"]
Feb 17 16:26:50 crc kubenswrapper[4874]: I0217 16:26:50.090986 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-wjgkl"]
Feb 17 16:26:50 crc kubenswrapper[4874]: I0217 16:26:50.188420 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-2wzww"]
Feb 17 16:26:50 crc kubenswrapper[4874]: I0217 16:26:50.455605 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-383e-account-create-update-f4p7m"]
Feb 17 16:26:50 crc kubenswrapper[4874]: W0217 16:26:50.466061 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20df2a95_c9b4_4cee_95a5_9a7481aed963.slice/crio-91afad6d18a4f66db8df1f7a337aac4d06bebfa4280833ec0d21e97d316a8ef9 WatchSource:0}: Error finding container 91afad6d18a4f66db8df1f7a337aac4d06bebfa4280833ec0d21e97d316a8ef9: Status 404 returned error can't find the container with id 91afad6d18a4f66db8df1f7a337aac4d06bebfa4280833ec0d21e97d316a8ef9
Feb 17 16:26:50 crc kubenswrapper[4874]: I0217 16:26:50.476049 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3283562-95fd-4595-932e-cf95b3bdd769" path="/var/lib/kubelet/pods/d3283562-95fd-4595-932e-cf95b3bdd769/volumes"
Feb 17 16:26:50 crc kubenswrapper[4874]: I0217 16:26:50.731868 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-383e-account-create-update-f4p7m" event={"ID":"20df2a95-c9b4-4cee-95a5-9a7481aed963","Type":"ContainerStarted","Data":"68fa4eb6c5eab571b0b55fe728595aaa047aaf4964c0ebf1a014255cd9bbc17a"}
Feb 17 16:26:50 crc kubenswrapper[4874]: I0217 16:26:50.732229 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-383e-account-create-update-f4p7m" event={"ID":"20df2a95-c9b4-4cee-95a5-9a7481aed963","Type":"ContainerStarted","Data":"91afad6d18a4f66db8df1f7a337aac4d06bebfa4280833ec0d21e97d316a8ef9"}
Feb 17 16:26:50 crc kubenswrapper[4874]: I0217 16:26:50.733921 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-2wzww" event={"ID":"181fc32a-cc08-4e8c-8f05-b532e505f0df","Type":"ContainerStarted","Data":"a96475316bb7916bb2330be95cc6d84b0388043f9f00d2e09e62a634b207e9f8"}
Feb 17 16:26:50 crc kubenswrapper[4874]: I0217 16:26:50.733957 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-2wzww" event={"ID":"181fc32a-cc08-4e8c-8f05-b532e505f0df","Type":"ContainerStarted","Data":"cdeaf04589af0e422ac9d41e2e495d60d3c14afc69ca0419cacc5c5099cd25e1"}
Feb 17 16:26:50 crc kubenswrapper[4874]: I0217 16:26:50.738842 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d8c2c683-b91a-4212-8857-4650c6d78bd3","Type":"ContainerStarted","Data":"6e45c6e0f084ebddc36f39db7672338233cf057c39d8ccfbdfb9726905d270c9"}
Feb 17 16:26:50 crc kubenswrapper[4874]: I0217 16:26:50.738887 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d8c2c683-b91a-4212-8857-4650c6d78bd3","Type":"ContainerStarted","Data":"6a4118c9e2b9447e98a7f50f1d14f68b795aa9664a19b68dba4a16f08efd7e34"}
Feb 17 16:26:50 crc kubenswrapper[4874]: I0217 16:26:50.743583 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fff55ae8-1688-4f99-859d-3497b3cf851f","Type":"ContainerStarted","Data":"b534344785ae19bbd891b22b9563f656ef776e0e9f0a31871569a57cad6dd275"}
Feb 17 16:26:50 crc kubenswrapper[4874]: I0217 16:26:50.756650 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-383e-account-create-update-f4p7m" podStartSLOduration=1.756622017 podStartE2EDuration="1.756622017s" podCreationTimestamp="2026-02-17 16:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:26:50.74817143 +0000 UTC m=+1421.042560001" watchObservedRunningTime="2026-02-17 16:26:50.756622017 +0000 UTC m=+1421.051010578"
Feb 17 16:26:50 crc kubenswrapper[4874]: I0217 16:26:50.779319 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-create-2wzww" podStartSLOduration=2.779300981 podStartE2EDuration="2.779300981s" podCreationTimestamp="2026-02-17 16:26:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:26:50.771338786 +0000 UTC m=+1421.065727357" watchObservedRunningTime="2026-02-17 16:26:50.779300981 +0000 UTC m=+1421.073689542"
Feb 17 16:26:50 crc kubenswrapper[4874]: I0217 16:26:50.799823 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.799802471 podStartE2EDuration="3.799802471s" podCreationTimestamp="2026-02-17 16:26:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:26:50.788849084 +0000 UTC m=+1421.083237645" watchObservedRunningTime="2026-02-17 16:26:50.799802471 +0000 UTC m=+1421.094191032"
Feb 17 16:26:51 crc kubenswrapper[4874]: I0217 16:26:51.797976 4874 generic.go:334] "Generic (PLEG): container finished" podID="820dffc3-fb0f-4dd2-b9bc-a680d02a84d9" containerID="4febb0c9c517a213914dfb27dd0c6bc087f3a254c5aeb2ed1ffcf741ab199284" exitCode=0
Feb 17 16:26:51 crc kubenswrapper[4874]: I0217 16:26:51.798507 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lnpsd" event={"ID":"820dffc3-fb0f-4dd2-b9bc-a680d02a84d9","Type":"ContainerDied","Data":"4febb0c9c517a213914dfb27dd0c6bc087f3a254c5aeb2ed1ffcf741ab199284"}
Feb 17 16:26:51 crc kubenswrapper[4874]: I0217 16:26:51.800745 4874 generic.go:334] "Generic (PLEG): container finished" podID="74d95d6d-ef3c-4154-a40d-5bee661b7d56" containerID="bdb53e0a5adb7c4624709ae418e3349df97de66d9507cf6ea08e45046bb785e0" exitCode=0
Feb 17 16:26:51 crc kubenswrapper[4874]: I0217 16:26:51.800896 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-78mdm" event={"ID":"74d95d6d-ef3c-4154-a40d-5bee661b7d56","Type":"ContainerDied","Data":"bdb53e0a5adb7c4624709ae418e3349df97de66d9507cf6ea08e45046bb785e0"}
Feb 17 16:26:51 crc kubenswrapper[4874]: I0217 16:26:51.803330 4874 generic.go:334] "Generic (PLEG): container finished" podID="20df2a95-c9b4-4cee-95a5-9a7481aed963" containerID="68fa4eb6c5eab571b0b55fe728595aaa047aaf4964c0ebf1a014255cd9bbc17a" exitCode=0
Feb 17 16:26:51 crc kubenswrapper[4874]: I0217 16:26:51.803466 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-383e-account-create-update-f4p7m" event={"ID":"20df2a95-c9b4-4cee-95a5-9a7481aed963","Type":"ContainerDied","Data":"68fa4eb6c5eab571b0b55fe728595aaa047aaf4964c0ebf1a014255cd9bbc17a"}
Feb 17 16:26:51 crc kubenswrapper[4874]: I0217 16:26:51.810658 4874 generic.go:334] "Generic (PLEG): container finished" podID="181fc32a-cc08-4e8c-8f05-b532e505f0df" containerID="a96475316bb7916bb2330be95cc6d84b0388043f9f00d2e09e62a634b207e9f8" exitCode=0
Feb 17 16:26:51 crc kubenswrapper[4874]: I0217 16:26:51.811609 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-2wzww" event={"ID":"181fc32a-cc08-4e8c-8f05-b532e505f0df","Type":"ContainerDied","Data":"a96475316bb7916bb2330be95cc6d84b0388043f9f00d2e09e62a634b207e9f8"}
Feb 17 16:26:52 crc kubenswrapper[4874]: I0217 16:26:52.822890 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fff55ae8-1688-4f99-859d-3497b3cf851f","Type":"ContainerStarted","Data":"15d4bf913c291fae0241e3c755ed5ee968b6384a99744ea93e2c98f96bde90b9"}
Feb 17 16:26:52 crc kubenswrapper[4874]: I0217 16:26:52.863462 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.851580567 podStartE2EDuration="6.863446116s" podCreationTimestamp="2026-02-17 16:26:46 +0000 UTC" firstStartedPulling="2026-02-17 16:26:47.568995259 +0000 UTC m=+1417.863383820" lastFinishedPulling="2026-02-17 16:26:51.580860808 +0000 UTC m=+1421.875249369" observedRunningTime="2026-02-17 16:26:52.862200395 +0000 UTC m=+1423.156588976" watchObservedRunningTime="2026-02-17 16:26:52.863446116 +0000 UTC m=+1423.157834677"
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.157937 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.158004 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.433957 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-2wzww"
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.536971 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/181fc32a-cc08-4e8c-8f05-b532e505f0df-operator-scripts\") pod \"181fc32a-cc08-4e8c-8f05-b532e505f0df\" (UID: \"181fc32a-cc08-4e8c-8f05-b532e505f0df\") "
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.537312 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gc4fk\" (UniqueName: \"kubernetes.io/projected/181fc32a-cc08-4e8c-8f05-b532e505f0df-kube-api-access-gc4fk\") pod \"181fc32a-cc08-4e8c-8f05-b532e505f0df\" (UID: \"181fc32a-cc08-4e8c-8f05-b532e505f0df\") "
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.537621 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/181fc32a-cc08-4e8c-8f05-b532e505f0df-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "181fc32a-cc08-4e8c-8f05-b532e505f0df" (UID: "181fc32a-cc08-4e8c-8f05-b532e505f0df"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.538031 4874 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/181fc32a-cc08-4e8c-8f05-b532e505f0df-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.544372 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/181fc32a-cc08-4e8c-8f05-b532e505f0df-kube-api-access-gc4fk" (OuterVolumeSpecName: "kube-api-access-gc4fk") pod "181fc32a-cc08-4e8c-8f05-b532e505f0df" (UID: "181fc32a-cc08-4e8c-8f05-b532e505f0df"). InnerVolumeSpecName "kube-api-access-gc4fk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.587321 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-78b7864799-6ls5l" podUID="fb7283b1-4828-4a90-bdd2-6861b7d6475b" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.213:8004/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.639874 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gc4fk\" (UniqueName: \"kubernetes.io/projected/181fc32a-cc08-4e8c-8f05-b532e505f0df-kube-api-access-gc4fk\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:53 crc kubenswrapper[4874]: E0217 16:26:53.718607 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4327f121_2ddc_4367_9055_17c7fe4d855e.slice/crio-4c01c80ff75f545c7ad24c26f26d427f62b8c88c7db6a4f7544ae7b749530ed3\": RecentStats: unable to find data in memory cache]"
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.766560 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lnpsd"
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.787734 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-78mdm"
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.789029 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-383e-account-create-update-f4p7m"
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.851676 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/820dffc3-fb0f-4dd2-b9bc-a680d02a84d9-combined-ca-bundle\") pod \"820dffc3-fb0f-4dd2-b9bc-a680d02a84d9\" (UID: \"820dffc3-fb0f-4dd2-b9bc-a680d02a84d9\") "
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.851794 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/820dffc3-fb0f-4dd2-b9bc-a680d02a84d9-config-data\") pod \"820dffc3-fb0f-4dd2-b9bc-a680d02a84d9\" (UID: \"820dffc3-fb0f-4dd2-b9bc-a680d02a84d9\") "
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.851845 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20df2a95-c9b4-4cee-95a5-9a7481aed963-operator-scripts\") pod \"20df2a95-c9b4-4cee-95a5-9a7481aed963\" (UID: \"20df2a95-c9b4-4cee-95a5-9a7481aed963\") "
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.851901 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74d95d6d-ef3c-4154-a40d-5bee661b7d56-config-data\") pod \"74d95d6d-ef3c-4154-a40d-5bee661b7d56\" (UID: \"74d95d6d-ef3c-4154-a40d-5bee661b7d56\") "
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.852015 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2rk5\" (UniqueName: \"kubernetes.io/projected/820dffc3-fb0f-4dd2-b9bc-a680d02a84d9-kube-api-access-f2rk5\") pod \"820dffc3-fb0f-4dd2-b9bc-a680d02a84d9\" (UID: \"820dffc3-fb0f-4dd2-b9bc-a680d02a84d9\") "
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.852036 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74d95d6d-ef3c-4154-a40d-5bee661b7d56-combined-ca-bundle\") pod \"74d95d6d-ef3c-4154-a40d-5bee661b7d56\" (UID: \"74d95d6d-ef3c-4154-a40d-5bee661b7d56\") "
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.854424 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20df2a95-c9b4-4cee-95a5-9a7481aed963-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "20df2a95-c9b4-4cee-95a5-9a7481aed963" (UID: "20df2a95-c9b4-4cee-95a5-9a7481aed963"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.854501 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6c26w\" (UniqueName: \"kubernetes.io/projected/74d95d6d-ef3c-4154-a40d-5bee661b7d56-kube-api-access-6c26w\") pod \"74d95d6d-ef3c-4154-a40d-5bee661b7d56\" (UID: \"74d95d6d-ef3c-4154-a40d-5bee661b7d56\") "
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.854550 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/820dffc3-fb0f-4dd2-b9bc-a680d02a84d9-scripts\") pod \"820dffc3-fb0f-4dd2-b9bc-a680d02a84d9\" (UID: \"820dffc3-fb0f-4dd2-b9bc-a680d02a84d9\") "
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.854597 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fg54w\" (UniqueName: \"kubernetes.io/projected/20df2a95-c9b4-4cee-95a5-9a7481aed963-kube-api-access-fg54w\") pod \"20df2a95-c9b4-4cee-95a5-9a7481aed963\" (UID: \"20df2a95-c9b4-4cee-95a5-9a7481aed963\") "
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.854628 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74d95d6d-ef3c-4154-a40d-5bee661b7d56-scripts\") pod \"74d95d6d-ef3c-4154-a40d-5bee661b7d56\" (UID: \"74d95d6d-ef3c-4154-a40d-5bee661b7d56\") "
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.863396 4874 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20df2a95-c9b4-4cee-95a5-9a7481aed963-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.867833 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74d95d6d-ef3c-4154-a40d-5bee661b7d56-kube-api-access-6c26w" (OuterVolumeSpecName: "kube-api-access-6c26w") pod "74d95d6d-ef3c-4154-a40d-5bee661b7d56" (UID: "74d95d6d-ef3c-4154-a40d-5bee661b7d56"). InnerVolumeSpecName "kube-api-access-6c26w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.868508 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/820dffc3-fb0f-4dd2-b9bc-a680d02a84d9-kube-api-access-f2rk5" (OuterVolumeSpecName: "kube-api-access-f2rk5") pod "820dffc3-fb0f-4dd2-b9bc-a680d02a84d9" (UID: "820dffc3-fb0f-4dd2-b9bc-a680d02a84d9"). InnerVolumeSpecName "kube-api-access-f2rk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.869157 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/820dffc3-fb0f-4dd2-b9bc-a680d02a84d9-scripts" (OuterVolumeSpecName: "scripts") pod "820dffc3-fb0f-4dd2-b9bc-a680d02a84d9" (UID: "820dffc3-fb0f-4dd2-b9bc-a680d02a84d9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.869299 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74d95d6d-ef3c-4154-a40d-5bee661b7d56-scripts" (OuterVolumeSpecName: "scripts") pod "74d95d6d-ef3c-4154-a40d-5bee661b7d56" (UID: "74d95d6d-ef3c-4154-a40d-5bee661b7d56"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.870835 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20df2a95-c9b4-4cee-95a5-9a7481aed963-kube-api-access-fg54w" (OuterVolumeSpecName: "kube-api-access-fg54w") pod "20df2a95-c9b4-4cee-95a5-9a7481aed963" (UID: "20df2a95-c9b4-4cee-95a5-9a7481aed963"). InnerVolumeSpecName "kube-api-access-fg54w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.894616 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lnpsd"
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.894617 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lnpsd" event={"ID":"820dffc3-fb0f-4dd2-b9bc-a680d02a84d9","Type":"ContainerDied","Data":"e0d48b903cbfdf332bced3f5438f1f9fa287781d648620b9b7515daae988b91e"}
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.895141 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0d48b903cbfdf332bced3f5438f1f9fa287781d648620b9b7515daae988b91e"
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.902173 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 17 16:26:53 crc kubenswrapper[4874]: E0217 16:26:53.902762 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="820dffc3-fb0f-4dd2-b9bc-a680d02a84d9" containerName="nova-cell1-conductor-db-sync"
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.902827 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="820dffc3-fb0f-4dd2-b9bc-a680d02a84d9" containerName="nova-cell1-conductor-db-sync"
Feb 17 16:26:53 crc kubenswrapper[4874]: E0217 16:26:53.902846 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3283562-95fd-4595-932e-cf95b3bdd769" containerName="init"
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.902855 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3283562-95fd-4595-932e-cf95b3bdd769" containerName="init"
Feb 17 16:26:53 crc kubenswrapper[4874]: E0217 16:26:53.902884 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20df2a95-c9b4-4cee-95a5-9a7481aed963" containerName="mariadb-account-create-update"
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.902892 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="20df2a95-c9b4-4cee-95a5-9a7481aed963" containerName="mariadb-account-create-update"
Feb 17 16:26:53 crc kubenswrapper[4874]: E0217 16:26:53.902914 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74d95d6d-ef3c-4154-a40d-5bee661b7d56" containerName="nova-manage"
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.902924 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="74d95d6d-ef3c-4154-a40d-5bee661b7d56" containerName="nova-manage"
Feb 17 16:26:53 crc kubenswrapper[4874]: E0217 16:26:53.902938 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3283562-95fd-4595-932e-cf95b3bdd769" containerName="dnsmasq-dns"
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.902945 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3283562-95fd-4595-932e-cf95b3bdd769" containerName="dnsmasq-dns"
Feb 17 16:26:53 crc kubenswrapper[4874]: E0217 16:26:53.902977 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="181fc32a-cc08-4e8c-8f05-b532e505f0df" containerName="mariadb-database-create"
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.902986 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="181fc32a-cc08-4e8c-8f05-b532e505f0df" containerName="mariadb-database-create"
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.903298 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3283562-95fd-4595-932e-cf95b3bdd769" containerName="dnsmasq-dns"
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.903319 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="181fc32a-cc08-4e8c-8f05-b532e505f0df" containerName="mariadb-database-create"
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.903331 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="74d95d6d-ef3c-4154-a40d-5bee661b7d56" containerName="nova-manage"
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.903344 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="820dffc3-fb0f-4dd2-b9bc-a680d02a84d9" containerName="nova-cell1-conductor-db-sync"
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.903376 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="20df2a95-c9b4-4cee-95a5-9a7481aed963" containerName="mariadb-account-create-update"
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.907446 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-78mdm"
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.909374 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-78mdm" event={"ID":"74d95d6d-ef3c-4154-a40d-5bee661b7d56","Type":"ContainerDied","Data":"b0ab965ed1978d1418a7656d4083749c2323934c4220055530cc29c26671284b"}
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.909428 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0ab965ed1978d1418a7656d4083749c2323934c4220055530cc29c26671284b"
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.909530 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.915602 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/820dffc3-fb0f-4dd2-b9bc-a680d02a84d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "820dffc3-fb0f-4dd2-b9bc-a680d02a84d9" (UID: "820dffc3-fb0f-4dd2-b9bc-a680d02a84d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.924920 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-383e-account-create-update-f4p7m" event={"ID":"20df2a95-c9b4-4cee-95a5-9a7481aed963","Type":"ContainerDied","Data":"91afad6d18a4f66db8df1f7a337aac4d06bebfa4280833ec0d21e97d316a8ef9"}
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.924982 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-383e-account-create-update-f4p7m"
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.924962 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91afad6d18a4f66db8df1f7a337aac4d06bebfa4280833ec0d21e97d316a8ef9"
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.928998 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-2wzww"
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.929160 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-2wzww" event={"ID":"181fc32a-cc08-4e8c-8f05-b532e505f0df","Type":"ContainerDied","Data":"cdeaf04589af0e422ac9d41e2e495d60d3c14afc69ca0419cacc5c5099cd25e1"}
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.929216 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdeaf04589af0e422ac9d41e2e495d60d3c14afc69ca0419cacc5c5099cd25e1"
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.929242 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.941127 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/820dffc3-fb0f-4dd2-b9bc-a680d02a84d9-config-data" (OuterVolumeSpecName: "config-data") pod "820dffc3-fb0f-4dd2-b9bc-a680d02a84d9" (UID: "820dffc3-fb0f-4dd2-b9bc-a680d02a84d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.952143 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.953894 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74d95d6d-ef3c-4154-a40d-5bee661b7d56-config-data" (OuterVolumeSpecName: "config-data") pod "74d95d6d-ef3c-4154-a40d-5bee661b7d56" (UID: "74d95d6d-ef3c-4154-a40d-5bee661b7d56"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.972680 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2rk5\" (UniqueName: \"kubernetes.io/projected/820dffc3-fb0f-4dd2-b9bc-a680d02a84d9-kube-api-access-f2rk5\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.972715 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6c26w\" (UniqueName: \"kubernetes.io/projected/74d95d6d-ef3c-4154-a40d-5bee661b7d56-kube-api-access-6c26w\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.973013 4874 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/820dffc3-fb0f-4dd2-b9bc-a680d02a84d9-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.973063 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fg54w\" (UniqueName: \"kubernetes.io/projected/20df2a95-c9b4-4cee-95a5-9a7481aed963-kube-api-access-fg54w\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.973125 4874 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74d95d6d-ef3c-4154-a40d-5bee661b7d56-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.973139 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/820dffc3-fb0f-4dd2-b9bc-a680d02a84d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.973151 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/820dffc3-fb0f-4dd2-b9bc-a680d02a84d9-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.973165 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74d95d6d-ef3c-4154-a40d-5bee661b7d56-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:53 crc kubenswrapper[4874]: I0217 16:26:53.980371 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74d95d6d-ef3c-4154-a40d-5bee661b7d56-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74d95d6d-ef3c-4154-a40d-5bee661b7d56" (UID: "74d95d6d-ef3c-4154-a40d-5bee661b7d56"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:26:54 crc kubenswrapper[4874]: I0217 16:26:54.035949 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 17 16:26:54 crc kubenswrapper[4874]: I0217 16:26:54.036229 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f40c4b93-cd36-443a-b3c2-b2afa825606b" containerName="nova-api-log" containerID="cri-o://01dbfb380a663e912cea32acd3a71266d1db60c324f046f5b2ebe4b555c3ac67" gracePeriod=30
Feb 17 16:26:54 crc kubenswrapper[4874]: I0217 16:26:54.036693 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f40c4b93-cd36-443a-b3c2-b2afa825606b" containerName="nova-api-api" containerID="cri-o://dd0fdb457bf3439f4d0e408555c5fa18ec566da9c02e191a160d8b75d39f9f5e" gracePeriod=30
Feb 17 16:26:54 crc kubenswrapper[4874]: I0217 16:26:54.073910 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 17 16:26:54 crc kubenswrapper[4874]: I0217 16:26:54.074181 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="3b3d858a-3158-4d4b-81d3-ef898bb8695f" containerName="nova-scheduler-scheduler" containerID="cri-o://5b8265f3dfc4c05582c971c09dfe48cb8d55d16bb6da0dcc4e839f7c516a2d35" gracePeriod=30
Feb 17 16:26:54 crc kubenswrapper[4874]: I0217 16:26:54.075746 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kznk7\" (UniqueName: \"kubernetes.io/projected/23118d30-bfc5-46b8-aaf6-b14b263104c9-kube-api-access-kznk7\") pod \"nova-cell1-conductor-0\" (UID: \"23118d30-bfc5-46b8-aaf6-b14b263104c9\") " pod="openstack/nova-cell1-conductor-0"
Feb 17 16:26:54 crc kubenswrapper[4874]: I0217 16:26:54.075787 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23118d30-bfc5-46b8-aaf6-b14b263104c9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"23118d30-bfc5-46b8-aaf6-b14b263104c9\") " pod="openstack/nova-cell1-conductor-0"
Feb 17 16:26:54 crc kubenswrapper[4874]: I0217 16:26:54.075840 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23118d30-bfc5-46b8-aaf6-b14b263104c9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"23118d30-bfc5-46b8-aaf6-b14b263104c9\") " pod="openstack/nova-cell1-conductor-0"
Feb 17 16:26:54 crc kubenswrapper[4874]: I0217 16:26:54.076021 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74d95d6d-ef3c-4154-a40d-5bee661b7d56-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:54 crc kubenswrapper[4874]: I0217 16:26:54.091715 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 17 16:26:54 crc kubenswrapper[4874]: I0217 16:26:54.091948 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d8c2c683-b91a-4212-8857-4650c6d78bd3" containerName="nova-metadata-log" containerID="cri-o://6a4118c9e2b9447e98a7f50f1d14f68b795aa9664a19b68dba4a16f08efd7e34" gracePeriod=30
Feb 17 16:26:54 crc kubenswrapper[4874]: I0217 16:26:54.092034 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d8c2c683-b91a-4212-8857-4650c6d78bd3" containerName="nova-metadata-metadata" containerID="cri-o://6e45c6e0f084ebddc36f39db7672338233cf057c39d8ccfbdfb9726905d270c9" gracePeriod=30
Feb 17 16:26:54 crc kubenswrapper[4874]: I0217 16:26:54.178472 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kznk7\" (UniqueName: \"kubernetes.io/projected/23118d30-bfc5-46b8-aaf6-b14b263104c9-kube-api-access-kznk7\")
pod \"nova-cell1-conductor-0\" (UID: \"23118d30-bfc5-46b8-aaf6-b14b263104c9\") " pod="openstack/nova-cell1-conductor-0" Feb 17 16:26:54 crc kubenswrapper[4874]: I0217 16:26:54.178531 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23118d30-bfc5-46b8-aaf6-b14b263104c9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"23118d30-bfc5-46b8-aaf6-b14b263104c9\") " pod="openstack/nova-cell1-conductor-0" Feb 17 16:26:54 crc kubenswrapper[4874]: I0217 16:26:54.178580 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23118d30-bfc5-46b8-aaf6-b14b263104c9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"23118d30-bfc5-46b8-aaf6-b14b263104c9\") " pod="openstack/nova-cell1-conductor-0" Feb 17 16:26:54 crc kubenswrapper[4874]: I0217 16:26:54.186777 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23118d30-bfc5-46b8-aaf6-b14b263104c9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"23118d30-bfc5-46b8-aaf6-b14b263104c9\") " pod="openstack/nova-cell1-conductor-0" Feb 17 16:26:54 crc kubenswrapper[4874]: I0217 16:26:54.186872 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23118d30-bfc5-46b8-aaf6-b14b263104c9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"23118d30-bfc5-46b8-aaf6-b14b263104c9\") " pod="openstack/nova-cell1-conductor-0" Feb 17 16:26:54 crc kubenswrapper[4874]: I0217 16:26:54.205175 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kznk7\" (UniqueName: \"kubernetes.io/projected/23118d30-bfc5-46b8-aaf6-b14b263104c9-kube-api-access-kznk7\") pod \"nova-cell1-conductor-0\" (UID: \"23118d30-bfc5-46b8-aaf6-b14b263104c9\") " pod="openstack/nova-cell1-conductor-0" Feb 17 
16:26:54 crc kubenswrapper[4874]: I0217 16:26:54.250890 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 17 16:26:54 crc kubenswrapper[4874]: I0217 16:26:54.769974 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 17 16:26:54 crc kubenswrapper[4874]: I0217 16:26:54.943799 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:26:54 crc kubenswrapper[4874]: I0217 16:26:54.946236 4874 generic.go:334] "Generic (PLEG): container finished" podID="d8c2c683-b91a-4212-8857-4650c6d78bd3" containerID="6e45c6e0f084ebddc36f39db7672338233cf057c39d8ccfbdfb9726905d270c9" exitCode=0 Feb 17 16:26:54 crc kubenswrapper[4874]: I0217 16:26:54.946259 4874 generic.go:334] "Generic (PLEG): container finished" podID="d8c2c683-b91a-4212-8857-4650c6d78bd3" containerID="6a4118c9e2b9447e98a7f50f1d14f68b795aa9664a19b68dba4a16f08efd7e34" exitCode=143 Feb 17 16:26:54 crc kubenswrapper[4874]: I0217 16:26:54.946320 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d8c2c683-b91a-4212-8857-4650c6d78bd3","Type":"ContainerDied","Data":"6e45c6e0f084ebddc36f39db7672338233cf057c39d8ccfbdfb9726905d270c9"} Feb 17 16:26:54 crc kubenswrapper[4874]: I0217 16:26:54.946362 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d8c2c683-b91a-4212-8857-4650c6d78bd3","Type":"ContainerDied","Data":"6a4118c9e2b9447e98a7f50f1d14f68b795aa9664a19b68dba4a16f08efd7e34"} Feb 17 16:26:54 crc kubenswrapper[4874]: I0217 16:26:54.946379 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d8c2c683-b91a-4212-8857-4650c6d78bd3","Type":"ContainerDied","Data":"f23e7bf0110bafa997c2bc788a03491fd2a622fb7d6fa674c8c02edaa52eb70e"} Feb 17 16:26:54 crc kubenswrapper[4874]: I0217 16:26:54.946395 4874 scope.go:117] 
"RemoveContainer" containerID="6e45c6e0f084ebddc36f39db7672338233cf057c39d8ccfbdfb9726905d270c9" Feb 17 16:26:54 crc kubenswrapper[4874]: I0217 16:26:54.949341 4874 generic.go:334] "Generic (PLEG): container finished" podID="f40c4b93-cd36-443a-b3c2-b2afa825606b" containerID="01dbfb380a663e912cea32acd3a71266d1db60c324f046f5b2ebe4b555c3ac67" exitCode=143 Feb 17 16:26:54 crc kubenswrapper[4874]: I0217 16:26:54.949422 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f40c4b93-cd36-443a-b3c2-b2afa825606b","Type":"ContainerDied","Data":"01dbfb380a663e912cea32acd3a71266d1db60c324f046f5b2ebe4b555c3ac67"} Feb 17 16:26:54 crc kubenswrapper[4874]: I0217 16:26:54.950995 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"23118d30-bfc5-46b8-aaf6-b14b263104c9","Type":"ContainerStarted","Data":"a5a90fcf4ac10c1ec158b93cf5cf43ab4ec7117d8c48b21b0298a17bba577088"} Feb 17 16:26:54 crc kubenswrapper[4874]: I0217 16:26:54.995625 4874 scope.go:117] "RemoveContainer" containerID="6a4118c9e2b9447e98a7f50f1d14f68b795aa9664a19b68dba4a16f08efd7e34" Feb 17 16:26:55 crc kubenswrapper[4874]: I0217 16:26:55.050175 4874 scope.go:117] "RemoveContainer" containerID="6e45c6e0f084ebddc36f39db7672338233cf057c39d8ccfbdfb9726905d270c9" Feb 17 16:26:55 crc kubenswrapper[4874]: E0217 16:26:55.052312 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e45c6e0f084ebddc36f39db7672338233cf057c39d8ccfbdfb9726905d270c9\": container with ID starting with 6e45c6e0f084ebddc36f39db7672338233cf057c39d8ccfbdfb9726905d270c9 not found: ID does not exist" containerID="6e45c6e0f084ebddc36f39db7672338233cf057c39d8ccfbdfb9726905d270c9" Feb 17 16:26:55 crc kubenswrapper[4874]: I0217 16:26:55.052356 4874 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6e45c6e0f084ebddc36f39db7672338233cf057c39d8ccfbdfb9726905d270c9"} err="failed to get container status \"6e45c6e0f084ebddc36f39db7672338233cf057c39d8ccfbdfb9726905d270c9\": rpc error: code = NotFound desc = could not find container \"6e45c6e0f084ebddc36f39db7672338233cf057c39d8ccfbdfb9726905d270c9\": container with ID starting with 6e45c6e0f084ebddc36f39db7672338233cf057c39d8ccfbdfb9726905d270c9 not found: ID does not exist" Feb 17 16:26:55 crc kubenswrapper[4874]: I0217 16:26:55.052383 4874 scope.go:117] "RemoveContainer" containerID="6a4118c9e2b9447e98a7f50f1d14f68b795aa9664a19b68dba4a16f08efd7e34" Feb 17 16:26:55 crc kubenswrapper[4874]: E0217 16:26:55.053948 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a4118c9e2b9447e98a7f50f1d14f68b795aa9664a19b68dba4a16f08efd7e34\": container with ID starting with 6a4118c9e2b9447e98a7f50f1d14f68b795aa9664a19b68dba4a16f08efd7e34 not found: ID does not exist" containerID="6a4118c9e2b9447e98a7f50f1d14f68b795aa9664a19b68dba4a16f08efd7e34" Feb 17 16:26:55 crc kubenswrapper[4874]: I0217 16:26:55.053980 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a4118c9e2b9447e98a7f50f1d14f68b795aa9664a19b68dba4a16f08efd7e34"} err="failed to get container status \"6a4118c9e2b9447e98a7f50f1d14f68b795aa9664a19b68dba4a16f08efd7e34\": rpc error: code = NotFound desc = could not find container \"6a4118c9e2b9447e98a7f50f1d14f68b795aa9664a19b68dba4a16f08efd7e34\": container with ID starting with 6a4118c9e2b9447e98a7f50f1d14f68b795aa9664a19b68dba4a16f08efd7e34 not found: ID does not exist" Feb 17 16:26:55 crc kubenswrapper[4874]: I0217 16:26:55.053995 4874 scope.go:117] "RemoveContainer" containerID="6e45c6e0f084ebddc36f39db7672338233cf057c39d8ccfbdfb9726905d270c9" Feb 17 16:26:55 crc kubenswrapper[4874]: I0217 16:26:55.055137 4874 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"6e45c6e0f084ebddc36f39db7672338233cf057c39d8ccfbdfb9726905d270c9"} err="failed to get container status \"6e45c6e0f084ebddc36f39db7672338233cf057c39d8ccfbdfb9726905d270c9\": rpc error: code = NotFound desc = could not find container \"6e45c6e0f084ebddc36f39db7672338233cf057c39d8ccfbdfb9726905d270c9\": container with ID starting with 6e45c6e0f084ebddc36f39db7672338233cf057c39d8ccfbdfb9726905d270c9 not found: ID does not exist" Feb 17 16:26:55 crc kubenswrapper[4874]: I0217 16:26:55.055172 4874 scope.go:117] "RemoveContainer" containerID="6a4118c9e2b9447e98a7f50f1d14f68b795aa9664a19b68dba4a16f08efd7e34" Feb 17 16:26:55 crc kubenswrapper[4874]: I0217 16:26:55.058317 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a4118c9e2b9447e98a7f50f1d14f68b795aa9664a19b68dba4a16f08efd7e34"} err="failed to get container status \"6a4118c9e2b9447e98a7f50f1d14f68b795aa9664a19b68dba4a16f08efd7e34\": rpc error: code = NotFound desc = could not find container \"6a4118c9e2b9447e98a7f50f1d14f68b795aa9664a19b68dba4a16f08efd7e34\": container with ID starting with 6a4118c9e2b9447e98a7f50f1d14f68b795aa9664a19b68dba4a16f08efd7e34 not found: ID does not exist" Feb 17 16:26:55 crc kubenswrapper[4874]: I0217 16:26:55.107466 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lv6pk\" (UniqueName: \"kubernetes.io/projected/d8c2c683-b91a-4212-8857-4650c6d78bd3-kube-api-access-lv6pk\") pod \"d8c2c683-b91a-4212-8857-4650c6d78bd3\" (UID: \"d8c2c683-b91a-4212-8857-4650c6d78bd3\") " Feb 17 16:26:55 crc kubenswrapper[4874]: I0217 16:26:55.107503 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8c2c683-b91a-4212-8857-4650c6d78bd3-logs\") pod \"d8c2c683-b91a-4212-8857-4650c6d78bd3\" (UID: \"d8c2c683-b91a-4212-8857-4650c6d78bd3\") " Feb 17 16:26:55 crc kubenswrapper[4874]: I0217 
16:26:55.107552 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8c2c683-b91a-4212-8857-4650c6d78bd3-combined-ca-bundle\") pod \"d8c2c683-b91a-4212-8857-4650c6d78bd3\" (UID: \"d8c2c683-b91a-4212-8857-4650c6d78bd3\") " Feb 17 16:26:55 crc kubenswrapper[4874]: I0217 16:26:55.107720 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8c2c683-b91a-4212-8857-4650c6d78bd3-config-data\") pod \"d8c2c683-b91a-4212-8857-4650c6d78bd3\" (UID: \"d8c2c683-b91a-4212-8857-4650c6d78bd3\") " Feb 17 16:26:55 crc kubenswrapper[4874]: I0217 16:26:55.107801 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8c2c683-b91a-4212-8857-4650c6d78bd3-nova-metadata-tls-certs\") pod \"d8c2c683-b91a-4212-8857-4650c6d78bd3\" (UID: \"d8c2c683-b91a-4212-8857-4650c6d78bd3\") " Feb 17 16:26:55 crc kubenswrapper[4874]: I0217 16:26:55.110549 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8c2c683-b91a-4212-8857-4650c6d78bd3-logs" (OuterVolumeSpecName: "logs") pod "d8c2c683-b91a-4212-8857-4650c6d78bd3" (UID: "d8c2c683-b91a-4212-8857-4650c6d78bd3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:26:55 crc kubenswrapper[4874]: I0217 16:26:55.112901 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8c2c683-b91a-4212-8857-4650c6d78bd3-kube-api-access-lv6pk" (OuterVolumeSpecName: "kube-api-access-lv6pk") pod "d8c2c683-b91a-4212-8857-4650c6d78bd3" (UID: "d8c2c683-b91a-4212-8857-4650c6d78bd3"). InnerVolumeSpecName "kube-api-access-lv6pk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:26:55 crc kubenswrapper[4874]: I0217 16:26:55.148310 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8c2c683-b91a-4212-8857-4650c6d78bd3-config-data" (OuterVolumeSpecName: "config-data") pod "d8c2c683-b91a-4212-8857-4650c6d78bd3" (UID: "d8c2c683-b91a-4212-8857-4650c6d78bd3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:26:55 crc kubenswrapper[4874]: I0217 16:26:55.148426 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8c2c683-b91a-4212-8857-4650c6d78bd3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8c2c683-b91a-4212-8857-4650c6d78bd3" (UID: "d8c2c683-b91a-4212-8857-4650c6d78bd3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:26:55 crc kubenswrapper[4874]: I0217 16:26:55.178302 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8c2c683-b91a-4212-8857-4650c6d78bd3-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "d8c2c683-b91a-4212-8857-4650c6d78bd3" (UID: "d8c2c683-b91a-4212-8857-4650c6d78bd3"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:26:55 crc kubenswrapper[4874]: I0217 16:26:55.211028 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lv6pk\" (UniqueName: \"kubernetes.io/projected/d8c2c683-b91a-4212-8857-4650c6d78bd3-kube-api-access-lv6pk\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:55 crc kubenswrapper[4874]: I0217 16:26:55.211074 4874 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8c2c683-b91a-4212-8857-4650c6d78bd3-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:55 crc kubenswrapper[4874]: I0217 16:26:55.211104 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8c2c683-b91a-4212-8857-4650c6d78bd3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:55 crc kubenswrapper[4874]: I0217 16:26:55.211116 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8c2c683-b91a-4212-8857-4650c6d78bd3-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:55 crc kubenswrapper[4874]: I0217 16:26:55.211127 4874 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8c2c683-b91a-4212-8857-4650c6d78bd3-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:55 crc kubenswrapper[4874]: I0217 16:26:55.963156 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:26:55 crc kubenswrapper[4874]: I0217 16:26:55.965195 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"23118d30-bfc5-46b8-aaf6-b14b263104c9","Type":"ContainerStarted","Data":"9cac2a801950bada163b0283a771d29e0b174596e24c10597fecce8dd5f8fc26"} Feb 17 16:26:55 crc kubenswrapper[4874]: I0217 16:26:55.965349 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 17 16:26:55 crc kubenswrapper[4874]: I0217 16:26:55.993616 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.99359871 podStartE2EDuration="2.99359871s" podCreationTimestamp="2026-02-17 16:26:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:26:55.988090605 +0000 UTC m=+1426.282479186" watchObservedRunningTime="2026-02-17 16:26:55.99359871 +0000 UTC m=+1426.287987271" Feb 17 16:26:56 crc kubenswrapper[4874]: I0217 16:26:56.017285 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:26:56 crc kubenswrapper[4874]: I0217 16:26:56.079254 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:26:56 crc kubenswrapper[4874]: I0217 16:26:56.103449 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:26:56 crc kubenswrapper[4874]: E0217 16:26:56.104135 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8c2c683-b91a-4212-8857-4650c6d78bd3" containerName="nova-metadata-metadata" Feb 17 16:26:56 crc kubenswrapper[4874]: I0217 16:26:56.104165 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8c2c683-b91a-4212-8857-4650c6d78bd3" containerName="nova-metadata-metadata" Feb 17 16:26:56 crc kubenswrapper[4874]: 
E0217 16:26:56.104216 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8c2c683-b91a-4212-8857-4650c6d78bd3" containerName="nova-metadata-log" Feb 17 16:26:56 crc kubenswrapper[4874]: I0217 16:26:56.104230 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8c2c683-b91a-4212-8857-4650c6d78bd3" containerName="nova-metadata-log" Feb 17 16:26:56 crc kubenswrapper[4874]: I0217 16:26:56.104554 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8c2c683-b91a-4212-8857-4650c6d78bd3" containerName="nova-metadata-log" Feb 17 16:26:56 crc kubenswrapper[4874]: I0217 16:26:56.104598 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8c2c683-b91a-4212-8857-4650c6d78bd3" containerName="nova-metadata-metadata" Feb 17 16:26:56 crc kubenswrapper[4874]: I0217 16:26:56.106415 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:26:56 crc kubenswrapper[4874]: I0217 16:26:56.109880 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 17 16:26:56 crc kubenswrapper[4874]: I0217 16:26:56.110128 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 17 16:26:56 crc kubenswrapper[4874]: I0217 16:26:56.119959 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:26:56 crc kubenswrapper[4874]: I0217 16:26:56.248588 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b101148a-34d1-4cff-949a-0432ee3225b1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b101148a-34d1-4cff-949a-0432ee3225b1\") " pod="openstack/nova-metadata-0" Feb 17 16:26:56 crc kubenswrapper[4874]: I0217 16:26:56.248648 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b101148a-34d1-4cff-949a-0432ee3225b1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b101148a-34d1-4cff-949a-0432ee3225b1\") " pod="openstack/nova-metadata-0" Feb 17 16:26:56 crc kubenswrapper[4874]: I0217 16:26:56.248685 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b101148a-34d1-4cff-949a-0432ee3225b1-logs\") pod \"nova-metadata-0\" (UID: \"b101148a-34d1-4cff-949a-0432ee3225b1\") " pod="openstack/nova-metadata-0" Feb 17 16:26:56 crc kubenswrapper[4874]: I0217 16:26:56.248741 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-656fj\" (UniqueName: \"kubernetes.io/projected/b101148a-34d1-4cff-949a-0432ee3225b1-kube-api-access-656fj\") pod \"nova-metadata-0\" (UID: \"b101148a-34d1-4cff-949a-0432ee3225b1\") " pod="openstack/nova-metadata-0" Feb 17 16:26:56 crc kubenswrapper[4874]: I0217 16:26:56.248838 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b101148a-34d1-4cff-949a-0432ee3225b1-config-data\") pod \"nova-metadata-0\" (UID: \"b101148a-34d1-4cff-949a-0432ee3225b1\") " pod="openstack/nova-metadata-0" Feb 17 16:26:56 crc kubenswrapper[4874]: I0217 16:26:56.352595 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b101148a-34d1-4cff-949a-0432ee3225b1-config-data\") pod \"nova-metadata-0\" (UID: \"b101148a-34d1-4cff-949a-0432ee3225b1\") " pod="openstack/nova-metadata-0" Feb 17 16:26:56 crc kubenswrapper[4874]: I0217 16:26:56.352710 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b101148a-34d1-4cff-949a-0432ee3225b1-combined-ca-bundle\") pod 
\"nova-metadata-0\" (UID: \"b101148a-34d1-4cff-949a-0432ee3225b1\") " pod="openstack/nova-metadata-0" Feb 17 16:26:56 crc kubenswrapper[4874]: I0217 16:26:56.352752 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b101148a-34d1-4cff-949a-0432ee3225b1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b101148a-34d1-4cff-949a-0432ee3225b1\") " pod="openstack/nova-metadata-0" Feb 17 16:26:56 crc kubenswrapper[4874]: I0217 16:26:56.352796 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b101148a-34d1-4cff-949a-0432ee3225b1-logs\") pod \"nova-metadata-0\" (UID: \"b101148a-34d1-4cff-949a-0432ee3225b1\") " pod="openstack/nova-metadata-0" Feb 17 16:26:56 crc kubenswrapper[4874]: I0217 16:26:56.352870 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-656fj\" (UniqueName: \"kubernetes.io/projected/b101148a-34d1-4cff-949a-0432ee3225b1-kube-api-access-656fj\") pod \"nova-metadata-0\" (UID: \"b101148a-34d1-4cff-949a-0432ee3225b1\") " pod="openstack/nova-metadata-0" Feb 17 16:26:56 crc kubenswrapper[4874]: I0217 16:26:56.354004 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b101148a-34d1-4cff-949a-0432ee3225b1-logs\") pod \"nova-metadata-0\" (UID: \"b101148a-34d1-4cff-949a-0432ee3225b1\") " pod="openstack/nova-metadata-0" Feb 17 16:26:56 crc kubenswrapper[4874]: I0217 16:26:56.360462 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b101148a-34d1-4cff-949a-0432ee3225b1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b101148a-34d1-4cff-949a-0432ee3225b1\") " pod="openstack/nova-metadata-0" Feb 17 16:26:56 crc kubenswrapper[4874]: I0217 16:26:56.361227 4874 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b101148a-34d1-4cff-949a-0432ee3225b1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b101148a-34d1-4cff-949a-0432ee3225b1\") " pod="openstack/nova-metadata-0" Feb 17 16:26:56 crc kubenswrapper[4874]: I0217 16:26:56.367808 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b101148a-34d1-4cff-949a-0432ee3225b1-config-data\") pod \"nova-metadata-0\" (UID: \"b101148a-34d1-4cff-949a-0432ee3225b1\") " pod="openstack/nova-metadata-0" Feb 17 16:26:56 crc kubenswrapper[4874]: I0217 16:26:56.369518 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-656fj\" (UniqueName: \"kubernetes.io/projected/b101148a-34d1-4cff-949a-0432ee3225b1-kube-api-access-656fj\") pod \"nova-metadata-0\" (UID: \"b101148a-34d1-4cff-949a-0432ee3225b1\") " pod="openstack/nova-metadata-0" Feb 17 16:26:56 crc kubenswrapper[4874]: I0217 16:26:56.427379 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:26:56 crc kubenswrapper[4874]: I0217 16:26:56.481573 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8c2c683-b91a-4212-8857-4650c6d78bd3" path="/var/lib/kubelet/pods/d8c2c683-b91a-4212-8857-4650c6d78bd3/volumes" Feb 17 16:26:56 crc kubenswrapper[4874]: I0217 16:26:56.927922 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:26:56 crc kubenswrapper[4874]: I0217 16:26:56.982631 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b101148a-34d1-4cff-949a-0432ee3225b1","Type":"ContainerStarted","Data":"c218000637d54061efce014286d64f1cb601ea3eb618838539370b1e11994463"} Feb 17 16:26:57 crc kubenswrapper[4874]: I0217 16:26:57.714202 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:26:57 crc kubenswrapper[4874]: I0217 16:26:57.724890 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:26:57 crc kubenswrapper[4874]: I0217 16:26:57.724968 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:26:57 crc kubenswrapper[4874]: I0217 16:26:57.885447 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f40c4b93-cd36-443a-b3c2-b2afa825606b-config-data\") pod \"f40c4b93-cd36-443a-b3c2-b2afa825606b\" (UID: 
\"f40c4b93-cd36-443a-b3c2-b2afa825606b\") " Feb 17 16:26:57 crc kubenswrapper[4874]: I0217 16:26:57.885502 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f40c4b93-cd36-443a-b3c2-b2afa825606b-logs\") pod \"f40c4b93-cd36-443a-b3c2-b2afa825606b\" (UID: \"f40c4b93-cd36-443a-b3c2-b2afa825606b\") " Feb 17 16:26:57 crc kubenswrapper[4874]: I0217 16:26:57.885574 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f40c4b93-cd36-443a-b3c2-b2afa825606b-combined-ca-bundle\") pod \"f40c4b93-cd36-443a-b3c2-b2afa825606b\" (UID: \"f40c4b93-cd36-443a-b3c2-b2afa825606b\") " Feb 17 16:26:57 crc kubenswrapper[4874]: I0217 16:26:57.885762 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gz8lf\" (UniqueName: \"kubernetes.io/projected/f40c4b93-cd36-443a-b3c2-b2afa825606b-kube-api-access-gz8lf\") pod \"f40c4b93-cd36-443a-b3c2-b2afa825606b\" (UID: \"f40c4b93-cd36-443a-b3c2-b2afa825606b\") " Feb 17 16:26:57 crc kubenswrapper[4874]: I0217 16:26:57.886260 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f40c4b93-cd36-443a-b3c2-b2afa825606b-logs" (OuterVolumeSpecName: "logs") pod "f40c4b93-cd36-443a-b3c2-b2afa825606b" (UID: "f40c4b93-cd36-443a-b3c2-b2afa825606b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:26:57 crc kubenswrapper[4874]: I0217 16:26:57.886651 4874 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f40c4b93-cd36-443a-b3c2-b2afa825606b-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:57 crc kubenswrapper[4874]: I0217 16:26:57.891039 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f40c4b93-cd36-443a-b3c2-b2afa825606b-kube-api-access-gz8lf" (OuterVolumeSpecName: "kube-api-access-gz8lf") pod "f40c4b93-cd36-443a-b3c2-b2afa825606b" (UID: "f40c4b93-cd36-443a-b3c2-b2afa825606b"). InnerVolumeSpecName "kube-api-access-gz8lf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:26:57 crc kubenswrapper[4874]: I0217 16:26:57.924399 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f40c4b93-cd36-443a-b3c2-b2afa825606b-config-data" (OuterVolumeSpecName: "config-data") pod "f40c4b93-cd36-443a-b3c2-b2afa825606b" (UID: "f40c4b93-cd36-443a-b3c2-b2afa825606b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:26:57 crc kubenswrapper[4874]: I0217 16:26:57.926130 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f40c4b93-cd36-443a-b3c2-b2afa825606b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f40c4b93-cd36-443a-b3c2-b2afa825606b" (UID: "f40c4b93-cd36-443a-b3c2-b2afa825606b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:26:57 crc kubenswrapper[4874]: I0217 16:26:57.992460 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b101148a-34d1-4cff-949a-0432ee3225b1","Type":"ContainerStarted","Data":"734b6703b72dfd578de3c36a85491658483c7f9716f1b277c9d7a864eb5d1ab7"} Feb 17 16:26:57 crc kubenswrapper[4874]: I0217 16:26:57.992488 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gz8lf\" (UniqueName: \"kubernetes.io/projected/f40c4b93-cd36-443a-b3c2-b2afa825606b-kube-api-access-gz8lf\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:57 crc kubenswrapper[4874]: I0217 16:26:57.992510 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f40c4b93-cd36-443a-b3c2-b2afa825606b-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:57 crc kubenswrapper[4874]: I0217 16:26:57.992509 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b101148a-34d1-4cff-949a-0432ee3225b1","Type":"ContainerStarted","Data":"e9b69bdb44c1393d68a4f9e0f2499c8b9bd71113a70834b889a225b54655e318"} Feb 17 16:26:57 crc kubenswrapper[4874]: I0217 16:26:57.992520 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f40c4b93-cd36-443a-b3c2-b2afa825606b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:57 crc kubenswrapper[4874]: I0217 16:26:57.994706 4874 generic.go:334] "Generic (PLEG): container finished" podID="f40c4b93-cd36-443a-b3c2-b2afa825606b" containerID="dd0fdb457bf3439f4d0e408555c5fa18ec566da9c02e191a160d8b75d39f9f5e" exitCode=0 Feb 17 16:26:57 crc kubenswrapper[4874]: I0217 16:26:57.994749 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"f40c4b93-cd36-443a-b3c2-b2afa825606b","Type":"ContainerDied","Data":"dd0fdb457bf3439f4d0e408555c5fa18ec566da9c02e191a160d8b75d39f9f5e"} Feb 17 16:26:57 crc kubenswrapper[4874]: I0217 16:26:57.994782 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f40c4b93-cd36-443a-b3c2-b2afa825606b","Type":"ContainerDied","Data":"683f05c5698ab7c05f4fe015315c8389647932fa9f0c9c5df09fc4b97b9af51a"} Feb 17 16:26:57 crc kubenswrapper[4874]: I0217 16:26:57.994802 4874 scope.go:117] "RemoveContainer" containerID="dd0fdb457bf3439f4d0e408555c5fa18ec566da9c02e191a160d8b75d39f9f5e" Feb 17 16:26:57 crc kubenswrapper[4874]: I0217 16:26:57.994946 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:26:58 crc kubenswrapper[4874]: I0217 16:26:58.025437 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.025416128 podStartE2EDuration="2.025416128s" podCreationTimestamp="2026-02-17 16:26:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:26:58.019408871 +0000 UTC m=+1428.313797442" watchObservedRunningTime="2026-02-17 16:26:58.025416128 +0000 UTC m=+1428.319804689" Feb 17 16:26:58 crc kubenswrapper[4874]: I0217 16:26:58.056381 4874 scope.go:117] "RemoveContainer" containerID="01dbfb380a663e912cea32acd3a71266d1db60c324f046f5b2ebe4b555c3ac67" Feb 17 16:26:58 crc kubenswrapper[4874]: I0217 16:26:58.064644 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:26:58 crc kubenswrapper[4874]: E0217 16:26:58.073727 4874 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="5b8265f3dfc4c05582c971c09dfe48cb8d55d16bb6da0dcc4e839f7c516a2d35" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 16:26:58 crc kubenswrapper[4874]: I0217 16:26:58.077031 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:26:58 crc kubenswrapper[4874]: E0217 16:26:58.079494 4874 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5b8265f3dfc4c05582c971c09dfe48cb8d55d16bb6da0dcc4e839f7c516a2d35" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 16:26:58 crc kubenswrapper[4874]: E0217 16:26:58.081415 4874 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5b8265f3dfc4c05582c971c09dfe48cb8d55d16bb6da0dcc4e839f7c516a2d35" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 16:26:58 crc kubenswrapper[4874]: E0217 16:26:58.081495 4874 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="3b3d858a-3158-4d4b-81d3-ef898bb8695f" containerName="nova-scheduler-scheduler" Feb 17 16:26:58 crc kubenswrapper[4874]: I0217 16:26:58.095687 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 17 16:26:58 crc kubenswrapper[4874]: E0217 16:26:58.096190 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f40c4b93-cd36-443a-b3c2-b2afa825606b" containerName="nova-api-log" Feb 17 16:26:58 crc kubenswrapper[4874]: I0217 16:26:58.096210 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="f40c4b93-cd36-443a-b3c2-b2afa825606b" containerName="nova-api-log" Feb 17 16:26:58 crc 
kubenswrapper[4874]: E0217 16:26:58.096246 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f40c4b93-cd36-443a-b3c2-b2afa825606b" containerName="nova-api-api" Feb 17 16:26:58 crc kubenswrapper[4874]: I0217 16:26:58.096252 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="f40c4b93-cd36-443a-b3c2-b2afa825606b" containerName="nova-api-api" Feb 17 16:26:58 crc kubenswrapper[4874]: I0217 16:26:58.096479 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="f40c4b93-cd36-443a-b3c2-b2afa825606b" containerName="nova-api-log" Feb 17 16:26:58 crc kubenswrapper[4874]: I0217 16:26:58.096501 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="f40c4b93-cd36-443a-b3c2-b2afa825606b" containerName="nova-api-api" Feb 17 16:26:58 crc kubenswrapper[4874]: I0217 16:26:58.098366 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:26:58 crc kubenswrapper[4874]: I0217 16:26:58.098464 4874 scope.go:117] "RemoveContainer" containerID="dd0fdb457bf3439f4d0e408555c5fa18ec566da9c02e191a160d8b75d39f9f5e" Feb 17 16:26:58 crc kubenswrapper[4874]: E0217 16:26:58.098879 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd0fdb457bf3439f4d0e408555c5fa18ec566da9c02e191a160d8b75d39f9f5e\": container with ID starting with dd0fdb457bf3439f4d0e408555c5fa18ec566da9c02e191a160d8b75d39f9f5e not found: ID does not exist" containerID="dd0fdb457bf3439f4d0e408555c5fa18ec566da9c02e191a160d8b75d39f9f5e" Feb 17 16:26:58 crc kubenswrapper[4874]: I0217 16:26:58.098909 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd0fdb457bf3439f4d0e408555c5fa18ec566da9c02e191a160d8b75d39f9f5e"} err="failed to get container status \"dd0fdb457bf3439f4d0e408555c5fa18ec566da9c02e191a160d8b75d39f9f5e\": rpc error: code = NotFound desc = could not find container 
\"dd0fdb457bf3439f4d0e408555c5fa18ec566da9c02e191a160d8b75d39f9f5e\": container with ID starting with dd0fdb457bf3439f4d0e408555c5fa18ec566da9c02e191a160d8b75d39f9f5e not found: ID does not exist" Feb 17 16:26:58 crc kubenswrapper[4874]: I0217 16:26:58.098931 4874 scope.go:117] "RemoveContainer" containerID="01dbfb380a663e912cea32acd3a71266d1db60c324f046f5b2ebe4b555c3ac67" Feb 17 16:26:58 crc kubenswrapper[4874]: E0217 16:26:58.099333 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01dbfb380a663e912cea32acd3a71266d1db60c324f046f5b2ebe4b555c3ac67\": container with ID starting with 01dbfb380a663e912cea32acd3a71266d1db60c324f046f5b2ebe4b555c3ac67 not found: ID does not exist" containerID="01dbfb380a663e912cea32acd3a71266d1db60c324f046f5b2ebe4b555c3ac67" Feb 17 16:26:58 crc kubenswrapper[4874]: I0217 16:26:58.099364 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01dbfb380a663e912cea32acd3a71266d1db60c324f046f5b2ebe4b555c3ac67"} err="failed to get container status \"01dbfb380a663e912cea32acd3a71266d1db60c324f046f5b2ebe4b555c3ac67\": rpc error: code = NotFound desc = could not find container \"01dbfb380a663e912cea32acd3a71266d1db60c324f046f5b2ebe4b555c3ac67\": container with ID starting with 01dbfb380a663e912cea32acd3a71266d1db60c324f046f5b2ebe4b555c3ac67 not found: ID does not exist" Feb 17 16:26:58 crc kubenswrapper[4874]: I0217 16:26:58.103447 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 17 16:26:58 crc kubenswrapper[4874]: I0217 16:26:58.127725 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:26:58 crc kubenswrapper[4874]: I0217 16:26:58.196956 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d03bdb33-2317-487a-9566-10fbe37a9bc4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d03bdb33-2317-487a-9566-10fbe37a9bc4\") " pod="openstack/nova-api-0" Feb 17 16:26:58 crc kubenswrapper[4874]: I0217 16:26:58.197325 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d03bdb33-2317-487a-9566-10fbe37a9bc4-config-data\") pod \"nova-api-0\" (UID: \"d03bdb33-2317-487a-9566-10fbe37a9bc4\") " pod="openstack/nova-api-0" Feb 17 16:26:58 crc kubenswrapper[4874]: I0217 16:26:58.197459 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d03bdb33-2317-487a-9566-10fbe37a9bc4-logs\") pod \"nova-api-0\" (UID: \"d03bdb33-2317-487a-9566-10fbe37a9bc4\") " pod="openstack/nova-api-0" Feb 17 16:26:58 crc kubenswrapper[4874]: I0217 16:26:58.197484 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gsgx\" (UniqueName: \"kubernetes.io/projected/d03bdb33-2317-487a-9566-10fbe37a9bc4-kube-api-access-8gsgx\") pod \"nova-api-0\" (UID: \"d03bdb33-2317-487a-9566-10fbe37a9bc4\") " pod="openstack/nova-api-0" Feb 17 16:26:58 crc kubenswrapper[4874]: I0217 16:26:58.299664 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d03bdb33-2317-487a-9566-10fbe37a9bc4-logs\") pod \"nova-api-0\" (UID: \"d03bdb33-2317-487a-9566-10fbe37a9bc4\") " pod="openstack/nova-api-0" Feb 17 16:26:58 crc kubenswrapper[4874]: I0217 16:26:58.299739 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gsgx\" (UniqueName: \"kubernetes.io/projected/d03bdb33-2317-487a-9566-10fbe37a9bc4-kube-api-access-8gsgx\") pod \"nova-api-0\" (UID: \"d03bdb33-2317-487a-9566-10fbe37a9bc4\") " pod="openstack/nova-api-0" Feb 17 16:26:58 crc 
kubenswrapper[4874]: I0217 16:26:58.299984 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d03bdb33-2317-487a-9566-10fbe37a9bc4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d03bdb33-2317-487a-9566-10fbe37a9bc4\") " pod="openstack/nova-api-0" Feb 17 16:26:58 crc kubenswrapper[4874]: I0217 16:26:58.300226 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d03bdb33-2317-487a-9566-10fbe37a9bc4-config-data\") pod \"nova-api-0\" (UID: \"d03bdb33-2317-487a-9566-10fbe37a9bc4\") " pod="openstack/nova-api-0" Feb 17 16:26:58 crc kubenswrapper[4874]: I0217 16:26:58.300523 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d03bdb33-2317-487a-9566-10fbe37a9bc4-logs\") pod \"nova-api-0\" (UID: \"d03bdb33-2317-487a-9566-10fbe37a9bc4\") " pod="openstack/nova-api-0" Feb 17 16:26:58 crc kubenswrapper[4874]: I0217 16:26:58.304428 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d03bdb33-2317-487a-9566-10fbe37a9bc4-config-data\") pod \"nova-api-0\" (UID: \"d03bdb33-2317-487a-9566-10fbe37a9bc4\") " pod="openstack/nova-api-0" Feb 17 16:26:58 crc kubenswrapper[4874]: I0217 16:26:58.304844 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d03bdb33-2317-487a-9566-10fbe37a9bc4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d03bdb33-2317-487a-9566-10fbe37a9bc4\") " pod="openstack/nova-api-0" Feb 17 16:26:58 crc kubenswrapper[4874]: I0217 16:26:58.315353 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gsgx\" (UniqueName: \"kubernetes.io/projected/d03bdb33-2317-487a-9566-10fbe37a9bc4-kube-api-access-8gsgx\") pod \"nova-api-0\" (UID: 
\"d03bdb33-2317-487a-9566-10fbe37a9bc4\") " pod="openstack/nova-api-0" Feb 17 16:26:58 crc kubenswrapper[4874]: I0217 16:26:58.423504 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:26:58 crc kubenswrapper[4874]: I0217 16:26:58.479029 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f40c4b93-cd36-443a-b3c2-b2afa825606b" path="/var/lib/kubelet/pods/f40c4b93-cd36-443a-b3c2-b2afa825606b/volumes" Feb 17 16:26:58 crc kubenswrapper[4874]: E0217 16:26:58.816715 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4327f121_2ddc_4367_9055_17c7fe4d855e.slice/crio-4c01c80ff75f545c7ad24c26f26d427f62b8c88c7db6a4f7544ae7b749530ed3\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b3d858a_3158_4d4b_81d3_ef898bb8695f.slice/crio-5b8265f3dfc4c05582c971c09dfe48cb8d55d16bb6da0dcc4e839f7c516a2d35.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:26:59 crc kubenswrapper[4874]: I0217 16:26:59.038203 4874 generic.go:334] "Generic (PLEG): container finished" podID="3b3d858a-3158-4d4b-81d3-ef898bb8695f" containerID="5b8265f3dfc4c05582c971c09dfe48cb8d55d16bb6da0dcc4e839f7c516a2d35" exitCode=0 Feb 17 16:26:59 crc kubenswrapper[4874]: I0217 16:26:59.038413 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3b3d858a-3158-4d4b-81d3-ef898bb8695f","Type":"ContainerDied","Data":"5b8265f3dfc4c05582c971c09dfe48cb8d55d16bb6da0dcc4e839f7c516a2d35"} Feb 17 16:26:59 crc kubenswrapper[4874]: I0217 16:26:59.055670 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:26:59 crc kubenswrapper[4874]: W0217 16:26:59.068365 4874 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd03bdb33_2317_487a_9566_10fbe37a9bc4.slice/crio-7378673f575b5ee86663fbb342a54684e8ab9ecc15e6a970cf5be467a12beaf2 WatchSource:0}: Error finding container 7378673f575b5ee86663fbb342a54684e8ab9ecc15e6a970cf5be467a12beaf2: Status 404 returned error can't find the container with id 7378673f575b5ee86663fbb342a54684e8ab9ecc15e6a970cf5be467a12beaf2 Feb 17 16:26:59 crc kubenswrapper[4874]: I0217 16:26:59.271481 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:26:59 crc kubenswrapper[4874]: I0217 16:26:59.426539 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b3d858a-3158-4d4b-81d3-ef898bb8695f-config-data\") pod \"3b3d858a-3158-4d4b-81d3-ef898bb8695f\" (UID: \"3b3d858a-3158-4d4b-81d3-ef898bb8695f\") " Feb 17 16:26:59 crc kubenswrapper[4874]: I0217 16:26:59.426884 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b3d858a-3158-4d4b-81d3-ef898bb8695f-combined-ca-bundle\") pod \"3b3d858a-3158-4d4b-81d3-ef898bb8695f\" (UID: \"3b3d858a-3158-4d4b-81d3-ef898bb8695f\") " Feb 17 16:26:59 crc kubenswrapper[4874]: I0217 16:26:59.427109 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6k759\" (UniqueName: \"kubernetes.io/projected/3b3d858a-3158-4d4b-81d3-ef898bb8695f-kube-api-access-6k759\") pod \"3b3d858a-3158-4d4b-81d3-ef898bb8695f\" (UID: \"3b3d858a-3158-4d4b-81d3-ef898bb8695f\") " Feb 17 16:26:59 crc kubenswrapper[4874]: I0217 16:26:59.442334 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b3d858a-3158-4d4b-81d3-ef898bb8695f-kube-api-access-6k759" (OuterVolumeSpecName: "kube-api-access-6k759") pod "3b3d858a-3158-4d4b-81d3-ef898bb8695f" (UID: 
"3b3d858a-3158-4d4b-81d3-ef898bb8695f"). InnerVolumeSpecName "kube-api-access-6k759". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:26:59 crc kubenswrapper[4874]: I0217 16:26:59.455309 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-dnmbf"] Feb 17 16:26:59 crc kubenswrapper[4874]: E0217 16:26:59.455871 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b3d858a-3158-4d4b-81d3-ef898bb8695f" containerName="nova-scheduler-scheduler" Feb 17 16:26:59 crc kubenswrapper[4874]: I0217 16:26:59.455891 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b3d858a-3158-4d4b-81d3-ef898bb8695f" containerName="nova-scheduler-scheduler" Feb 17 16:26:59 crc kubenswrapper[4874]: I0217 16:26:59.456246 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b3d858a-3158-4d4b-81d3-ef898bb8695f" containerName="nova-scheduler-scheduler" Feb 17 16:26:59 crc kubenswrapper[4874]: I0217 16:26:59.457798 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-dnmbf" Feb 17 16:26:59 crc kubenswrapper[4874]: I0217 16:26:59.460806 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-lsrl9" Feb 17 16:26:59 crc kubenswrapper[4874]: I0217 16:26:59.460954 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 17 16:26:59 crc kubenswrapper[4874]: I0217 16:26:59.461250 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 17 16:26:59 crc kubenswrapper[4874]: I0217 16:26:59.461439 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 17 16:26:59 crc kubenswrapper[4874]: I0217 16:26:59.470257 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b3d858a-3158-4d4b-81d3-ef898bb8695f-config-data" (OuterVolumeSpecName: "config-data") pod "3b3d858a-3158-4d4b-81d3-ef898bb8695f" (UID: "3b3d858a-3158-4d4b-81d3-ef898bb8695f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:26:59 crc kubenswrapper[4874]: I0217 16:26:59.478175 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-dnmbf"] Feb 17 16:26:59 crc kubenswrapper[4874]: I0217 16:26:59.492031 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b3d858a-3158-4d4b-81d3-ef898bb8695f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b3d858a-3158-4d4b-81d3-ef898bb8695f" (UID: "3b3d858a-3158-4d4b-81d3-ef898bb8695f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:26:59 crc kubenswrapper[4874]: I0217 16:26:59.530299 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b3d858a-3158-4d4b-81d3-ef898bb8695f-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:59 crc kubenswrapper[4874]: I0217 16:26:59.530339 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b3d858a-3158-4d4b-81d3-ef898bb8695f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:59 crc kubenswrapper[4874]: I0217 16:26:59.530351 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6k759\" (UniqueName: \"kubernetes.io/projected/3b3d858a-3158-4d4b-81d3-ef898bb8695f-kube-api-access-6k759\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:59 crc kubenswrapper[4874]: I0217 16:26:59.633680 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e293c523-929f-4d2e-bf96-091cbed7f12b-scripts\") pod \"aodh-db-sync-dnmbf\" (UID: \"e293c523-929f-4d2e-bf96-091cbed7f12b\") " pod="openstack/aodh-db-sync-dnmbf" Feb 17 16:26:59 crc kubenswrapper[4874]: I0217 16:26:59.633727 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e293c523-929f-4d2e-bf96-091cbed7f12b-combined-ca-bundle\") pod \"aodh-db-sync-dnmbf\" (UID: \"e293c523-929f-4d2e-bf96-091cbed7f12b\") " pod="openstack/aodh-db-sync-dnmbf" Feb 17 16:26:59 crc kubenswrapper[4874]: I0217 16:26:59.636066 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e293c523-929f-4d2e-bf96-091cbed7f12b-config-data\") pod \"aodh-db-sync-dnmbf\" (UID: \"e293c523-929f-4d2e-bf96-091cbed7f12b\") " 
pod="openstack/aodh-db-sync-dnmbf" Feb 17 16:26:59 crc kubenswrapper[4874]: I0217 16:26:59.636214 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgpbx\" (UniqueName: \"kubernetes.io/projected/e293c523-929f-4d2e-bf96-091cbed7f12b-kube-api-access-pgpbx\") pod \"aodh-db-sync-dnmbf\" (UID: \"e293c523-929f-4d2e-bf96-091cbed7f12b\") " pod="openstack/aodh-db-sync-dnmbf" Feb 17 16:26:59 crc kubenswrapper[4874]: I0217 16:26:59.738680 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e293c523-929f-4d2e-bf96-091cbed7f12b-config-data\") pod \"aodh-db-sync-dnmbf\" (UID: \"e293c523-929f-4d2e-bf96-091cbed7f12b\") " pod="openstack/aodh-db-sync-dnmbf" Feb 17 16:26:59 crc kubenswrapper[4874]: I0217 16:26:59.738768 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgpbx\" (UniqueName: \"kubernetes.io/projected/e293c523-929f-4d2e-bf96-091cbed7f12b-kube-api-access-pgpbx\") pod \"aodh-db-sync-dnmbf\" (UID: \"e293c523-929f-4d2e-bf96-091cbed7f12b\") " pod="openstack/aodh-db-sync-dnmbf" Feb 17 16:26:59 crc kubenswrapper[4874]: I0217 16:26:59.739221 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e293c523-929f-4d2e-bf96-091cbed7f12b-scripts\") pod \"aodh-db-sync-dnmbf\" (UID: \"e293c523-929f-4d2e-bf96-091cbed7f12b\") " pod="openstack/aodh-db-sync-dnmbf" Feb 17 16:26:59 crc kubenswrapper[4874]: I0217 16:26:59.739243 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e293c523-929f-4d2e-bf96-091cbed7f12b-combined-ca-bundle\") pod \"aodh-db-sync-dnmbf\" (UID: \"e293c523-929f-4d2e-bf96-091cbed7f12b\") " pod="openstack/aodh-db-sync-dnmbf" Feb 17 16:26:59 crc kubenswrapper[4874]: I0217 16:26:59.742962 4874 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e293c523-929f-4d2e-bf96-091cbed7f12b-combined-ca-bundle\") pod \"aodh-db-sync-dnmbf\" (UID: \"e293c523-929f-4d2e-bf96-091cbed7f12b\") " pod="openstack/aodh-db-sync-dnmbf" Feb 17 16:26:59 crc kubenswrapper[4874]: I0217 16:26:59.743601 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e293c523-929f-4d2e-bf96-091cbed7f12b-config-data\") pod \"aodh-db-sync-dnmbf\" (UID: \"e293c523-929f-4d2e-bf96-091cbed7f12b\") " pod="openstack/aodh-db-sync-dnmbf" Feb 17 16:26:59 crc kubenswrapper[4874]: I0217 16:26:59.744685 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e293c523-929f-4d2e-bf96-091cbed7f12b-scripts\") pod \"aodh-db-sync-dnmbf\" (UID: \"e293c523-929f-4d2e-bf96-091cbed7f12b\") " pod="openstack/aodh-db-sync-dnmbf" Feb 17 16:26:59 crc kubenswrapper[4874]: I0217 16:26:59.763251 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgpbx\" (UniqueName: \"kubernetes.io/projected/e293c523-929f-4d2e-bf96-091cbed7f12b-kube-api-access-pgpbx\") pod \"aodh-db-sync-dnmbf\" (UID: \"e293c523-929f-4d2e-bf96-091cbed7f12b\") " pod="openstack/aodh-db-sync-dnmbf" Feb 17 16:26:59 crc kubenswrapper[4874]: I0217 16:26:59.895358 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-dnmbf" Feb 17 16:27:00 crc kubenswrapper[4874]: I0217 16:27:00.063945 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d03bdb33-2317-487a-9566-10fbe37a9bc4","Type":"ContainerStarted","Data":"bd2b4dd4f32471f9cdaba7499b43f01f6c500363e20d2c96878e1563bd648fdd"} Feb 17 16:27:00 crc kubenswrapper[4874]: I0217 16:27:00.064016 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d03bdb33-2317-487a-9566-10fbe37a9bc4","Type":"ContainerStarted","Data":"98989d52989a5d2f6341980d79d28598044624efbe49528fed51c7ca96d1b9c9"} Feb 17 16:27:00 crc kubenswrapper[4874]: I0217 16:27:00.064041 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d03bdb33-2317-487a-9566-10fbe37a9bc4","Type":"ContainerStarted","Data":"7378673f575b5ee86663fbb342a54684e8ab9ecc15e6a970cf5be467a12beaf2"} Feb 17 16:27:00 crc kubenswrapper[4874]: I0217 16:27:00.067033 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3b3d858a-3158-4d4b-81d3-ef898bb8695f","Type":"ContainerDied","Data":"661150ada7d0b9434be7f12dede2c1db7b7154e54353b3976d76c16f1ab230fd"} Feb 17 16:27:00 crc kubenswrapper[4874]: I0217 16:27:00.067103 4874 scope.go:117] "RemoveContainer" containerID="5b8265f3dfc4c05582c971c09dfe48cb8d55d16bb6da0dcc4e839f7c516a2d35" Feb 17 16:27:00 crc kubenswrapper[4874]: I0217 16:27:00.067277 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:27:00 crc kubenswrapper[4874]: I0217 16:27:00.092809 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.092791693 podStartE2EDuration="2.092791693s" podCreationTimestamp="2026-02-17 16:26:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:27:00.092172248 +0000 UTC m=+1430.386560819" watchObservedRunningTime="2026-02-17 16:27:00.092791693 +0000 UTC m=+1430.387180254" Feb 17 16:27:00 crc kubenswrapper[4874]: I0217 16:27:00.151214 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:27:00 crc kubenswrapper[4874]: I0217 16:27:00.165457 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:27:00 crc kubenswrapper[4874]: I0217 16:27:00.183295 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:27:00 crc kubenswrapper[4874]: I0217 16:27:00.185231 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:27:00 crc kubenswrapper[4874]: I0217 16:27:00.196483 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:27:00 crc kubenswrapper[4874]: I0217 16:27:00.196580 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 17 16:27:00 crc kubenswrapper[4874]: I0217 16:27:00.354594 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkdp6\" (UniqueName: \"kubernetes.io/projected/5ef5b1fe-9e55-4310-b49b-75334cac9bb7-kube-api-access-mkdp6\") pod \"nova-scheduler-0\" (UID: \"5ef5b1fe-9e55-4310-b49b-75334cac9bb7\") " pod="openstack/nova-scheduler-0" Feb 17 16:27:00 crc kubenswrapper[4874]: I0217 16:27:00.354643 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ef5b1fe-9e55-4310-b49b-75334cac9bb7-config-data\") pod \"nova-scheduler-0\" (UID: \"5ef5b1fe-9e55-4310-b49b-75334cac9bb7\") " pod="openstack/nova-scheduler-0" Feb 17 16:27:00 crc kubenswrapper[4874]: I0217 16:27:00.354736 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ef5b1fe-9e55-4310-b49b-75334cac9bb7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5ef5b1fe-9e55-4310-b49b-75334cac9bb7\") " pod="openstack/nova-scheduler-0" Feb 17 16:27:00 crc kubenswrapper[4874]: I0217 16:27:00.446839 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-dnmbf"] Feb 17 16:27:00 crc kubenswrapper[4874]: W0217 16:27:00.447435 4874 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode293c523_929f_4d2e_bf96_091cbed7f12b.slice/crio-3caf88d289e57a7e893cdaf8e791351a4b57d94b63afd425e596b646d832761c WatchSource:0}: Error finding container 3caf88d289e57a7e893cdaf8e791351a4b57d94b63afd425e596b646d832761c: Status 404 returned error can't find the container with id 3caf88d289e57a7e893cdaf8e791351a4b57d94b63afd425e596b646d832761c Feb 17 16:27:00 crc kubenswrapper[4874]: I0217 16:27:00.457594 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkdp6\" (UniqueName: \"kubernetes.io/projected/5ef5b1fe-9e55-4310-b49b-75334cac9bb7-kube-api-access-mkdp6\") pod \"nova-scheduler-0\" (UID: \"5ef5b1fe-9e55-4310-b49b-75334cac9bb7\") " pod="openstack/nova-scheduler-0" Feb 17 16:27:00 crc kubenswrapper[4874]: I0217 16:27:00.457653 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ef5b1fe-9e55-4310-b49b-75334cac9bb7-config-data\") pod \"nova-scheduler-0\" (UID: \"5ef5b1fe-9e55-4310-b49b-75334cac9bb7\") " pod="openstack/nova-scheduler-0" Feb 17 16:27:00 crc kubenswrapper[4874]: I0217 16:27:00.457751 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ef5b1fe-9e55-4310-b49b-75334cac9bb7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5ef5b1fe-9e55-4310-b49b-75334cac9bb7\") " pod="openstack/nova-scheduler-0" Feb 17 16:27:00 crc kubenswrapper[4874]: I0217 16:27:00.470012 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ef5b1fe-9e55-4310-b49b-75334cac9bb7-config-data\") pod \"nova-scheduler-0\" (UID: \"5ef5b1fe-9e55-4310-b49b-75334cac9bb7\") " pod="openstack/nova-scheduler-0" Feb 17 16:27:00 crc kubenswrapper[4874]: I0217 16:27:00.470125 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ef5b1fe-9e55-4310-b49b-75334cac9bb7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5ef5b1fe-9e55-4310-b49b-75334cac9bb7\") " pod="openstack/nova-scheduler-0" Feb 17 16:27:00 crc kubenswrapper[4874]: I0217 16:27:00.475540 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b3d858a-3158-4d4b-81d3-ef898bb8695f" path="/var/lib/kubelet/pods/3b3d858a-3158-4d4b-81d3-ef898bb8695f/volumes" Feb 17 16:27:00 crc kubenswrapper[4874]: I0217 16:27:00.481988 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkdp6\" (UniqueName: \"kubernetes.io/projected/5ef5b1fe-9e55-4310-b49b-75334cac9bb7-kube-api-access-mkdp6\") pod \"nova-scheduler-0\" (UID: \"5ef5b1fe-9e55-4310-b49b-75334cac9bb7\") " pod="openstack/nova-scheduler-0" Feb 17 16:27:00 crc kubenswrapper[4874]: I0217 16:27:00.506883 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:27:01 crc kubenswrapper[4874]: I0217 16:27:01.007766 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:27:01 crc kubenswrapper[4874]: I0217 16:27:01.081373 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-dnmbf" event={"ID":"e293c523-929f-4d2e-bf96-091cbed7f12b","Type":"ContainerStarted","Data":"3caf88d289e57a7e893cdaf8e791351a4b57d94b63afd425e596b646d832761c"} Feb 17 16:27:01 crc kubenswrapper[4874]: I0217 16:27:01.082589 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5ef5b1fe-9e55-4310-b49b-75334cac9bb7","Type":"ContainerStarted","Data":"75b5a2431674859079367a9d64cda1964d55989d47c84a306148f661f60d486d"} Feb 17 16:27:01 crc kubenswrapper[4874]: I0217 16:27:01.431242 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 16:27:01 crc kubenswrapper[4874]: I0217 
16:27:01.431562 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 16:27:02 crc kubenswrapper[4874]: I0217 16:27:02.096467 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5ef5b1fe-9e55-4310-b49b-75334cac9bb7","Type":"ContainerStarted","Data":"2b396139b3ab54668f220ada16a5b77714915b0727a2f9b6278e943319aa416d"} Feb 17 16:27:02 crc kubenswrapper[4874]: I0217 16:27:02.127041 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.127023959 podStartE2EDuration="2.127023959s" podCreationTimestamp="2026-02-17 16:27:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:27:02.121629337 +0000 UTC m=+1432.416017898" watchObservedRunningTime="2026-02-17 16:27:02.127023959 +0000 UTC m=+1432.421412520" Feb 17 16:27:03 crc kubenswrapper[4874]: E0217 16:27:03.788105 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4327f121_2ddc_4367_9055_17c7fe4d855e.slice/crio-4c01c80ff75f545c7ad24c26f26d427f62b8c88c7db6a4f7544ae7b749530ed3\": RecentStats: unable to find data in memory cache]" Feb 17 16:27:04 crc kubenswrapper[4874]: I0217 16:27:04.291551 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 17 16:27:05 crc kubenswrapper[4874]: I0217 16:27:05.507004 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 17 16:27:06 crc kubenswrapper[4874]: I0217 16:27:06.428931 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 16:27:06 crc kubenswrapper[4874]: I0217 16:27:06.429471 4874 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 16:27:07 crc kubenswrapper[4874]: I0217 16:27:07.441277 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b101148a-34d1-4cff-949a-0432ee3225b1" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.246:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 16:27:07 crc kubenswrapper[4874]: I0217 16:27:07.441267 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b101148a-34d1-4cff-949a-0432ee3225b1" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.246:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 16:27:08 crc kubenswrapper[4874]: I0217 16:27:08.424755 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 16:27:08 crc kubenswrapper[4874]: I0217 16:27:08.425036 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 16:27:09 crc kubenswrapper[4874]: I0217 16:27:09.188943 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-dnmbf" event={"ID":"e293c523-929f-4d2e-bf96-091cbed7f12b","Type":"ContainerStarted","Data":"ecd3d808bcf9c54fbf8c3b38c1e22eae51f02e04d27ad9b143fc9770921f5ed8"} Feb 17 16:27:09 crc kubenswrapper[4874]: I0217 16:27:09.224513 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-dnmbf" podStartSLOduration=2.498280272 podStartE2EDuration="10.224485805s" podCreationTimestamp="2026-02-17 16:26:59 +0000 UTC" firstStartedPulling="2026-02-17 16:27:00.450053809 +0000 UTC m=+1430.744442370" lastFinishedPulling="2026-02-17 16:27:08.176259342 +0000 UTC m=+1438.470647903" observedRunningTime="2026-02-17 16:27:09.204280582 +0000 UTC m=+1439.498669143" 
watchObservedRunningTime="2026-02-17 16:27:09.224485805 +0000 UTC m=+1439.518874396" Feb 17 16:27:09 crc kubenswrapper[4874]: I0217 16:27:09.507246 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d03bdb33-2317-487a-9566-10fbe37a9bc4" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.247:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 16:27:09 crc kubenswrapper[4874]: I0217 16:27:09.507608 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d03bdb33-2317-487a-9566-10fbe37a9bc4" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.247:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 16:27:10 crc kubenswrapper[4874]: I0217 16:27:10.510471 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 17 16:27:10 crc kubenswrapper[4874]: I0217 16:27:10.554373 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 17 16:27:11 crc kubenswrapper[4874]: I0217 16:27:11.246187 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 17 16:27:12 crc kubenswrapper[4874]: I0217 16:27:12.224975 4874 generic.go:334] "Generic (PLEG): container finished" podID="e293c523-929f-4d2e-bf96-091cbed7f12b" containerID="ecd3d808bcf9c54fbf8c3b38c1e22eae51f02e04d27ad9b143fc9770921f5ed8" exitCode=0 Feb 17 16:27:12 crc kubenswrapper[4874]: I0217 16:27:12.225050 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-dnmbf" event={"ID":"e293c523-929f-4d2e-bf96-091cbed7f12b","Type":"ContainerDied","Data":"ecd3d808bcf9c54fbf8c3b38c1e22eae51f02e04d27ad9b143fc9770921f5ed8"} Feb 17 16:27:13 crc kubenswrapper[4874]: I0217 16:27:13.672907 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-dnmbf" Feb 17 16:27:13 crc kubenswrapper[4874]: I0217 16:27:13.718796 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgpbx\" (UniqueName: \"kubernetes.io/projected/e293c523-929f-4d2e-bf96-091cbed7f12b-kube-api-access-pgpbx\") pod \"e293c523-929f-4d2e-bf96-091cbed7f12b\" (UID: \"e293c523-929f-4d2e-bf96-091cbed7f12b\") " Feb 17 16:27:13 crc kubenswrapper[4874]: I0217 16:27:13.719226 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e293c523-929f-4d2e-bf96-091cbed7f12b-combined-ca-bundle\") pod \"e293c523-929f-4d2e-bf96-091cbed7f12b\" (UID: \"e293c523-929f-4d2e-bf96-091cbed7f12b\") " Feb 17 16:27:13 crc kubenswrapper[4874]: I0217 16:27:13.719253 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e293c523-929f-4d2e-bf96-091cbed7f12b-config-data\") pod \"e293c523-929f-4d2e-bf96-091cbed7f12b\" (UID: \"e293c523-929f-4d2e-bf96-091cbed7f12b\") " Feb 17 16:27:13 crc kubenswrapper[4874]: I0217 16:27:13.719499 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e293c523-929f-4d2e-bf96-091cbed7f12b-scripts\") pod \"e293c523-929f-4d2e-bf96-091cbed7f12b\" (UID: \"e293c523-929f-4d2e-bf96-091cbed7f12b\") " Feb 17 16:27:13 crc kubenswrapper[4874]: I0217 16:27:13.734360 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e293c523-929f-4d2e-bf96-091cbed7f12b-scripts" (OuterVolumeSpecName: "scripts") pod "e293c523-929f-4d2e-bf96-091cbed7f12b" (UID: "e293c523-929f-4d2e-bf96-091cbed7f12b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:27:13 crc kubenswrapper[4874]: I0217 16:27:13.742389 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e293c523-929f-4d2e-bf96-091cbed7f12b-kube-api-access-pgpbx" (OuterVolumeSpecName: "kube-api-access-pgpbx") pod "e293c523-929f-4d2e-bf96-091cbed7f12b" (UID: "e293c523-929f-4d2e-bf96-091cbed7f12b"). InnerVolumeSpecName "kube-api-access-pgpbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:27:13 crc kubenswrapper[4874]: I0217 16:27:13.762680 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e293c523-929f-4d2e-bf96-091cbed7f12b-config-data" (OuterVolumeSpecName: "config-data") pod "e293c523-929f-4d2e-bf96-091cbed7f12b" (UID: "e293c523-929f-4d2e-bf96-091cbed7f12b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:27:13 crc kubenswrapper[4874]: I0217 16:27:13.764418 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e293c523-929f-4d2e-bf96-091cbed7f12b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e293c523-929f-4d2e-bf96-091cbed7f12b" (UID: "e293c523-929f-4d2e-bf96-091cbed7f12b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:27:13 crc kubenswrapper[4874]: I0217 16:27:13.823901 4874 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e293c523-929f-4d2e-bf96-091cbed7f12b-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:13 crc kubenswrapper[4874]: I0217 16:27:13.823937 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgpbx\" (UniqueName: \"kubernetes.io/projected/e293c523-929f-4d2e-bf96-091cbed7f12b-kube-api-access-pgpbx\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:13 crc kubenswrapper[4874]: I0217 16:27:13.823950 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e293c523-929f-4d2e-bf96-091cbed7f12b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:13 crc kubenswrapper[4874]: I0217 16:27:13.823959 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e293c523-929f-4d2e-bf96-091cbed7f12b-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:14 crc kubenswrapper[4874]: I0217 16:27:14.251711 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-dnmbf" event={"ID":"e293c523-929f-4d2e-bf96-091cbed7f12b","Type":"ContainerDied","Data":"3caf88d289e57a7e893cdaf8e791351a4b57d94b63afd425e596b646d832761c"} Feb 17 16:27:14 crc kubenswrapper[4874]: I0217 16:27:14.251775 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-dnmbf" Feb 17 16:27:14 crc kubenswrapper[4874]: I0217 16:27:14.251880 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3caf88d289e57a7e893cdaf8e791351a4b57d94b63afd425e596b646d832761c" Feb 17 16:27:16 crc kubenswrapper[4874]: I0217 16:27:16.434673 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 17 16:27:16 crc kubenswrapper[4874]: I0217 16:27:16.440504 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 17 16:27:16 crc kubenswrapper[4874]: I0217 16:27:16.446458 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.004912 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.087028 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.219529 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3ed2c18-8df0-435d-a3b1-056be5a94c20-config-data\") pod \"d3ed2c18-8df0-435d-a3b1-056be5a94c20\" (UID: \"d3ed2c18-8df0-435d-a3b1-056be5a94c20\") " Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.219756 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ed2c18-8df0-435d-a3b1-056be5a94c20-combined-ca-bundle\") pod \"d3ed2c18-8df0-435d-a3b1-056be5a94c20\" (UID: \"d3ed2c18-8df0-435d-a3b1-056be5a94c20\") " Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.219905 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wx64c\" (UniqueName: \"kubernetes.io/projected/d3ed2c18-8df0-435d-a3b1-056be5a94c20-kube-api-access-wx64c\") pod \"d3ed2c18-8df0-435d-a3b1-056be5a94c20\" (UID: \"d3ed2c18-8df0-435d-a3b1-056be5a94c20\") " Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.231239 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3ed2c18-8df0-435d-a3b1-056be5a94c20-kube-api-access-wx64c" (OuterVolumeSpecName: "kube-api-access-wx64c") pod "d3ed2c18-8df0-435d-a3b1-056be5a94c20" (UID: "d3ed2c18-8df0-435d-a3b1-056be5a94c20"). InnerVolumeSpecName "kube-api-access-wx64c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.259424 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ed2c18-8df0-435d-a3b1-056be5a94c20-config-data" (OuterVolumeSpecName: "config-data") pod "d3ed2c18-8df0-435d-a3b1-056be5a94c20" (UID: "d3ed2c18-8df0-435d-a3b1-056be5a94c20"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.262864 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ed2c18-8df0-435d-a3b1-056be5a94c20-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3ed2c18-8df0-435d-a3b1-056be5a94c20" (UID: "d3ed2c18-8df0-435d-a3b1-056be5a94c20"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.283506 4874 generic.go:334] "Generic (PLEG): container finished" podID="d3ed2c18-8df0-435d-a3b1-056be5a94c20" containerID="3d95d14f03cef6025f06a50158f8e5f6a5d37b81f1d31e493617f85715d3a3fa" exitCode=137 Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.284358 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.285307 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d3ed2c18-8df0-435d-a3b1-056be5a94c20","Type":"ContainerDied","Data":"3d95d14f03cef6025f06a50158f8e5f6a5d37b81f1d31e493617f85715d3a3fa"} Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.285353 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d3ed2c18-8df0-435d-a3b1-056be5a94c20","Type":"ContainerDied","Data":"ce600c6c2334249eacb239c1004b41ca4fb4534da85a1d84c487aa256419ae68"} Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.285373 4874 scope.go:117] "RemoveContainer" containerID="3d95d14f03cef6025f06a50158f8e5f6a5d37b81f1d31e493617f85715d3a3fa" Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.300245 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.322700 4874 reconciler_common.go:293] 
"Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ed2c18-8df0-435d-a3b1-056be5a94c20-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.322731 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wx64c\" (UniqueName: \"kubernetes.io/projected/d3ed2c18-8df0-435d-a3b1-056be5a94c20-kube-api-access-wx64c\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.322741 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3ed2c18-8df0-435d-a3b1-056be5a94c20-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.392564 4874 scope.go:117] "RemoveContainer" containerID="3d95d14f03cef6025f06a50158f8e5f6a5d37b81f1d31e493617f85715d3a3fa" Feb 17 16:27:17 crc kubenswrapper[4874]: E0217 16:27:17.395301 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d95d14f03cef6025f06a50158f8e5f6a5d37b81f1d31e493617f85715d3a3fa\": container with ID starting with 3d95d14f03cef6025f06a50158f8e5f6a5d37b81f1d31e493617f85715d3a3fa not found: ID does not exist" containerID="3d95d14f03cef6025f06a50158f8e5f6a5d37b81f1d31e493617f85715d3a3fa" Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.395351 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d95d14f03cef6025f06a50158f8e5f6a5d37b81f1d31e493617f85715d3a3fa"} err="failed to get container status \"3d95d14f03cef6025f06a50158f8e5f6a5d37b81f1d31e493617f85715d3a3fa\": rpc error: code = NotFound desc = could not find container \"3d95d14f03cef6025f06a50158f8e5f6a5d37b81f1d31e493617f85715d3a3fa\": container with ID starting with 3d95d14f03cef6025f06a50158f8e5f6a5d37b81f1d31e493617f85715d3a3fa not found: ID does not exist" Feb 17 16:27:17 crc kubenswrapper[4874]: 
I0217 16:27:17.415654 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.432055 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.447369 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:27:17 crc kubenswrapper[4874]: E0217 16:27:17.447986 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3ed2c18-8df0-435d-a3b1-056be5a94c20" containerName="nova-cell1-novncproxy-novncproxy" Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.448007 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3ed2c18-8df0-435d-a3b1-056be5a94c20" containerName="nova-cell1-novncproxy-novncproxy" Feb 17 16:27:17 crc kubenswrapper[4874]: E0217 16:27:17.448051 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e293c523-929f-4d2e-bf96-091cbed7f12b" containerName="aodh-db-sync" Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.448059 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="e293c523-929f-4d2e-bf96-091cbed7f12b" containerName="aodh-db-sync" Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.448325 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="e293c523-929f-4d2e-bf96-091cbed7f12b" containerName="aodh-db-sync" Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.448365 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3ed2c18-8df0-435d-a3b1-056be5a94c20" containerName="nova-cell1-novncproxy-novncproxy" Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.449348 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.452664 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.452923 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.453089 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.466645 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.528382 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spwpj\" (UniqueName: \"kubernetes.io/projected/c485f7e2-b876-413e-99c2-f67cd5ecd092-kube-api-access-spwpj\") pod \"nova-cell1-novncproxy-0\" (UID: \"c485f7e2-b876-413e-99c2-f67cd5ecd092\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.528864 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c485f7e2-b876-413e-99c2-f67cd5ecd092-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c485f7e2-b876-413e-99c2-f67cd5ecd092\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.529131 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c485f7e2-b876-413e-99c2-f67cd5ecd092-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c485f7e2-b876-413e-99c2-f67cd5ecd092\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:27:17 crc 
kubenswrapper[4874]: I0217 16:27:17.529297 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c485f7e2-b876-413e-99c2-f67cd5ecd092-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c485f7e2-b876-413e-99c2-f67cd5ecd092\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.530360 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c485f7e2-b876-413e-99c2-f67cd5ecd092-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c485f7e2-b876-413e-99c2-f67cd5ecd092\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.633380 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c485f7e2-b876-413e-99c2-f67cd5ecd092-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c485f7e2-b876-413e-99c2-f67cd5ecd092\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.633535 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c485f7e2-b876-413e-99c2-f67cd5ecd092-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c485f7e2-b876-413e-99c2-f67cd5ecd092\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.633651 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spwpj\" (UniqueName: \"kubernetes.io/projected/c485f7e2-b876-413e-99c2-f67cd5ecd092-kube-api-access-spwpj\") pod \"nova-cell1-novncproxy-0\" (UID: \"c485f7e2-b876-413e-99c2-f67cd5ecd092\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:27:17 crc 
kubenswrapper[4874]: I0217 16:27:17.633761 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c485f7e2-b876-413e-99c2-f67cd5ecd092-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c485f7e2-b876-413e-99c2-f67cd5ecd092\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.633827 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c485f7e2-b876-413e-99c2-f67cd5ecd092-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c485f7e2-b876-413e-99c2-f67cd5ecd092\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.637380 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c485f7e2-b876-413e-99c2-f67cd5ecd092-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c485f7e2-b876-413e-99c2-f67cd5ecd092\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.638553 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c485f7e2-b876-413e-99c2-f67cd5ecd092-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c485f7e2-b876-413e-99c2-f67cd5ecd092\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.638607 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c485f7e2-b876-413e-99c2-f67cd5ecd092-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c485f7e2-b876-413e-99c2-f67cd5ecd092\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.639010 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c485f7e2-b876-413e-99c2-f67cd5ecd092-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c485f7e2-b876-413e-99c2-f67cd5ecd092\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.649807 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spwpj\" (UniqueName: \"kubernetes.io/projected/c485f7e2-b876-413e-99c2-f67cd5ecd092-kube-api-access-spwpj\") pod \"nova-cell1-novncproxy-0\" (UID: \"c485f7e2-b876-413e-99c2-f67cd5ecd092\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:27:17 crc kubenswrapper[4874]: I0217 16:27:17.770467 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:27:18 crc kubenswrapper[4874]: W0217 16:27:18.258289 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc485f7e2_b876_413e_99c2_f67cd5ecd092.slice/crio-310256946d0312a90ef597533b1cb86ee195dd0560bd035d309320092f71614e WatchSource:0}: Error finding container 310256946d0312a90ef597533b1cb86ee195dd0560bd035d309320092f71614e: Status 404 returned error can't find the container with id 310256946d0312a90ef597533b1cb86ee195dd0560bd035d309320092f71614e Feb 17 16:27:18 crc kubenswrapper[4874]: I0217 16:27:18.261027 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:27:18 crc kubenswrapper[4874]: I0217 16:27:18.301970 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c485f7e2-b876-413e-99c2-f67cd5ecd092","Type":"ContainerStarted","Data":"310256946d0312a90ef597533b1cb86ee195dd0560bd035d309320092f71614e"} Feb 17 16:27:18 crc kubenswrapper[4874]: I0217 16:27:18.428366 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 17 16:27:18 crc 
kubenswrapper[4874]: I0217 16:27:18.428761 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 16:27:18 crc kubenswrapper[4874]: I0217 16:27:18.428992 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 17 16:27:18 crc kubenswrapper[4874]: I0217 16:27:18.445799 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 17 16:27:18 crc kubenswrapper[4874]: I0217 16:27:18.479436 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3ed2c18-8df0-435d-a3b1-056be5a94c20" path="/var/lib/kubelet/pods/d3ed2c18-8df0-435d-a3b1-056be5a94c20/volumes" Feb 17 16:27:18 crc kubenswrapper[4874]: I0217 16:27:18.989380 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 17 16:27:18 crc kubenswrapper[4874]: I0217 16:27:18.995416 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 17 16:27:18 crc kubenswrapper[4874]: I0217 16:27:18.997663 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-lsrl9" Feb 17 16:27:18 crc kubenswrapper[4874]: I0217 16:27:18.997782 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 17 16:27:18 crc kubenswrapper[4874]: I0217 16:27:18.998967 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 17 16:27:19 crc kubenswrapper[4874]: I0217 16:27:19.020377 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 17 16:27:19 crc kubenswrapper[4874]: I0217 16:27:19.072666 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53a7ab1d-25fe-4d79-9778-fe644b1e97b8-scripts\") pod \"aodh-0\" (UID: \"53a7ab1d-25fe-4d79-9778-fe644b1e97b8\") " 
pod="openstack/aodh-0" Feb 17 16:27:19 crc kubenswrapper[4874]: I0217 16:27:19.073009 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a7ab1d-25fe-4d79-9778-fe644b1e97b8-combined-ca-bundle\") pod \"aodh-0\" (UID: \"53a7ab1d-25fe-4d79-9778-fe644b1e97b8\") " pod="openstack/aodh-0" Feb 17 16:27:19 crc kubenswrapper[4874]: I0217 16:27:19.073120 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53a7ab1d-25fe-4d79-9778-fe644b1e97b8-config-data\") pod \"aodh-0\" (UID: \"53a7ab1d-25fe-4d79-9778-fe644b1e97b8\") " pod="openstack/aodh-0" Feb 17 16:27:19 crc kubenswrapper[4874]: I0217 16:27:19.073203 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw5h6\" (UniqueName: \"kubernetes.io/projected/53a7ab1d-25fe-4d79-9778-fe644b1e97b8-kube-api-access-pw5h6\") pod \"aodh-0\" (UID: \"53a7ab1d-25fe-4d79-9778-fe644b1e97b8\") " pod="openstack/aodh-0" Feb 17 16:27:19 crc kubenswrapper[4874]: I0217 16:27:19.176970 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53a7ab1d-25fe-4d79-9778-fe644b1e97b8-scripts\") pod \"aodh-0\" (UID: \"53a7ab1d-25fe-4d79-9778-fe644b1e97b8\") " pod="openstack/aodh-0" Feb 17 16:27:19 crc kubenswrapper[4874]: I0217 16:27:19.177098 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a7ab1d-25fe-4d79-9778-fe644b1e97b8-combined-ca-bundle\") pod \"aodh-0\" (UID: \"53a7ab1d-25fe-4d79-9778-fe644b1e97b8\") " pod="openstack/aodh-0" Feb 17 16:27:19 crc kubenswrapper[4874]: I0217 16:27:19.177135 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/53a7ab1d-25fe-4d79-9778-fe644b1e97b8-config-data\") pod \"aodh-0\" (UID: \"53a7ab1d-25fe-4d79-9778-fe644b1e97b8\") " pod="openstack/aodh-0" Feb 17 16:27:19 crc kubenswrapper[4874]: I0217 16:27:19.177164 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw5h6\" (UniqueName: \"kubernetes.io/projected/53a7ab1d-25fe-4d79-9778-fe644b1e97b8-kube-api-access-pw5h6\") pod \"aodh-0\" (UID: \"53a7ab1d-25fe-4d79-9778-fe644b1e97b8\") " pod="openstack/aodh-0" Feb 17 16:27:19 crc kubenswrapper[4874]: I0217 16:27:19.186682 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a7ab1d-25fe-4d79-9778-fe644b1e97b8-combined-ca-bundle\") pod \"aodh-0\" (UID: \"53a7ab1d-25fe-4d79-9778-fe644b1e97b8\") " pod="openstack/aodh-0" Feb 17 16:27:19 crc kubenswrapper[4874]: I0217 16:27:19.195089 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53a7ab1d-25fe-4d79-9778-fe644b1e97b8-scripts\") pod \"aodh-0\" (UID: \"53a7ab1d-25fe-4d79-9778-fe644b1e97b8\") " pod="openstack/aodh-0" Feb 17 16:27:19 crc kubenswrapper[4874]: I0217 16:27:19.197541 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw5h6\" (UniqueName: \"kubernetes.io/projected/53a7ab1d-25fe-4d79-9778-fe644b1e97b8-kube-api-access-pw5h6\") pod \"aodh-0\" (UID: \"53a7ab1d-25fe-4d79-9778-fe644b1e97b8\") " pod="openstack/aodh-0" Feb 17 16:27:19 crc kubenswrapper[4874]: I0217 16:27:19.197692 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53a7ab1d-25fe-4d79-9778-fe644b1e97b8-config-data\") pod \"aodh-0\" (UID: \"53a7ab1d-25fe-4d79-9778-fe644b1e97b8\") " pod="openstack/aodh-0" Feb 17 16:27:19 crc kubenswrapper[4874]: I0217 16:27:19.324727 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c485f7e2-b876-413e-99c2-f67cd5ecd092","Type":"ContainerStarted","Data":"af2d18e71a42ac300bedca2412cefc152dea194439ed13b8ce31be49cd016051"} Feb 17 16:27:19 crc kubenswrapper[4874]: I0217 16:27:19.325395 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 16:27:19 crc kubenswrapper[4874]: I0217 16:27:19.328840 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 17 16:27:19 crc kubenswrapper[4874]: I0217 16:27:19.332740 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 17 16:27:19 crc kubenswrapper[4874]: I0217 16:27:19.377152 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.377128354 podStartE2EDuration="2.377128354s" podCreationTimestamp="2026-02-17 16:27:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:27:19.347338297 +0000 UTC m=+1449.641726868" watchObservedRunningTime="2026-02-17 16:27:19.377128354 +0000 UTC m=+1449.671516925" Feb 17 16:27:19 crc kubenswrapper[4874]: I0217 16:27:19.715025 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-srfbf"] Feb 17 16:27:19 crc kubenswrapper[4874]: I0217 16:27:19.756661 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-srfbf" Feb 17 16:27:19 crc kubenswrapper[4874]: I0217 16:27:19.804914 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-srfbf"] Feb 17 16:27:19 crc kubenswrapper[4874]: I0217 16:27:19.912029 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/440002d4-28a6-4e11-b188-1921f660e282-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-srfbf\" (UID: \"440002d4-28a6-4e11-b188-1921f660e282\") " pod="openstack/dnsmasq-dns-f84f9ccf-srfbf" Feb 17 16:27:19 crc kubenswrapper[4874]: I0217 16:27:19.912114 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/440002d4-28a6-4e11-b188-1921f660e282-config\") pod \"dnsmasq-dns-f84f9ccf-srfbf\" (UID: \"440002d4-28a6-4e11-b188-1921f660e282\") " pod="openstack/dnsmasq-dns-f84f9ccf-srfbf" Feb 17 16:27:19 crc kubenswrapper[4874]: I0217 16:27:19.912257 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/440002d4-28a6-4e11-b188-1921f660e282-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-srfbf\" (UID: \"440002d4-28a6-4e11-b188-1921f660e282\") " pod="openstack/dnsmasq-dns-f84f9ccf-srfbf" Feb 17 16:27:19 crc kubenswrapper[4874]: I0217 16:27:19.912397 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/440002d4-28a6-4e11-b188-1921f660e282-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-srfbf\" (UID: \"440002d4-28a6-4e11-b188-1921f660e282\") " pod="openstack/dnsmasq-dns-f84f9ccf-srfbf" Feb 17 16:27:19 crc kubenswrapper[4874]: I0217 16:27:19.912426 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-ckdst\" (UniqueName: \"kubernetes.io/projected/440002d4-28a6-4e11-b188-1921f660e282-kube-api-access-ckdst\") pod \"dnsmasq-dns-f84f9ccf-srfbf\" (UID: \"440002d4-28a6-4e11-b188-1921f660e282\") " pod="openstack/dnsmasq-dns-f84f9ccf-srfbf" Feb 17 16:27:19 crc kubenswrapper[4874]: I0217 16:27:19.912459 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/440002d4-28a6-4e11-b188-1921f660e282-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-srfbf\" (UID: \"440002d4-28a6-4e11-b188-1921f660e282\") " pod="openstack/dnsmasq-dns-f84f9ccf-srfbf" Feb 17 16:27:20 crc kubenswrapper[4874]: I0217 16:27:20.014119 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/440002d4-28a6-4e11-b188-1921f660e282-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-srfbf\" (UID: \"440002d4-28a6-4e11-b188-1921f660e282\") " pod="openstack/dnsmasq-dns-f84f9ccf-srfbf" Feb 17 16:27:20 crc kubenswrapper[4874]: I0217 16:27:20.014178 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckdst\" (UniqueName: \"kubernetes.io/projected/440002d4-28a6-4e11-b188-1921f660e282-kube-api-access-ckdst\") pod \"dnsmasq-dns-f84f9ccf-srfbf\" (UID: \"440002d4-28a6-4e11-b188-1921f660e282\") " pod="openstack/dnsmasq-dns-f84f9ccf-srfbf" Feb 17 16:27:20 crc kubenswrapper[4874]: I0217 16:27:20.014212 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/440002d4-28a6-4e11-b188-1921f660e282-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-srfbf\" (UID: \"440002d4-28a6-4e11-b188-1921f660e282\") " pod="openstack/dnsmasq-dns-f84f9ccf-srfbf" Feb 17 16:27:20 crc kubenswrapper[4874]: I0217 16:27:20.014266 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/440002d4-28a6-4e11-b188-1921f660e282-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-srfbf\" (UID: \"440002d4-28a6-4e11-b188-1921f660e282\") " pod="openstack/dnsmasq-dns-f84f9ccf-srfbf" Feb 17 16:27:20 crc kubenswrapper[4874]: I0217 16:27:20.014298 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/440002d4-28a6-4e11-b188-1921f660e282-config\") pod \"dnsmasq-dns-f84f9ccf-srfbf\" (UID: \"440002d4-28a6-4e11-b188-1921f660e282\") " pod="openstack/dnsmasq-dns-f84f9ccf-srfbf" Feb 17 16:27:20 crc kubenswrapper[4874]: I0217 16:27:20.014387 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/440002d4-28a6-4e11-b188-1921f660e282-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-srfbf\" (UID: \"440002d4-28a6-4e11-b188-1921f660e282\") " pod="openstack/dnsmasq-dns-f84f9ccf-srfbf" Feb 17 16:27:20 crc kubenswrapper[4874]: I0217 16:27:20.015032 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/440002d4-28a6-4e11-b188-1921f660e282-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-srfbf\" (UID: \"440002d4-28a6-4e11-b188-1921f660e282\") " pod="openstack/dnsmasq-dns-f84f9ccf-srfbf" Feb 17 16:27:20 crc kubenswrapper[4874]: I0217 16:27:20.015341 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/440002d4-28a6-4e11-b188-1921f660e282-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-srfbf\" (UID: \"440002d4-28a6-4e11-b188-1921f660e282\") " pod="openstack/dnsmasq-dns-f84f9ccf-srfbf" Feb 17 16:27:20 crc kubenswrapper[4874]: I0217 16:27:20.015444 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/440002d4-28a6-4e11-b188-1921f660e282-config\") pod 
\"dnsmasq-dns-f84f9ccf-srfbf\" (UID: \"440002d4-28a6-4e11-b188-1921f660e282\") " pod="openstack/dnsmasq-dns-f84f9ccf-srfbf" Feb 17 16:27:20 crc kubenswrapper[4874]: I0217 16:27:20.015542 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/440002d4-28a6-4e11-b188-1921f660e282-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-srfbf\" (UID: \"440002d4-28a6-4e11-b188-1921f660e282\") " pod="openstack/dnsmasq-dns-f84f9ccf-srfbf" Feb 17 16:27:20 crc kubenswrapper[4874]: I0217 16:27:20.018589 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/440002d4-28a6-4e11-b188-1921f660e282-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-srfbf\" (UID: \"440002d4-28a6-4e11-b188-1921f660e282\") " pod="openstack/dnsmasq-dns-f84f9ccf-srfbf" Feb 17 16:27:20 crc kubenswrapper[4874]: I0217 16:27:20.041250 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckdst\" (UniqueName: \"kubernetes.io/projected/440002d4-28a6-4e11-b188-1921f660e282-kube-api-access-ckdst\") pod \"dnsmasq-dns-f84f9ccf-srfbf\" (UID: \"440002d4-28a6-4e11-b188-1921f660e282\") " pod="openstack/dnsmasq-dns-f84f9ccf-srfbf" Feb 17 16:27:20 crc kubenswrapper[4874]: I0217 16:27:20.118582 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-srfbf" Feb 17 16:27:20 crc kubenswrapper[4874]: I0217 16:27:20.161790 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 17 16:27:20 crc kubenswrapper[4874]: W0217 16:27:20.171776 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53a7ab1d_25fe_4d79_9778_fe644b1e97b8.slice/crio-5b3e3aa5bdf56652f19c92feafaeb2f082b6aafcb565058e2e73119cd206b658 WatchSource:0}: Error finding container 5b3e3aa5bdf56652f19c92feafaeb2f082b6aafcb565058e2e73119cd206b658: Status 404 returned error can't find the container with id 5b3e3aa5bdf56652f19c92feafaeb2f082b6aafcb565058e2e73119cd206b658 Feb 17 16:27:20 crc kubenswrapper[4874]: I0217 16:27:20.357181 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"53a7ab1d-25fe-4d79-9778-fe644b1e97b8","Type":"ContainerStarted","Data":"5b3e3aa5bdf56652f19c92feafaeb2f082b6aafcb565058e2e73119cd206b658"} Feb 17 16:27:20 crc kubenswrapper[4874]: I0217 16:27:20.648577 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-srfbf"] Feb 17 16:27:21 crc kubenswrapper[4874]: I0217 16:27:21.368307 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"53a7ab1d-25fe-4d79-9778-fe644b1e97b8","Type":"ContainerStarted","Data":"93972332b50e160e05cf88dadcd3682f9b79107fbb93c98d474cd76ab5234dc7"} Feb 17 16:27:21 crc kubenswrapper[4874]: I0217 16:27:21.370587 4874 generic.go:334] "Generic (PLEG): container finished" podID="440002d4-28a6-4e11-b188-1921f660e282" containerID="03e91dd6c266c94fcd08974d37801ad2931dd121d721ad0f8a3ff60bc09cc5f8" exitCode=0 Feb 17 16:27:21 crc kubenswrapper[4874]: I0217 16:27:21.370684 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-srfbf" 
event={"ID":"440002d4-28a6-4e11-b188-1921f660e282","Type":"ContainerDied","Data":"03e91dd6c266c94fcd08974d37801ad2931dd121d721ad0f8a3ff60bc09cc5f8"} Feb 17 16:27:21 crc kubenswrapper[4874]: I0217 16:27:21.370713 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-srfbf" event={"ID":"440002d4-28a6-4e11-b188-1921f660e282","Type":"ContainerStarted","Data":"b2edc27d55280e97f3f2ccce93e4e83990d358fd63bf5a8ab12f85aedc36a92f"} Feb 17 16:27:21 crc kubenswrapper[4874]: I0217 16:27:21.930430 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:27:21 crc kubenswrapper[4874]: I0217 16:27:21.931121 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fff55ae8-1688-4f99-859d-3497b3cf851f" containerName="proxy-httpd" containerID="cri-o://15d4bf913c291fae0241e3c755ed5ee968b6384a99744ea93e2c98f96bde90b9" gracePeriod=30 Feb 17 16:27:21 crc kubenswrapper[4874]: I0217 16:27:21.931157 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fff55ae8-1688-4f99-859d-3497b3cf851f" containerName="sg-core" containerID="cri-o://b534344785ae19bbd891b22b9563f656ef776e0e9f0a31871569a57cad6dd275" gracePeriod=30 Feb 17 16:27:21 crc kubenswrapper[4874]: I0217 16:27:21.931318 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fff55ae8-1688-4f99-859d-3497b3cf851f" containerName="ceilometer-notification-agent" containerID="cri-o://8c98912d53881a96c88f3c2602c1529a6f3d9332e520ce8822352b79a403929b" gracePeriod=30 Feb 17 16:27:21 crc kubenswrapper[4874]: I0217 16:27:21.931532 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fff55ae8-1688-4f99-859d-3497b3cf851f" containerName="ceilometer-central-agent" containerID="cri-o://f4b43c49a73976e75dee12a7473d76f28c3e6250fbda3df929483354029c5ffc" 
gracePeriod=30 Feb 17 16:27:22 crc kubenswrapper[4874]: I0217 16:27:22.386223 4874 generic.go:334] "Generic (PLEG): container finished" podID="fff55ae8-1688-4f99-859d-3497b3cf851f" containerID="15d4bf913c291fae0241e3c755ed5ee968b6384a99744ea93e2c98f96bde90b9" exitCode=0 Feb 17 16:27:22 crc kubenswrapper[4874]: I0217 16:27:22.386256 4874 generic.go:334] "Generic (PLEG): container finished" podID="fff55ae8-1688-4f99-859d-3497b3cf851f" containerID="b534344785ae19bbd891b22b9563f656ef776e0e9f0a31871569a57cad6dd275" exitCode=2 Feb 17 16:27:22 crc kubenswrapper[4874]: I0217 16:27:22.386309 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fff55ae8-1688-4f99-859d-3497b3cf851f","Type":"ContainerDied","Data":"15d4bf913c291fae0241e3c755ed5ee968b6384a99744ea93e2c98f96bde90b9"} Feb 17 16:27:22 crc kubenswrapper[4874]: I0217 16:27:22.386367 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fff55ae8-1688-4f99-859d-3497b3cf851f","Type":"ContainerDied","Data":"b534344785ae19bbd891b22b9563f656ef776e0e9f0a31871569a57cad6dd275"} Feb 17 16:27:22 crc kubenswrapper[4874]: I0217 16:27:22.389534 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-srfbf" event={"ID":"440002d4-28a6-4e11-b188-1921f660e282","Type":"ContainerStarted","Data":"c99e5edddda76210735031ccc2041266f5be1c827131f843824f39b0a51791ad"} Feb 17 16:27:22 crc kubenswrapper[4874]: I0217 16:27:22.389693 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f84f9ccf-srfbf" Feb 17 16:27:22 crc kubenswrapper[4874]: I0217 16:27:22.426044 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f84f9ccf-srfbf" podStartSLOduration=3.426022393 podStartE2EDuration="3.426022393s" podCreationTimestamp="2026-02-17 16:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-02-17 16:27:22.412809971 +0000 UTC m=+1452.707198562" watchObservedRunningTime="2026-02-17 16:27:22.426022393 +0000 UTC m=+1452.720410954" Feb 17 16:27:22 crc kubenswrapper[4874]: I0217 16:27:22.770601 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:27:23 crc kubenswrapper[4874]: I0217 16:27:23.289050 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:27:23 crc kubenswrapper[4874]: I0217 16:27:23.289296 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d03bdb33-2317-487a-9566-10fbe37a9bc4" containerName="nova-api-log" containerID="cri-o://98989d52989a5d2f6341980d79d28598044624efbe49528fed51c7ca96d1b9c9" gracePeriod=30 Feb 17 16:27:23 crc kubenswrapper[4874]: I0217 16:27:23.289419 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d03bdb33-2317-487a-9566-10fbe37a9bc4" containerName="nova-api-api" containerID="cri-o://bd2b4dd4f32471f9cdaba7499b43f01f6c500363e20d2c96878e1563bd648fdd" gracePeriod=30 Feb 17 16:27:23 crc kubenswrapper[4874]: I0217 16:27:23.402474 4874 generic.go:334] "Generic (PLEG): container finished" podID="fff55ae8-1688-4f99-859d-3497b3cf851f" containerID="f4b43c49a73976e75dee12a7473d76f28c3e6250fbda3df929483354029c5ffc" exitCode=0 Feb 17 16:27:23 crc kubenswrapper[4874]: I0217 16:27:23.402693 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fff55ae8-1688-4f99-859d-3497b3cf851f","Type":"ContainerDied","Data":"f4b43c49a73976e75dee12a7473d76f28c3e6250fbda3df929483354029c5ffc"} Feb 17 16:27:24 crc kubenswrapper[4874]: I0217 16:27:24.415281 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"53a7ab1d-25fe-4d79-9778-fe644b1e97b8","Type":"ContainerStarted","Data":"5e12003af643286586ae6762076102d354e79e51edd65b6466a1263ccde7abd3"} Feb 17 16:27:24 crc kubenswrapper[4874]: I0217 16:27:24.417595 4874 generic.go:334] "Generic (PLEG): container finished" podID="d03bdb33-2317-487a-9566-10fbe37a9bc4" containerID="98989d52989a5d2f6341980d79d28598044624efbe49528fed51c7ca96d1b9c9" exitCode=143 Feb 17 16:27:24 crc kubenswrapper[4874]: I0217 16:27:24.417638 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d03bdb33-2317-487a-9566-10fbe37a9bc4","Type":"ContainerDied","Data":"98989d52989a5d2f6341980d79d28598044624efbe49528fed51c7ca96d1b9c9"} Feb 17 16:27:26 crc kubenswrapper[4874]: I0217 16:27:26.596180 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.048061 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.111694 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gsgx\" (UniqueName: \"kubernetes.io/projected/d03bdb33-2317-487a-9566-10fbe37a9bc4-kube-api-access-8gsgx\") pod \"d03bdb33-2317-487a-9566-10fbe37a9bc4\" (UID: \"d03bdb33-2317-487a-9566-10fbe37a9bc4\") " Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.111949 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d03bdb33-2317-487a-9566-10fbe37a9bc4-config-data\") pod \"d03bdb33-2317-487a-9566-10fbe37a9bc4\" (UID: \"d03bdb33-2317-487a-9566-10fbe37a9bc4\") " Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.112005 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d03bdb33-2317-487a-9566-10fbe37a9bc4-logs\") pod 
\"d03bdb33-2317-487a-9566-10fbe37a9bc4\" (UID: \"d03bdb33-2317-487a-9566-10fbe37a9bc4\") " Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.112129 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d03bdb33-2317-487a-9566-10fbe37a9bc4-combined-ca-bundle\") pod \"d03bdb33-2317-487a-9566-10fbe37a9bc4\" (UID: \"d03bdb33-2317-487a-9566-10fbe37a9bc4\") " Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.113034 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d03bdb33-2317-487a-9566-10fbe37a9bc4-logs" (OuterVolumeSpecName: "logs") pod "d03bdb33-2317-487a-9566-10fbe37a9bc4" (UID: "d03bdb33-2317-487a-9566-10fbe37a9bc4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.118471 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d03bdb33-2317-487a-9566-10fbe37a9bc4-kube-api-access-8gsgx" (OuterVolumeSpecName: "kube-api-access-8gsgx") pod "d03bdb33-2317-487a-9566-10fbe37a9bc4" (UID: "d03bdb33-2317-487a-9566-10fbe37a9bc4"). InnerVolumeSpecName "kube-api-access-8gsgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.157954 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d03bdb33-2317-487a-9566-10fbe37a9bc4-config-data" (OuterVolumeSpecName: "config-data") pod "d03bdb33-2317-487a-9566-10fbe37a9bc4" (UID: "d03bdb33-2317-487a-9566-10fbe37a9bc4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.167290 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d03bdb33-2317-487a-9566-10fbe37a9bc4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d03bdb33-2317-487a-9566-10fbe37a9bc4" (UID: "d03bdb33-2317-487a-9566-10fbe37a9bc4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.215009 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d03bdb33-2317-487a-9566-10fbe37a9bc4-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.215043 4874 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d03bdb33-2317-487a-9566-10fbe37a9bc4-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.215052 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d03bdb33-2317-487a-9566-10fbe37a9bc4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.215063 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gsgx\" (UniqueName: \"kubernetes.io/projected/d03bdb33-2317-487a-9566-10fbe37a9bc4-kube-api-access-8gsgx\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.500788 4874 generic.go:334] "Generic (PLEG): container finished" podID="fff55ae8-1688-4f99-859d-3497b3cf851f" containerID="8c98912d53881a96c88f3c2602c1529a6f3d9332e520ce8822352b79a403929b" exitCode=0 Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.500850 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"fff55ae8-1688-4f99-859d-3497b3cf851f","Type":"ContainerDied","Data":"8c98912d53881a96c88f3c2602c1529a6f3d9332e520ce8822352b79a403929b"} Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.535945 4874 generic.go:334] "Generic (PLEG): container finished" podID="d03bdb33-2317-487a-9566-10fbe37a9bc4" containerID="bd2b4dd4f32471f9cdaba7499b43f01f6c500363e20d2c96878e1563bd648fdd" exitCode=0 Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.536037 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d03bdb33-2317-487a-9566-10fbe37a9bc4","Type":"ContainerDied","Data":"bd2b4dd4f32471f9cdaba7499b43f01f6c500363e20d2c96878e1563bd648fdd"} Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.536062 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d03bdb33-2317-487a-9566-10fbe37a9bc4","Type":"ContainerDied","Data":"7378673f575b5ee86663fbb342a54684e8ab9ecc15e6a970cf5be467a12beaf2"} Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.536091 4874 scope.go:117] "RemoveContainer" containerID="bd2b4dd4f32471f9cdaba7499b43f01f6c500363e20d2c96878e1563bd648fdd" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.536232 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.564353 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"53a7ab1d-25fe-4d79-9778-fe644b1e97b8","Type":"ContainerStarted","Data":"38f8355201bcac6e77c5e365370d17706b8f06badf929c89e9b752dcc5eb6036"} Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.613745 4874 scope.go:117] "RemoveContainer" containerID="98989d52989a5d2f6341980d79d28598044624efbe49528fed51c7ca96d1b9c9" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.643480 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.670904 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.681888 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 17 16:27:27 crc kubenswrapper[4874]: E0217 16:27:27.682363 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d03bdb33-2317-487a-9566-10fbe37a9bc4" containerName="nova-api-log" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.682376 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="d03bdb33-2317-487a-9566-10fbe37a9bc4" containerName="nova-api-log" Feb 17 16:27:27 crc kubenswrapper[4874]: E0217 16:27:27.682405 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d03bdb33-2317-487a-9566-10fbe37a9bc4" containerName="nova-api-api" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.682412 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="d03bdb33-2317-487a-9566-10fbe37a9bc4" containerName="nova-api-api" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.682624 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="d03bdb33-2317-487a-9566-10fbe37a9bc4" containerName="nova-api-api" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 
16:27:27.682652 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="d03bdb33-2317-487a-9566-10fbe37a9bc4" containerName="nova-api-log" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.682769 4874 scope.go:117] "RemoveContainer" containerID="bd2b4dd4f32471f9cdaba7499b43f01f6c500363e20d2c96878e1563bd648fdd" Feb 17 16:27:27 crc kubenswrapper[4874]: E0217 16:27:27.683501 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd2b4dd4f32471f9cdaba7499b43f01f6c500363e20d2c96878e1563bd648fdd\": container with ID starting with bd2b4dd4f32471f9cdaba7499b43f01f6c500363e20d2c96878e1563bd648fdd not found: ID does not exist" containerID="bd2b4dd4f32471f9cdaba7499b43f01f6c500363e20d2c96878e1563bd648fdd" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.683541 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd2b4dd4f32471f9cdaba7499b43f01f6c500363e20d2c96878e1563bd648fdd"} err="failed to get container status \"bd2b4dd4f32471f9cdaba7499b43f01f6c500363e20d2c96878e1563bd648fdd\": rpc error: code = NotFound desc = could not find container \"bd2b4dd4f32471f9cdaba7499b43f01f6c500363e20d2c96878e1563bd648fdd\": container with ID starting with bd2b4dd4f32471f9cdaba7499b43f01f6c500363e20d2c96878e1563bd648fdd not found: ID does not exist" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.683569 4874 scope.go:117] "RemoveContainer" containerID="98989d52989a5d2f6341980d79d28598044624efbe49528fed51c7ca96d1b9c9" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.683825 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:27:27 crc kubenswrapper[4874]: E0217 16:27:27.683972 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98989d52989a5d2f6341980d79d28598044624efbe49528fed51c7ca96d1b9c9\": container with ID starting with 98989d52989a5d2f6341980d79d28598044624efbe49528fed51c7ca96d1b9c9 not found: ID does not exist" containerID="98989d52989a5d2f6341980d79d28598044624efbe49528fed51c7ca96d1b9c9" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.683995 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98989d52989a5d2f6341980d79d28598044624efbe49528fed51c7ca96d1b9c9"} err="failed to get container status \"98989d52989a5d2f6341980d79d28598044624efbe49528fed51c7ca96d1b9c9\": rpc error: code = NotFound desc = could not find container \"98989d52989a5d2f6341980d79d28598044624efbe49528fed51c7ca96d1b9c9\": container with ID starting with 98989d52989a5d2f6341980d79d28598044624efbe49528fed51c7ca96d1b9c9 not found: ID does not exist" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.688051 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.688249 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.688422 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.693612 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.724809 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.724848 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.724890 4874 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.732921 4874 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1b677238bae66091e799ac761dff49995ef4eec3d7982bfb5c634aa828596a1e"} pod="openshift-machine-config-operator/machine-config-daemon-cccdg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.732997 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" containerID="cri-o://1b677238bae66091e799ac761dff49995ef4eec3d7982bfb5c634aa828596a1e" gracePeriod=600 Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.771504 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.814139 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.842365 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbs7g\" (UniqueName: \"kubernetes.io/projected/2d7ecfba-048e-476b-9cc9-dd1eda535ab1-kube-api-access-fbs7g\") pod \"nova-api-0\" (UID: \"2d7ecfba-048e-476b-9cc9-dd1eda535ab1\") " pod="openstack/nova-api-0" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.842593 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d7ecfba-048e-476b-9cc9-dd1eda535ab1-public-tls-certs\") pod \"nova-api-0\" (UID: \"2d7ecfba-048e-476b-9cc9-dd1eda535ab1\") " pod="openstack/nova-api-0" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.842697 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d7ecfba-048e-476b-9cc9-dd1eda535ab1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2d7ecfba-048e-476b-9cc9-dd1eda535ab1\") " pod="openstack/nova-api-0" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.842849 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d7ecfba-048e-476b-9cc9-dd1eda535ab1-config-data\") pod \"nova-api-0\" (UID: \"2d7ecfba-048e-476b-9cc9-dd1eda535ab1\") " pod="openstack/nova-api-0" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.846131 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d7ecfba-048e-476b-9cc9-dd1eda535ab1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2d7ecfba-048e-476b-9cc9-dd1eda535ab1\") " pod="openstack/nova-api-0" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.847504 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/2d7ecfba-048e-476b-9cc9-dd1eda535ab1-logs\") pod \"nova-api-0\" (UID: \"2d7ecfba-048e-476b-9cc9-dd1eda535ab1\") " pod="openstack/nova-api-0" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.952154 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d7ecfba-048e-476b-9cc9-dd1eda535ab1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2d7ecfba-048e-476b-9cc9-dd1eda535ab1\") " pod="openstack/nova-api-0" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.952223 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d7ecfba-048e-476b-9cc9-dd1eda535ab1-config-data\") pod \"nova-api-0\" (UID: \"2d7ecfba-048e-476b-9cc9-dd1eda535ab1\") " pod="openstack/nova-api-0" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.952260 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d7ecfba-048e-476b-9cc9-dd1eda535ab1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2d7ecfba-048e-476b-9cc9-dd1eda535ab1\") " pod="openstack/nova-api-0" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.952282 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d7ecfba-048e-476b-9cc9-dd1eda535ab1-logs\") pod \"nova-api-0\" (UID: \"2d7ecfba-048e-476b-9cc9-dd1eda535ab1\") " pod="openstack/nova-api-0" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.952436 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbs7g\" (UniqueName: \"kubernetes.io/projected/2d7ecfba-048e-476b-9cc9-dd1eda535ab1-kube-api-access-fbs7g\") pod \"nova-api-0\" (UID: \"2d7ecfba-048e-476b-9cc9-dd1eda535ab1\") " pod="openstack/nova-api-0" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.952499 4874 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d7ecfba-048e-476b-9cc9-dd1eda535ab1-public-tls-certs\") pod \"nova-api-0\" (UID: \"2d7ecfba-048e-476b-9cc9-dd1eda535ab1\") " pod="openstack/nova-api-0" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.955948 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d7ecfba-048e-476b-9cc9-dd1eda535ab1-logs\") pod \"nova-api-0\" (UID: \"2d7ecfba-048e-476b-9cc9-dd1eda535ab1\") " pod="openstack/nova-api-0" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.959357 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d7ecfba-048e-476b-9cc9-dd1eda535ab1-config-data\") pod \"nova-api-0\" (UID: \"2d7ecfba-048e-476b-9cc9-dd1eda535ab1\") " pod="openstack/nova-api-0" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.961682 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d7ecfba-048e-476b-9cc9-dd1eda535ab1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2d7ecfba-048e-476b-9cc9-dd1eda535ab1\") " pod="openstack/nova-api-0" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.961948 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d7ecfba-048e-476b-9cc9-dd1eda535ab1-public-tls-certs\") pod \"nova-api-0\" (UID: \"2d7ecfba-048e-476b-9cc9-dd1eda535ab1\") " pod="openstack/nova-api-0" Feb 17 16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.964055 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d7ecfba-048e-476b-9cc9-dd1eda535ab1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2d7ecfba-048e-476b-9cc9-dd1eda535ab1\") " pod="openstack/nova-api-0" Feb 17 
16:27:27 crc kubenswrapper[4874]: I0217 16:27:27.987699 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbs7g\" (UniqueName: \"kubernetes.io/projected/2d7ecfba-048e-476b-9cc9-dd1eda535ab1-kube-api-access-fbs7g\") pod \"nova-api-0\" (UID: \"2d7ecfba-048e-476b-9cc9-dd1eda535ab1\") " pod="openstack/nova-api-0" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.037064 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.092902 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.259590 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fff55ae8-1688-4f99-859d-3497b3cf851f-log-httpd\") pod \"fff55ae8-1688-4f99-859d-3497b3cf851f\" (UID: \"fff55ae8-1688-4f99-859d-3497b3cf851f\") " Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.260025 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzdqd\" (UniqueName: \"kubernetes.io/projected/fff55ae8-1688-4f99-859d-3497b3cf851f-kube-api-access-kzdqd\") pod \"fff55ae8-1688-4f99-859d-3497b3cf851f\" (UID: \"fff55ae8-1688-4f99-859d-3497b3cf851f\") " Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.260067 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fff55ae8-1688-4f99-859d-3497b3cf851f-scripts\") pod \"fff55ae8-1688-4f99-859d-3497b3cf851f\" (UID: \"fff55ae8-1688-4f99-859d-3497b3cf851f\") " Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.260253 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/fff55ae8-1688-4f99-859d-3497b3cf851f-sg-core-conf-yaml\") pod \"fff55ae8-1688-4f99-859d-3497b3cf851f\" (UID: \"fff55ae8-1688-4f99-859d-3497b3cf851f\") " Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.260270 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fff55ae8-1688-4f99-859d-3497b3cf851f-config-data\") pod \"fff55ae8-1688-4f99-859d-3497b3cf851f\" (UID: \"fff55ae8-1688-4f99-859d-3497b3cf851f\") " Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.260281 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fff55ae8-1688-4f99-859d-3497b3cf851f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fff55ae8-1688-4f99-859d-3497b3cf851f" (UID: "fff55ae8-1688-4f99-859d-3497b3cf851f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.260295 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fff55ae8-1688-4f99-859d-3497b3cf851f-combined-ca-bundle\") pod \"fff55ae8-1688-4f99-859d-3497b3cf851f\" (UID: \"fff55ae8-1688-4f99-859d-3497b3cf851f\") " Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.260360 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fff55ae8-1688-4f99-859d-3497b3cf851f-run-httpd\") pod \"fff55ae8-1688-4f99-859d-3497b3cf851f\" (UID: \"fff55ae8-1688-4f99-859d-3497b3cf851f\") " Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.261164 4874 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fff55ae8-1688-4f99-859d-3497b3cf851f-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.261376 4874 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fff55ae8-1688-4f99-859d-3497b3cf851f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fff55ae8-1688-4f99-859d-3497b3cf851f" (UID: "fff55ae8-1688-4f99-859d-3497b3cf851f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.265868 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fff55ae8-1688-4f99-859d-3497b3cf851f-kube-api-access-kzdqd" (OuterVolumeSpecName: "kube-api-access-kzdqd") pod "fff55ae8-1688-4f99-859d-3497b3cf851f" (UID: "fff55ae8-1688-4f99-859d-3497b3cf851f"). InnerVolumeSpecName "kube-api-access-kzdqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.271461 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fff55ae8-1688-4f99-859d-3497b3cf851f-scripts" (OuterVolumeSpecName: "scripts") pod "fff55ae8-1688-4f99-859d-3497b3cf851f" (UID: "fff55ae8-1688-4f99-859d-3497b3cf851f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.301223 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fff55ae8-1688-4f99-859d-3497b3cf851f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fff55ae8-1688-4f99-859d-3497b3cf851f" (UID: "fff55ae8-1688-4f99-859d-3497b3cf851f"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.363687 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzdqd\" (UniqueName: \"kubernetes.io/projected/fff55ae8-1688-4f99-859d-3497b3cf851f-kube-api-access-kzdqd\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.364010 4874 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fff55ae8-1688-4f99-859d-3497b3cf851f-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.364023 4874 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fff55ae8-1688-4f99-859d-3497b3cf851f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.364034 4874 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fff55ae8-1688-4f99-859d-3497b3cf851f-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.415214 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fff55ae8-1688-4f99-859d-3497b3cf851f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fff55ae8-1688-4f99-859d-3497b3cf851f" (UID: "fff55ae8-1688-4f99-859d-3497b3cf851f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.466122 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fff55ae8-1688-4f99-859d-3497b3cf851f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.466233 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fff55ae8-1688-4f99-859d-3497b3cf851f-config-data" (OuterVolumeSpecName: "config-data") pod "fff55ae8-1688-4f99-859d-3497b3cf851f" (UID: "fff55ae8-1688-4f99-859d-3497b3cf851f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.504630 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d03bdb33-2317-487a-9566-10fbe37a9bc4" path="/var/lib/kubelet/pods/d03bdb33-2317-487a-9566-10fbe37a9bc4/volumes" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.570227 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fff55ae8-1688-4f99-859d-3497b3cf851f-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.587259 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fff55ae8-1688-4f99-859d-3497b3cf851f","Type":"ContainerDied","Data":"197b719d94eb90e86def2d816e1f77b6afd6d3e15f8ba0ada39341fcd565e03e"} Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.587309 4874 scope.go:117] "RemoveContainer" containerID="15d4bf913c291fae0241e3c755ed5ee968b6384a99744ea93e2c98f96bde90b9" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.587450 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.611606 4874 generic.go:334] "Generic (PLEG): container finished" podID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerID="1b677238bae66091e799ac761dff49995ef4eec3d7982bfb5c634aa828596a1e" exitCode=0 Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.611891 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerDied","Data":"1b677238bae66091e799ac761dff49995ef4eec3d7982bfb5c634aa828596a1e"} Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.611972 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerStarted","Data":"ae76b5b70958fa4d01ecf513ee19ba4d16d886491645f525975afe4bbcc5438e"} Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.632560 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.650877 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.650943 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.664770 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.712461 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:27:28 crc kubenswrapper[4874]: E0217 16:27:28.713146 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fff55ae8-1688-4f99-859d-3497b3cf851f" containerName="ceilometer-notification-agent" Feb 17 16:27:28 crc 
kubenswrapper[4874]: I0217 16:27:28.713169 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="fff55ae8-1688-4f99-859d-3497b3cf851f" containerName="ceilometer-notification-agent" Feb 17 16:27:28 crc kubenswrapper[4874]: E0217 16:27:28.713180 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fff55ae8-1688-4f99-859d-3497b3cf851f" containerName="ceilometer-central-agent" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.713189 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="fff55ae8-1688-4f99-859d-3497b3cf851f" containerName="ceilometer-central-agent" Feb 17 16:27:28 crc kubenswrapper[4874]: E0217 16:27:28.713210 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fff55ae8-1688-4f99-859d-3497b3cf851f" containerName="proxy-httpd" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.713221 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="fff55ae8-1688-4f99-859d-3497b3cf851f" containerName="proxy-httpd" Feb 17 16:27:28 crc kubenswrapper[4874]: E0217 16:27:28.713239 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fff55ae8-1688-4f99-859d-3497b3cf851f" containerName="sg-core" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.713248 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="fff55ae8-1688-4f99-859d-3497b3cf851f" containerName="sg-core" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.713514 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="fff55ae8-1688-4f99-859d-3497b3cf851f" containerName="proxy-httpd" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.713535 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="fff55ae8-1688-4f99-859d-3497b3cf851f" containerName="ceilometer-central-agent" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.713550 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="fff55ae8-1688-4f99-859d-3497b3cf851f" containerName="ceilometer-notification-agent" 
Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.713564 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="fff55ae8-1688-4f99-859d-3497b3cf851f" containerName="sg-core" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.716919 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.720518 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.725343 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.725691 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.870846 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-8872n"] Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.873991 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8872n" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.879291 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.879360 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.887058 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e96e4768-c5ef-4719-aabf-25094ae858c1-scripts\") pod \"ceilometer-0\" (UID: \"e96e4768-c5ef-4719-aabf-25094ae858c1\") " pod="openstack/ceilometer-0" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.887229 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e96e4768-c5ef-4719-aabf-25094ae858c1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e96e4768-c5ef-4719-aabf-25094ae858c1\") " pod="openstack/ceilometer-0" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.887343 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e96e4768-c5ef-4719-aabf-25094ae858c1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e96e4768-c5ef-4719-aabf-25094ae858c1\") " pod="openstack/ceilometer-0" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.887387 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e96e4768-c5ef-4719-aabf-25094ae858c1-run-httpd\") pod \"ceilometer-0\" (UID: \"e96e4768-c5ef-4719-aabf-25094ae858c1\") " pod="openstack/ceilometer-0" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.887437 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e96e4768-c5ef-4719-aabf-25094ae858c1-config-data\") pod \"ceilometer-0\" (UID: \"e96e4768-c5ef-4719-aabf-25094ae858c1\") " pod="openstack/ceilometer-0" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.887536 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rt4q\" (UniqueName: \"kubernetes.io/projected/e96e4768-c5ef-4719-aabf-25094ae858c1-kube-api-access-2rt4q\") pod \"ceilometer-0\" (UID: \"e96e4768-c5ef-4719-aabf-25094ae858c1\") " pod="openstack/ceilometer-0" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.887754 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e96e4768-c5ef-4719-aabf-25094ae858c1-log-httpd\") pod \"ceilometer-0\" (UID: \"e96e4768-c5ef-4719-aabf-25094ae858c1\") " pod="openstack/ceilometer-0" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.887960 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-8872n"] Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.989611 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/677d7b63-59f1-4829-9478-f59253741cbc-config-data\") pod \"nova-cell1-cell-mapping-8872n\" (UID: \"677d7b63-59f1-4829-9478-f59253741cbc\") " pod="openstack/nova-cell1-cell-mapping-8872n" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.989675 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e96e4768-c5ef-4719-aabf-25094ae858c1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e96e4768-c5ef-4719-aabf-25094ae858c1\") " pod="openstack/ceilometer-0" Feb 17 16:27:28 crc kubenswrapper[4874]: 
I0217 16:27:28.989724 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e96e4768-c5ef-4719-aabf-25094ae858c1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e96e4768-c5ef-4719-aabf-25094ae858c1\") " pod="openstack/ceilometer-0" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.989804 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e96e4768-c5ef-4719-aabf-25094ae858c1-run-httpd\") pod \"ceilometer-0\" (UID: \"e96e4768-c5ef-4719-aabf-25094ae858c1\") " pod="openstack/ceilometer-0" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.989871 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e96e4768-c5ef-4719-aabf-25094ae858c1-config-data\") pod \"ceilometer-0\" (UID: \"e96e4768-c5ef-4719-aabf-25094ae858c1\") " pod="openstack/ceilometer-0" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.990011 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rt4q\" (UniqueName: \"kubernetes.io/projected/e96e4768-c5ef-4719-aabf-25094ae858c1-kube-api-access-2rt4q\") pod \"ceilometer-0\" (UID: \"e96e4768-c5ef-4719-aabf-25094ae858c1\") " pod="openstack/ceilometer-0" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.990196 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e96e4768-c5ef-4719-aabf-25094ae858c1-log-httpd\") pod \"ceilometer-0\" (UID: \"e96e4768-c5ef-4719-aabf-25094ae858c1\") " pod="openstack/ceilometer-0" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.990232 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9966\" (UniqueName: 
\"kubernetes.io/projected/677d7b63-59f1-4829-9478-f59253741cbc-kube-api-access-b9966\") pod \"nova-cell1-cell-mapping-8872n\" (UID: \"677d7b63-59f1-4829-9478-f59253741cbc\") " pod="openstack/nova-cell1-cell-mapping-8872n" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.990371 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/677d7b63-59f1-4829-9478-f59253741cbc-scripts\") pod \"nova-cell1-cell-mapping-8872n\" (UID: \"677d7b63-59f1-4829-9478-f59253741cbc\") " pod="openstack/nova-cell1-cell-mapping-8872n" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.990412 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e96e4768-c5ef-4719-aabf-25094ae858c1-run-httpd\") pod \"ceilometer-0\" (UID: \"e96e4768-c5ef-4719-aabf-25094ae858c1\") " pod="openstack/ceilometer-0" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.990436 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/677d7b63-59f1-4829-9478-f59253741cbc-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-8872n\" (UID: \"677d7b63-59f1-4829-9478-f59253741cbc\") " pod="openstack/nova-cell1-cell-mapping-8872n" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.990487 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e96e4768-c5ef-4719-aabf-25094ae858c1-scripts\") pod \"ceilometer-0\" (UID: \"e96e4768-c5ef-4719-aabf-25094ae858c1\") " pod="openstack/ceilometer-0" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.990767 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e96e4768-c5ef-4719-aabf-25094ae858c1-log-httpd\") pod \"ceilometer-0\" (UID: 
\"e96e4768-c5ef-4719-aabf-25094ae858c1\") " pod="openstack/ceilometer-0" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.995193 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e96e4768-c5ef-4719-aabf-25094ae858c1-scripts\") pod \"ceilometer-0\" (UID: \"e96e4768-c5ef-4719-aabf-25094ae858c1\") " pod="openstack/ceilometer-0" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.995757 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e96e4768-c5ef-4719-aabf-25094ae858c1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e96e4768-c5ef-4719-aabf-25094ae858c1\") " pod="openstack/ceilometer-0" Feb 17 16:27:28 crc kubenswrapper[4874]: I0217 16:27:28.996373 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e96e4768-c5ef-4719-aabf-25094ae858c1-config-data\") pod \"ceilometer-0\" (UID: \"e96e4768-c5ef-4719-aabf-25094ae858c1\") " pod="openstack/ceilometer-0" Feb 17 16:27:29 crc kubenswrapper[4874]: I0217 16:27:29.004832 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e96e4768-c5ef-4719-aabf-25094ae858c1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e96e4768-c5ef-4719-aabf-25094ae858c1\") " pod="openstack/ceilometer-0" Feb 17 16:27:29 crc kubenswrapper[4874]: I0217 16:27:29.011999 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rt4q\" (UniqueName: \"kubernetes.io/projected/e96e4768-c5ef-4719-aabf-25094ae858c1-kube-api-access-2rt4q\") pod \"ceilometer-0\" (UID: \"e96e4768-c5ef-4719-aabf-25094ae858c1\") " pod="openstack/ceilometer-0" Feb 17 16:27:29 crc kubenswrapper[4874]: I0217 16:27:29.047558 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:27:29 crc kubenswrapper[4874]: I0217 16:27:29.093105 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9966\" (UniqueName: \"kubernetes.io/projected/677d7b63-59f1-4829-9478-f59253741cbc-kube-api-access-b9966\") pod \"nova-cell1-cell-mapping-8872n\" (UID: \"677d7b63-59f1-4829-9478-f59253741cbc\") " pod="openstack/nova-cell1-cell-mapping-8872n" Feb 17 16:27:29 crc kubenswrapper[4874]: I0217 16:27:29.093453 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/677d7b63-59f1-4829-9478-f59253741cbc-scripts\") pod \"nova-cell1-cell-mapping-8872n\" (UID: \"677d7b63-59f1-4829-9478-f59253741cbc\") " pod="openstack/nova-cell1-cell-mapping-8872n" Feb 17 16:27:29 crc kubenswrapper[4874]: I0217 16:27:29.093486 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/677d7b63-59f1-4829-9478-f59253741cbc-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-8872n\" (UID: \"677d7b63-59f1-4829-9478-f59253741cbc\") " pod="openstack/nova-cell1-cell-mapping-8872n" Feb 17 16:27:29 crc kubenswrapper[4874]: I0217 16:27:29.093524 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/677d7b63-59f1-4829-9478-f59253741cbc-config-data\") pod \"nova-cell1-cell-mapping-8872n\" (UID: \"677d7b63-59f1-4829-9478-f59253741cbc\") " pod="openstack/nova-cell1-cell-mapping-8872n" Feb 17 16:27:29 crc kubenswrapper[4874]: I0217 16:27:29.097599 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/677d7b63-59f1-4829-9478-f59253741cbc-config-data\") pod \"nova-cell1-cell-mapping-8872n\" (UID: \"677d7b63-59f1-4829-9478-f59253741cbc\") " pod="openstack/nova-cell1-cell-mapping-8872n" Feb 17 
16:27:29 crc kubenswrapper[4874]: I0217 16:27:29.097773 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/677d7b63-59f1-4829-9478-f59253741cbc-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-8872n\" (UID: \"677d7b63-59f1-4829-9478-f59253741cbc\") " pod="openstack/nova-cell1-cell-mapping-8872n" Feb 17 16:27:29 crc kubenswrapper[4874]: I0217 16:27:29.100176 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/677d7b63-59f1-4829-9478-f59253741cbc-scripts\") pod \"nova-cell1-cell-mapping-8872n\" (UID: \"677d7b63-59f1-4829-9478-f59253741cbc\") " pod="openstack/nova-cell1-cell-mapping-8872n" Feb 17 16:27:29 crc kubenswrapper[4874]: I0217 16:27:29.114753 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9966\" (UniqueName: \"kubernetes.io/projected/677d7b63-59f1-4829-9478-f59253741cbc-kube-api-access-b9966\") pod \"nova-cell1-cell-mapping-8872n\" (UID: \"677d7b63-59f1-4829-9478-f59253741cbc\") " pod="openstack/nova-cell1-cell-mapping-8872n" Feb 17 16:27:29 crc kubenswrapper[4874]: W0217 16:27:29.210191 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d7ecfba_048e_476b_9cc9_dd1eda535ab1.slice/crio-aed5a4b43f37fd6bbeb5cff8e9c1df7e164f06038d60a68dde11d5aeeaf71fc6 WatchSource:0}: Error finding container aed5a4b43f37fd6bbeb5cff8e9c1df7e164f06038d60a68dde11d5aeeaf71fc6: Status 404 returned error can't find the container with id aed5a4b43f37fd6bbeb5cff8e9c1df7e164f06038d60a68dde11d5aeeaf71fc6 Feb 17 16:27:29 crc kubenswrapper[4874]: I0217 16:27:29.210813 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8872n" Feb 17 16:27:29 crc kubenswrapper[4874]: I0217 16:27:29.222250 4874 scope.go:117] "RemoveContainer" containerID="b534344785ae19bbd891b22b9563f656ef776e0e9f0a31871569a57cad6dd275" Feb 17 16:27:29 crc kubenswrapper[4874]: I0217 16:27:29.318264 4874 scope.go:117] "RemoveContainer" containerID="8c98912d53881a96c88f3c2602c1529a6f3d9332e520ce8822352b79a403929b" Feb 17 16:27:29 crc kubenswrapper[4874]: I0217 16:27:29.553016 4874 scope.go:117] "RemoveContainer" containerID="f4b43c49a73976e75dee12a7473d76f28c3e6250fbda3df929483354029c5ffc" Feb 17 16:27:29 crc kubenswrapper[4874]: I0217 16:27:29.637234 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2d7ecfba-048e-476b-9cc9-dd1eda535ab1","Type":"ContainerStarted","Data":"aed5a4b43f37fd6bbeb5cff8e9c1df7e164f06038d60a68dde11d5aeeaf71fc6"} Feb 17 16:27:29 crc kubenswrapper[4874]: I0217 16:27:29.687006 4874 scope.go:117] "RemoveContainer" containerID="e095e9b56aac8ea173cabbfa2b9b7d5f89bdf527eea23b78b0ba4ca194b5eb6c" Feb 17 16:27:29 crc kubenswrapper[4874]: I0217 16:27:29.857442 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:27:29 crc kubenswrapper[4874]: I0217 16:27:29.871954 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-8872n"] Feb 17 16:27:29 crc kubenswrapper[4874]: W0217 16:27:29.873020 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod677d7b63_59f1_4829_9478_f59253741cbc.slice/crio-01881f7e981ab251b5ec122cfae70c668593f6be992ff9a1b078333faf81aae0 WatchSource:0}: Error finding container 01881f7e981ab251b5ec122cfae70c668593f6be992ff9a1b078333faf81aae0: Status 404 returned error can't find the container with id 01881f7e981ab251b5ec122cfae70c668593f6be992ff9a1b078333faf81aae0 Feb 17 16:27:30 crc kubenswrapper[4874]: I0217 
16:27:30.121224 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f84f9ccf-srfbf" Feb 17 16:27:30 crc kubenswrapper[4874]: I0217 16:27:30.222252 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-pd7kk"] Feb 17 16:27:30 crc kubenswrapper[4874]: I0217 16:27:30.222704 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-568d7fd7cf-pd7kk" podUID="f3e465d4-50df-419e-b724-3e6b957613e5" containerName="dnsmasq-dns" containerID="cri-o://073a06bf9d5eee431c2516dc49a8fcde6070a48fe43b4707401407d6c95cd9cf" gracePeriod=10 Feb 17 16:27:30 crc kubenswrapper[4874]: I0217 16:27:30.481684 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fff55ae8-1688-4f99-859d-3497b3cf851f" path="/var/lib/kubelet/pods/fff55ae8-1688-4f99-859d-3497b3cf851f/volumes" Feb 17 16:27:30 crc kubenswrapper[4874]: I0217 16:27:30.661764 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8872n" event={"ID":"677d7b63-59f1-4829-9478-f59253741cbc","Type":"ContainerStarted","Data":"500bcb02302837a39c1f56bacbc15e09e11785af7b0b611384cca00f2bc6ea82"} Feb 17 16:27:30 crc kubenswrapper[4874]: I0217 16:27:30.661818 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8872n" event={"ID":"677d7b63-59f1-4829-9478-f59253741cbc","Type":"ContainerStarted","Data":"01881f7e981ab251b5ec122cfae70c668593f6be992ff9a1b078333faf81aae0"} Feb 17 16:27:30 crc kubenswrapper[4874]: I0217 16:27:30.667615 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"53a7ab1d-25fe-4d79-9778-fe644b1e97b8","Type":"ContainerStarted","Data":"c74b99d0a6bc35924d7b05a1e84d8f04f3495a0161a7c466a47449b1ea08513f"} Feb 17 16:27:30 crc kubenswrapper[4874]: I0217 16:27:30.667749 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" 
podUID="53a7ab1d-25fe-4d79-9778-fe644b1e97b8" containerName="aodh-api" containerID="cri-o://93972332b50e160e05cf88dadcd3682f9b79107fbb93c98d474cd76ab5234dc7" gracePeriod=30 Feb 17 16:27:30 crc kubenswrapper[4874]: I0217 16:27:30.667790 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="53a7ab1d-25fe-4d79-9778-fe644b1e97b8" containerName="aodh-listener" containerID="cri-o://c74b99d0a6bc35924d7b05a1e84d8f04f3495a0161a7c466a47449b1ea08513f" gracePeriod=30 Feb 17 16:27:30 crc kubenswrapper[4874]: I0217 16:27:30.667901 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="53a7ab1d-25fe-4d79-9778-fe644b1e97b8" containerName="aodh-evaluator" containerID="cri-o://5e12003af643286586ae6762076102d354e79e51edd65b6466a1263ccde7abd3" gracePeriod=30 Feb 17 16:27:30 crc kubenswrapper[4874]: I0217 16:27:30.667764 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="53a7ab1d-25fe-4d79-9778-fe644b1e97b8" containerName="aodh-notifier" containerID="cri-o://38f8355201bcac6e77c5e365370d17706b8f06badf929c89e9b752dcc5eb6036" gracePeriod=30 Feb 17 16:27:30 crc kubenswrapper[4874]: I0217 16:27:30.683031 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-8872n" podStartSLOduration=2.6830129510000003 podStartE2EDuration="2.683012951s" podCreationTimestamp="2026-02-17 16:27:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:27:30.679234769 +0000 UTC m=+1460.973623320" watchObservedRunningTime="2026-02-17 16:27:30.683012951 +0000 UTC m=+1460.977401512" Feb 17 16:27:30 crc kubenswrapper[4874]: I0217 16:27:30.701559 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e96e4768-c5ef-4719-aabf-25094ae858c1","Type":"ContainerStarted","Data":"29cb59c9af13c4b0078d664df658b9e458b35b6e66f3f22586f5de79f45e79ac"} Feb 17 16:27:30 crc kubenswrapper[4874]: I0217 16:27:30.713702 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=3.199907935 podStartE2EDuration="12.71368318s" podCreationTimestamp="2026-02-17 16:27:18 +0000 UTC" firstStartedPulling="2026-02-17 16:27:20.174030528 +0000 UTC m=+1450.468419089" lastFinishedPulling="2026-02-17 16:27:29.687805773 +0000 UTC m=+1459.982194334" observedRunningTime="2026-02-17 16:27:30.70426067 +0000 UTC m=+1460.998649241" watchObservedRunningTime="2026-02-17 16:27:30.71368318 +0000 UTC m=+1461.008071741" Feb 17 16:27:30 crc kubenswrapper[4874]: I0217 16:27:30.722117 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2d7ecfba-048e-476b-9cc9-dd1eda535ab1","Type":"ContainerStarted","Data":"fbe7fbfe24feef88aad7c07bf47c23957fa44a04d137d88e82b5cc98c1a086fa"} Feb 17 16:27:30 crc kubenswrapper[4874]: I0217 16:27:30.757862 4874 generic.go:334] "Generic (PLEG): container finished" podID="f3e465d4-50df-419e-b724-3e6b957613e5" containerID="073a06bf9d5eee431c2516dc49a8fcde6070a48fe43b4707401407d6c95cd9cf" exitCode=0 Feb 17 16:27:30 crc kubenswrapper[4874]: I0217 16:27:30.757904 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-pd7kk" event={"ID":"f3e465d4-50df-419e-b724-3e6b957613e5","Type":"ContainerDied","Data":"073a06bf9d5eee431c2516dc49a8fcde6070a48fe43b4707401407d6c95cd9cf"} Feb 17 16:27:31 crc kubenswrapper[4874]: I0217 16:27:31.073049 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-pd7kk" Feb 17 16:27:31 crc kubenswrapper[4874]: I0217 16:27:31.151256 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3e465d4-50df-419e-b724-3e6b957613e5-dns-svc\") pod \"f3e465d4-50df-419e-b724-3e6b957613e5\" (UID: \"f3e465d4-50df-419e-b724-3e6b957613e5\") " Feb 17 16:27:31 crc kubenswrapper[4874]: I0217 16:27:31.151520 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3e465d4-50df-419e-b724-3e6b957613e5-dns-swift-storage-0\") pod \"f3e465d4-50df-419e-b724-3e6b957613e5\" (UID: \"f3e465d4-50df-419e-b724-3e6b957613e5\") " Feb 17 16:27:31 crc kubenswrapper[4874]: I0217 16:27:31.151671 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3e465d4-50df-419e-b724-3e6b957613e5-ovsdbserver-nb\") pod \"f3e465d4-50df-419e-b724-3e6b957613e5\" (UID: \"f3e465d4-50df-419e-b724-3e6b957613e5\") " Feb 17 16:27:31 crc kubenswrapper[4874]: I0217 16:27:31.151718 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3e465d4-50df-419e-b724-3e6b957613e5-config\") pod \"f3e465d4-50df-419e-b724-3e6b957613e5\" (UID: \"f3e465d4-50df-419e-b724-3e6b957613e5\") " Feb 17 16:27:31 crc kubenswrapper[4874]: I0217 16:27:31.151767 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gcnw\" (UniqueName: \"kubernetes.io/projected/f3e465d4-50df-419e-b724-3e6b957613e5-kube-api-access-8gcnw\") pod \"f3e465d4-50df-419e-b724-3e6b957613e5\" (UID: \"f3e465d4-50df-419e-b724-3e6b957613e5\") " Feb 17 16:27:31 crc kubenswrapper[4874]: I0217 16:27:31.151953 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/f3e465d4-50df-419e-b724-3e6b957613e5-ovsdbserver-sb\") pod \"f3e465d4-50df-419e-b724-3e6b957613e5\" (UID: \"f3e465d4-50df-419e-b724-3e6b957613e5\") " Feb 17 16:27:31 crc kubenswrapper[4874]: I0217 16:27:31.185641 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3e465d4-50df-419e-b724-3e6b957613e5-kube-api-access-8gcnw" (OuterVolumeSpecName: "kube-api-access-8gcnw") pod "f3e465d4-50df-419e-b724-3e6b957613e5" (UID: "f3e465d4-50df-419e-b724-3e6b957613e5"). InnerVolumeSpecName "kube-api-access-8gcnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:27:31 crc kubenswrapper[4874]: I0217 16:27:31.257616 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gcnw\" (UniqueName: \"kubernetes.io/projected/f3e465d4-50df-419e-b724-3e6b957613e5-kube-api-access-8gcnw\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:31 crc kubenswrapper[4874]: I0217 16:27:31.309067 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:27:31 crc kubenswrapper[4874]: I0217 16:27:31.406551 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3e465d4-50df-419e-b724-3e6b957613e5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f3e465d4-50df-419e-b724-3e6b957613e5" (UID: "f3e465d4-50df-419e-b724-3e6b957613e5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:27:31 crc kubenswrapper[4874]: I0217 16:27:31.408512 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3e465d4-50df-419e-b724-3e6b957613e5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f3e465d4-50df-419e-b724-3e6b957613e5" (UID: "f3e465d4-50df-419e-b724-3e6b957613e5"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:27:31 crc kubenswrapper[4874]: I0217 16:27:31.409015 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3e465d4-50df-419e-b724-3e6b957613e5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f3e465d4-50df-419e-b724-3e6b957613e5" (UID: "f3e465d4-50df-419e-b724-3e6b957613e5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:27:31 crc kubenswrapper[4874]: I0217 16:27:31.411449 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3e465d4-50df-419e-b724-3e6b957613e5-config" (OuterVolumeSpecName: "config") pod "f3e465d4-50df-419e-b724-3e6b957613e5" (UID: "f3e465d4-50df-419e-b724-3e6b957613e5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:27:31 crc kubenswrapper[4874]: I0217 16:27:31.416857 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3e465d4-50df-419e-b724-3e6b957613e5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f3e465d4-50df-419e-b724-3e6b957613e5" (UID: "f3e465d4-50df-419e-b724-3e6b957613e5"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:27:31 crc kubenswrapper[4874]: I0217 16:27:31.476248 4874 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3e465d4-50df-419e-b724-3e6b957613e5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:31 crc kubenswrapper[4874]: I0217 16:27:31.477029 4874 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3e465d4-50df-419e-b724-3e6b957613e5-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:31 crc kubenswrapper[4874]: I0217 16:27:31.477049 4874 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3e465d4-50df-419e-b724-3e6b957613e5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:31 crc kubenswrapper[4874]: I0217 16:27:31.477089 4874 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3e465d4-50df-419e-b724-3e6b957613e5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:31 crc kubenswrapper[4874]: I0217 16:27:31.477103 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3e465d4-50df-419e-b724-3e6b957613e5-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:31 crc kubenswrapper[4874]: I0217 16:27:31.775647 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2d7ecfba-048e-476b-9cc9-dd1eda535ab1","Type":"ContainerStarted","Data":"a5a1be6cdb3cd2d721024c5b0986a560fa78c5021260b6fd7813df6c6526ace4"} Feb 17 16:27:31 crc kubenswrapper[4874]: I0217 16:27:31.786229 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-pd7kk" event={"ID":"f3e465d4-50df-419e-b724-3e6b957613e5","Type":"ContainerDied","Data":"01dd8cdac32bc87cfc23d3f814fa51e027d542d9691aeeceb30a83bc556cd509"} Feb 17 
16:27:31 crc kubenswrapper[4874]: I0217 16:27:31.786311 4874 scope.go:117] "RemoveContainer" containerID="073a06bf9d5eee431c2516dc49a8fcde6070a48fe43b4707401407d6c95cd9cf" Feb 17 16:27:31 crc kubenswrapper[4874]: I0217 16:27:31.786485 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-pd7kk" Feb 17 16:27:31 crc kubenswrapper[4874]: I0217 16:27:31.793224 4874 generic.go:334] "Generic (PLEG): container finished" podID="53a7ab1d-25fe-4d79-9778-fe644b1e97b8" containerID="5e12003af643286586ae6762076102d354e79e51edd65b6466a1263ccde7abd3" exitCode=0 Feb 17 16:27:31 crc kubenswrapper[4874]: I0217 16:27:31.794292 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"53a7ab1d-25fe-4d79-9778-fe644b1e97b8","Type":"ContainerDied","Data":"5e12003af643286586ae6762076102d354e79e51edd65b6466a1263ccde7abd3"} Feb 17 16:27:31 crc kubenswrapper[4874]: I0217 16:27:31.801058 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.801042028 podStartE2EDuration="4.801042028s" podCreationTimestamp="2026-02-17 16:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:27:31.798680571 +0000 UTC m=+1462.093069142" watchObservedRunningTime="2026-02-17 16:27:31.801042028 +0000 UTC m=+1462.095430589" Feb 17 16:27:31 crc kubenswrapper[4874]: I0217 16:27:31.835948 4874 scope.go:117] "RemoveContainer" containerID="98e8ee24c36aa88aac7301c5560bf90638af7061793244e04fc5395ccb1fa82d" Feb 17 16:27:31 crc kubenswrapper[4874]: I0217 16:27:31.841563 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-pd7kk"] Feb 17 16:27:31 crc kubenswrapper[4874]: I0217 16:27:31.855736 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-pd7kk"] Feb 17 16:27:32 crc 
kubenswrapper[4874]: I0217 16:27:32.472148 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3e465d4-50df-419e-b724-3e6b957613e5" path="/var/lib/kubelet/pods/f3e465d4-50df-419e-b724-3e6b957613e5/volumes" Feb 17 16:27:32 crc kubenswrapper[4874]: I0217 16:27:32.824674 4874 generic.go:334] "Generic (PLEG): container finished" podID="53a7ab1d-25fe-4d79-9778-fe644b1e97b8" containerID="93972332b50e160e05cf88dadcd3682f9b79107fbb93c98d474cd76ab5234dc7" exitCode=0 Feb 17 16:27:32 crc kubenswrapper[4874]: I0217 16:27:32.825054 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"53a7ab1d-25fe-4d79-9778-fe644b1e97b8","Type":"ContainerDied","Data":"93972332b50e160e05cf88dadcd3682f9b79107fbb93c98d474cd76ab5234dc7"} Feb 17 16:27:33 crc kubenswrapper[4874]: I0217 16:27:33.843234 4874 generic.go:334] "Generic (PLEG): container finished" podID="53a7ab1d-25fe-4d79-9778-fe644b1e97b8" containerID="38f8355201bcac6e77c5e365370d17706b8f06badf929c89e9b752dcc5eb6036" exitCode=0 Feb 17 16:27:33 crc kubenswrapper[4874]: I0217 16:27:33.843327 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"53a7ab1d-25fe-4d79-9778-fe644b1e97b8","Type":"ContainerDied","Data":"38f8355201bcac6e77c5e365370d17706b8f06badf929c89e9b752dcc5eb6036"} Feb 17 16:27:33 crc kubenswrapper[4874]: I0217 16:27:33.848117 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e96e4768-c5ef-4719-aabf-25094ae858c1","Type":"ContainerStarted","Data":"368663f1117f9cc4af033c9f0b80cb7d7220bb7e6b9ea51bd00c7105c1d39bb6"} Feb 17 16:27:35 crc kubenswrapper[4874]: I0217 16:27:35.870446 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e96e4768-c5ef-4719-aabf-25094ae858c1","Type":"ContainerStarted","Data":"25a4e2584009a21c42ad0e81136a78b0ea0fc2563d32832d384b33a12514248a"} Feb 17 16:27:35 crc kubenswrapper[4874]: I0217 16:27:35.872969 4874 
generic.go:334] "Generic (PLEG): container finished" podID="677d7b63-59f1-4829-9478-f59253741cbc" containerID="500bcb02302837a39c1f56bacbc15e09e11785af7b0b611384cca00f2bc6ea82" exitCode=0 Feb 17 16:27:35 crc kubenswrapper[4874]: I0217 16:27:35.873022 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8872n" event={"ID":"677d7b63-59f1-4829-9478-f59253741cbc","Type":"ContainerDied","Data":"500bcb02302837a39c1f56bacbc15e09e11785af7b0b611384cca00f2bc6ea82"} Feb 17 16:27:37 crc kubenswrapper[4874]: I0217 16:27:37.305329 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8872n" Feb 17 16:27:37 crc kubenswrapper[4874]: I0217 16:27:37.442793 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/677d7b63-59f1-4829-9478-f59253741cbc-combined-ca-bundle\") pod \"677d7b63-59f1-4829-9478-f59253741cbc\" (UID: \"677d7b63-59f1-4829-9478-f59253741cbc\") " Feb 17 16:27:37 crc kubenswrapper[4874]: I0217 16:27:37.443191 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/677d7b63-59f1-4829-9478-f59253741cbc-config-data\") pod \"677d7b63-59f1-4829-9478-f59253741cbc\" (UID: \"677d7b63-59f1-4829-9478-f59253741cbc\") " Feb 17 16:27:37 crc kubenswrapper[4874]: I0217 16:27:37.443215 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/677d7b63-59f1-4829-9478-f59253741cbc-scripts\") pod \"677d7b63-59f1-4829-9478-f59253741cbc\" (UID: \"677d7b63-59f1-4829-9478-f59253741cbc\") " Feb 17 16:27:37 crc kubenswrapper[4874]: I0217 16:27:37.443308 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9966\" (UniqueName: 
\"kubernetes.io/projected/677d7b63-59f1-4829-9478-f59253741cbc-kube-api-access-b9966\") pod \"677d7b63-59f1-4829-9478-f59253741cbc\" (UID: \"677d7b63-59f1-4829-9478-f59253741cbc\") " Feb 17 16:27:37 crc kubenswrapper[4874]: I0217 16:27:37.449005 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/677d7b63-59f1-4829-9478-f59253741cbc-kube-api-access-b9966" (OuterVolumeSpecName: "kube-api-access-b9966") pod "677d7b63-59f1-4829-9478-f59253741cbc" (UID: "677d7b63-59f1-4829-9478-f59253741cbc"). InnerVolumeSpecName "kube-api-access-b9966". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:27:37 crc kubenswrapper[4874]: I0217 16:27:37.457512 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/677d7b63-59f1-4829-9478-f59253741cbc-scripts" (OuterVolumeSpecName: "scripts") pod "677d7b63-59f1-4829-9478-f59253741cbc" (UID: "677d7b63-59f1-4829-9478-f59253741cbc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:27:37 crc kubenswrapper[4874]: I0217 16:27:37.484252 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/677d7b63-59f1-4829-9478-f59253741cbc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "677d7b63-59f1-4829-9478-f59253741cbc" (UID: "677d7b63-59f1-4829-9478-f59253741cbc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:27:37 crc kubenswrapper[4874]: I0217 16:27:37.504688 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/677d7b63-59f1-4829-9478-f59253741cbc-config-data" (OuterVolumeSpecName: "config-data") pod "677d7b63-59f1-4829-9478-f59253741cbc" (UID: "677d7b63-59f1-4829-9478-f59253741cbc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:27:37 crc kubenswrapper[4874]: I0217 16:27:37.546558 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/677d7b63-59f1-4829-9478-f59253741cbc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:37 crc kubenswrapper[4874]: I0217 16:27:37.546595 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/677d7b63-59f1-4829-9478-f59253741cbc-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:37 crc kubenswrapper[4874]: I0217 16:27:37.546606 4874 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/677d7b63-59f1-4829-9478-f59253741cbc-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:37 crc kubenswrapper[4874]: I0217 16:27:37.546617 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9966\" (UniqueName: \"kubernetes.io/projected/677d7b63-59f1-4829-9478-f59253741cbc-kube-api-access-b9966\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:37 crc kubenswrapper[4874]: I0217 16:27:37.906701 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e96e4768-c5ef-4719-aabf-25094ae858c1","Type":"ContainerStarted","Data":"75adcb332b7ee66fec732fc40ec8733c1193d9b6b0a8bd3cb0b1f13f68c056da"} Feb 17 16:27:37 crc kubenswrapper[4874]: I0217 16:27:37.909547 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8872n" event={"ID":"677d7b63-59f1-4829-9478-f59253741cbc","Type":"ContainerDied","Data":"01881f7e981ab251b5ec122cfae70c668593f6be992ff9a1b078333faf81aae0"} Feb 17 16:27:37 crc kubenswrapper[4874]: I0217 16:27:37.909571 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01881f7e981ab251b5ec122cfae70c668593f6be992ff9a1b078333faf81aae0" Feb 17 16:27:37 crc kubenswrapper[4874]: I0217 
16:27:37.909629 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8872n" Feb 17 16:27:38 crc kubenswrapper[4874]: I0217 16:27:38.038512 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 16:27:38 crc kubenswrapper[4874]: I0217 16:27:38.038583 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 16:27:38 crc kubenswrapper[4874]: I0217 16:27:38.105991 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:27:38 crc kubenswrapper[4874]: I0217 16:27:38.106310 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="5ef5b1fe-9e55-4310-b49b-75334cac9bb7" containerName="nova-scheduler-scheduler" containerID="cri-o://2b396139b3ab54668f220ada16a5b77714915b0727a2f9b6278e943319aa416d" gracePeriod=30 Feb 17 16:27:38 crc kubenswrapper[4874]: I0217 16:27:38.124125 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:27:38 crc kubenswrapper[4874]: I0217 16:27:38.143977 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:27:38 crc kubenswrapper[4874]: I0217 16:27:38.144228 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b101148a-34d1-4cff-949a-0432ee3225b1" containerName="nova-metadata-log" containerID="cri-o://e9b69bdb44c1393d68a4f9e0f2499c8b9bd71113a70834b889a225b54655e318" gracePeriod=30 Feb 17 16:27:38 crc kubenswrapper[4874]: I0217 16:27:38.144708 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b101148a-34d1-4cff-949a-0432ee3225b1" containerName="nova-metadata-metadata" containerID="cri-o://734b6703b72dfd578de3c36a85491658483c7f9716f1b277c9d7a864eb5d1ab7" gracePeriod=30 Feb 
17 16:27:38 crc kubenswrapper[4874]: I0217 16:27:38.929831 4874 generic.go:334] "Generic (PLEG): container finished" podID="b101148a-34d1-4cff-949a-0432ee3225b1" containerID="e9b69bdb44c1393d68a4f9e0f2499c8b9bd71113a70834b889a225b54655e318" exitCode=143 Feb 17 16:27:38 crc kubenswrapper[4874]: I0217 16:27:38.930267 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b101148a-34d1-4cff-949a-0432ee3225b1","Type":"ContainerDied","Data":"e9b69bdb44c1393d68a4f9e0f2499c8b9bd71113a70834b889a225b54655e318"} Feb 17 16:27:38 crc kubenswrapper[4874]: I0217 16:27:38.934427 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2d7ecfba-048e-476b-9cc9-dd1eda535ab1" containerName="nova-api-log" containerID="cri-o://fbe7fbfe24feef88aad7c07bf47c23957fa44a04d137d88e82b5cc98c1a086fa" gracePeriod=30 Feb 17 16:27:38 crc kubenswrapper[4874]: I0217 16:27:38.934749 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2d7ecfba-048e-476b-9cc9-dd1eda535ab1" containerName="nova-api-api" containerID="cri-o://a5a1be6cdb3cd2d721024c5b0986a560fa78c5021260b6fd7813df6c6526ace4" gracePeriod=30 Feb 17 16:27:38 crc kubenswrapper[4874]: I0217 16:27:38.934896 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e96e4768-c5ef-4719-aabf-25094ae858c1" containerName="ceilometer-central-agent" containerID="cri-o://368663f1117f9cc4af033c9f0b80cb7d7220bb7e6b9ea51bd00c7105c1d39bb6" gracePeriod=30 Feb 17 16:27:38 crc kubenswrapper[4874]: I0217 16:27:38.934991 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e96e4768-c5ef-4719-aabf-25094ae858c1" containerName="proxy-httpd" containerID="cri-o://022f777294de1fd465b901ef3ee739094c66037093456e56cc2ea9ea7bfb9b26" gracePeriod=30 Feb 17 16:27:38 crc kubenswrapper[4874]: I0217 16:27:38.934999 
4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e96e4768-c5ef-4719-aabf-25094ae858c1","Type":"ContainerStarted","Data":"022f777294de1fd465b901ef3ee739094c66037093456e56cc2ea9ea7bfb9b26"} Feb 17 16:27:38 crc kubenswrapper[4874]: I0217 16:27:38.935033 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 16:27:38 crc kubenswrapper[4874]: I0217 16:27:38.935045 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e96e4768-c5ef-4719-aabf-25094ae858c1" containerName="sg-core" containerID="cri-o://75adcb332b7ee66fec732fc40ec8733c1193d9b6b0a8bd3cb0b1f13f68c056da" gracePeriod=30 Feb 17 16:27:38 crc kubenswrapper[4874]: I0217 16:27:38.935095 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e96e4768-c5ef-4719-aabf-25094ae858c1" containerName="ceilometer-notification-agent" containerID="cri-o://25a4e2584009a21c42ad0e81136a78b0ea0fc2563d32832d384b33a12514248a" gracePeriod=30 Feb 17 16:27:38 crc kubenswrapper[4874]: I0217 16:27:38.955175 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2d7ecfba-048e-476b-9cc9-dd1eda535ab1" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.253:8774/\": EOF" Feb 17 16:27:38 crc kubenswrapper[4874]: I0217 16:27:38.955289 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2d7ecfba-048e-476b-9cc9-dd1eda535ab1" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.253:8774/\": EOF" Feb 17 16:27:38 crc kubenswrapper[4874]: I0217 16:27:38.968649 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.685184814 podStartE2EDuration="10.968630587s" podCreationTimestamp="2026-02-17 16:27:28 +0000 UTC" firstStartedPulling="2026-02-17 
16:27:29.886222089 +0000 UTC m=+1460.180610650" lastFinishedPulling="2026-02-17 16:27:38.169667862 +0000 UTC m=+1468.464056423" observedRunningTime="2026-02-17 16:27:38.958593932 +0000 UTC m=+1469.252982513" watchObservedRunningTime="2026-02-17 16:27:38.968630587 +0000 UTC m=+1469.263019138" Feb 17 16:27:39 crc kubenswrapper[4874]: I0217 16:27:39.945957 4874 generic.go:334] "Generic (PLEG): container finished" podID="5ef5b1fe-9e55-4310-b49b-75334cac9bb7" containerID="2b396139b3ab54668f220ada16a5b77714915b0727a2f9b6278e943319aa416d" exitCode=0 Feb 17 16:27:39 crc kubenswrapper[4874]: I0217 16:27:39.946200 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5ef5b1fe-9e55-4310-b49b-75334cac9bb7","Type":"ContainerDied","Data":"2b396139b3ab54668f220ada16a5b77714915b0727a2f9b6278e943319aa416d"} Feb 17 16:27:39 crc kubenswrapper[4874]: I0217 16:27:39.946223 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5ef5b1fe-9e55-4310-b49b-75334cac9bb7","Type":"ContainerDied","Data":"75b5a2431674859079367a9d64cda1964d55989d47c84a306148f661f60d486d"} Feb 17 16:27:39 crc kubenswrapper[4874]: I0217 16:27:39.946234 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75b5a2431674859079367a9d64cda1964d55989d47c84a306148f661f60d486d" Feb 17 16:27:39 crc kubenswrapper[4874]: I0217 16:27:39.948272 4874 generic.go:334] "Generic (PLEG): container finished" podID="2d7ecfba-048e-476b-9cc9-dd1eda535ab1" containerID="fbe7fbfe24feef88aad7c07bf47c23957fa44a04d137d88e82b5cc98c1a086fa" exitCode=143 Feb 17 16:27:39 crc kubenswrapper[4874]: I0217 16:27:39.948365 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2d7ecfba-048e-476b-9cc9-dd1eda535ab1","Type":"ContainerDied","Data":"fbe7fbfe24feef88aad7c07bf47c23957fa44a04d137d88e82b5cc98c1a086fa"} Feb 17 16:27:39 crc kubenswrapper[4874]: I0217 16:27:39.951432 4874 
generic.go:334] "Generic (PLEG): container finished" podID="e96e4768-c5ef-4719-aabf-25094ae858c1" containerID="022f777294de1fd465b901ef3ee739094c66037093456e56cc2ea9ea7bfb9b26" exitCode=0 Feb 17 16:27:39 crc kubenswrapper[4874]: I0217 16:27:39.951450 4874 generic.go:334] "Generic (PLEG): container finished" podID="e96e4768-c5ef-4719-aabf-25094ae858c1" containerID="75adcb332b7ee66fec732fc40ec8733c1193d9b6b0a8bd3cb0b1f13f68c056da" exitCode=2 Feb 17 16:27:39 crc kubenswrapper[4874]: I0217 16:27:39.951457 4874 generic.go:334] "Generic (PLEG): container finished" podID="e96e4768-c5ef-4719-aabf-25094ae858c1" containerID="25a4e2584009a21c42ad0e81136a78b0ea0fc2563d32832d384b33a12514248a" exitCode=0 Feb 17 16:27:39 crc kubenswrapper[4874]: I0217 16:27:39.951470 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e96e4768-c5ef-4719-aabf-25094ae858c1","Type":"ContainerDied","Data":"022f777294de1fd465b901ef3ee739094c66037093456e56cc2ea9ea7bfb9b26"} Feb 17 16:27:39 crc kubenswrapper[4874]: I0217 16:27:39.951485 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e96e4768-c5ef-4719-aabf-25094ae858c1","Type":"ContainerDied","Data":"75adcb332b7ee66fec732fc40ec8733c1193d9b6b0a8bd3cb0b1f13f68c056da"} Feb 17 16:27:39 crc kubenswrapper[4874]: I0217 16:27:39.951494 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e96e4768-c5ef-4719-aabf-25094ae858c1","Type":"ContainerDied","Data":"25a4e2584009a21c42ad0e81136a78b0ea0fc2563d32832d384b33a12514248a"} Feb 17 16:27:39 crc kubenswrapper[4874]: I0217 16:27:39.992350 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:27:40 crc kubenswrapper[4874]: I0217 16:27:40.113704 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ef5b1fe-9e55-4310-b49b-75334cac9bb7-config-data\") pod \"5ef5b1fe-9e55-4310-b49b-75334cac9bb7\" (UID: \"5ef5b1fe-9e55-4310-b49b-75334cac9bb7\") " Feb 17 16:27:40 crc kubenswrapper[4874]: I0217 16:27:40.113993 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ef5b1fe-9e55-4310-b49b-75334cac9bb7-combined-ca-bundle\") pod \"5ef5b1fe-9e55-4310-b49b-75334cac9bb7\" (UID: \"5ef5b1fe-9e55-4310-b49b-75334cac9bb7\") " Feb 17 16:27:40 crc kubenswrapper[4874]: I0217 16:27:40.114152 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkdp6\" (UniqueName: \"kubernetes.io/projected/5ef5b1fe-9e55-4310-b49b-75334cac9bb7-kube-api-access-mkdp6\") pod \"5ef5b1fe-9e55-4310-b49b-75334cac9bb7\" (UID: \"5ef5b1fe-9e55-4310-b49b-75334cac9bb7\") " Feb 17 16:27:40 crc kubenswrapper[4874]: I0217 16:27:40.119150 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ef5b1fe-9e55-4310-b49b-75334cac9bb7-kube-api-access-mkdp6" (OuterVolumeSpecName: "kube-api-access-mkdp6") pod "5ef5b1fe-9e55-4310-b49b-75334cac9bb7" (UID: "5ef5b1fe-9e55-4310-b49b-75334cac9bb7"). InnerVolumeSpecName "kube-api-access-mkdp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:27:40 crc kubenswrapper[4874]: I0217 16:27:40.172006 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ef5b1fe-9e55-4310-b49b-75334cac9bb7-config-data" (OuterVolumeSpecName: "config-data") pod "5ef5b1fe-9e55-4310-b49b-75334cac9bb7" (UID: "5ef5b1fe-9e55-4310-b49b-75334cac9bb7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:27:40 crc kubenswrapper[4874]: I0217 16:27:40.178321 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ef5b1fe-9e55-4310-b49b-75334cac9bb7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5ef5b1fe-9e55-4310-b49b-75334cac9bb7" (UID: "5ef5b1fe-9e55-4310-b49b-75334cac9bb7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:27:40 crc kubenswrapper[4874]: I0217 16:27:40.217289 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkdp6\" (UniqueName: \"kubernetes.io/projected/5ef5b1fe-9e55-4310-b49b-75334cac9bb7-kube-api-access-mkdp6\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:40 crc kubenswrapper[4874]: I0217 16:27:40.217327 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ef5b1fe-9e55-4310-b49b-75334cac9bb7-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:40 crc kubenswrapper[4874]: I0217 16:27:40.217343 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ef5b1fe-9e55-4310-b49b-75334cac9bb7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:40 crc kubenswrapper[4874]: I0217 16:27:40.961851 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:27:40 crc kubenswrapper[4874]: I0217 16:27:40.989873 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:27:41 crc kubenswrapper[4874]: I0217 16:27:41.007698 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:27:41 crc kubenswrapper[4874]: I0217 16:27:41.044223 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:27:41 crc kubenswrapper[4874]: E0217 16:27:41.044900 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="677d7b63-59f1-4829-9478-f59253741cbc" containerName="nova-manage" Feb 17 16:27:41 crc kubenswrapper[4874]: I0217 16:27:41.044922 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="677d7b63-59f1-4829-9478-f59253741cbc" containerName="nova-manage" Feb 17 16:27:41 crc kubenswrapper[4874]: E0217 16:27:41.044957 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3e465d4-50df-419e-b724-3e6b957613e5" containerName="dnsmasq-dns" Feb 17 16:27:41 crc kubenswrapper[4874]: I0217 16:27:41.044965 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3e465d4-50df-419e-b724-3e6b957613e5" containerName="dnsmasq-dns" Feb 17 16:27:41 crc kubenswrapper[4874]: E0217 16:27:41.044977 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3e465d4-50df-419e-b724-3e6b957613e5" containerName="init" Feb 17 16:27:41 crc kubenswrapper[4874]: I0217 16:27:41.044986 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3e465d4-50df-419e-b724-3e6b957613e5" containerName="init" Feb 17 16:27:41 crc kubenswrapper[4874]: E0217 16:27:41.045023 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ef5b1fe-9e55-4310-b49b-75334cac9bb7" containerName="nova-scheduler-scheduler" Feb 17 16:27:41 crc kubenswrapper[4874]: I0217 16:27:41.045031 4874 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5ef5b1fe-9e55-4310-b49b-75334cac9bb7" containerName="nova-scheduler-scheduler" Feb 17 16:27:41 crc kubenswrapper[4874]: I0217 16:27:41.045346 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3e465d4-50df-419e-b724-3e6b957613e5" containerName="dnsmasq-dns" Feb 17 16:27:41 crc kubenswrapper[4874]: I0217 16:27:41.045384 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="677d7b63-59f1-4829-9478-f59253741cbc" containerName="nova-manage" Feb 17 16:27:41 crc kubenswrapper[4874]: I0217 16:27:41.045406 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ef5b1fe-9e55-4310-b49b-75334cac9bb7" containerName="nova-scheduler-scheduler" Feb 17 16:27:41 crc kubenswrapper[4874]: I0217 16:27:41.046386 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:27:41 crc kubenswrapper[4874]: I0217 16:27:41.055061 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 17 16:27:41 crc kubenswrapper[4874]: I0217 16:27:41.064748 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:27:41 crc kubenswrapper[4874]: I0217 16:27:41.141364 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e2092ca-d8a4-49b2-a40f-5a487ebcdab0-config-data\") pod \"nova-scheduler-0\" (UID: \"2e2092ca-d8a4-49b2-a40f-5a487ebcdab0\") " pod="openstack/nova-scheduler-0" Feb 17 16:27:41 crc kubenswrapper[4874]: I0217 16:27:41.141472 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2092ca-d8a4-49b2-a40f-5a487ebcdab0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2e2092ca-d8a4-49b2-a40f-5a487ebcdab0\") " pod="openstack/nova-scheduler-0" Feb 17 16:27:41 crc kubenswrapper[4874]: 
I0217 16:27:41.141532 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wrtz\" (UniqueName: \"kubernetes.io/projected/2e2092ca-d8a4-49b2-a40f-5a487ebcdab0-kube-api-access-6wrtz\") pod \"nova-scheduler-0\" (UID: \"2e2092ca-d8a4-49b2-a40f-5a487ebcdab0\") " pod="openstack/nova-scheduler-0" Feb 17 16:27:41 crc kubenswrapper[4874]: I0217 16:27:41.243338 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wrtz\" (UniqueName: \"kubernetes.io/projected/2e2092ca-d8a4-49b2-a40f-5a487ebcdab0-kube-api-access-6wrtz\") pod \"nova-scheduler-0\" (UID: \"2e2092ca-d8a4-49b2-a40f-5a487ebcdab0\") " pod="openstack/nova-scheduler-0" Feb 17 16:27:41 crc kubenswrapper[4874]: I0217 16:27:41.243677 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e2092ca-d8a4-49b2-a40f-5a487ebcdab0-config-data\") pod \"nova-scheduler-0\" (UID: \"2e2092ca-d8a4-49b2-a40f-5a487ebcdab0\") " pod="openstack/nova-scheduler-0" Feb 17 16:27:41 crc kubenswrapper[4874]: I0217 16:27:41.243757 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2092ca-d8a4-49b2-a40f-5a487ebcdab0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2e2092ca-d8a4-49b2-a40f-5a487ebcdab0\") " pod="openstack/nova-scheduler-0" Feb 17 16:27:41 crc kubenswrapper[4874]: I0217 16:27:41.248777 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2092ca-d8a4-49b2-a40f-5a487ebcdab0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2e2092ca-d8a4-49b2-a40f-5a487ebcdab0\") " pod="openstack/nova-scheduler-0" Feb 17 16:27:41 crc kubenswrapper[4874]: I0217 16:27:41.249911 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2e2092ca-d8a4-49b2-a40f-5a487ebcdab0-config-data\") pod \"nova-scheduler-0\" (UID: \"2e2092ca-d8a4-49b2-a40f-5a487ebcdab0\") " pod="openstack/nova-scheduler-0" Feb 17 16:27:41 crc kubenswrapper[4874]: I0217 16:27:41.261439 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wrtz\" (UniqueName: \"kubernetes.io/projected/2e2092ca-d8a4-49b2-a40f-5a487ebcdab0-kube-api-access-6wrtz\") pod \"nova-scheduler-0\" (UID: \"2e2092ca-d8a4-49b2-a40f-5a487ebcdab0\") " pod="openstack/nova-scheduler-0" Feb 17 16:27:41 crc kubenswrapper[4874]: I0217 16:27:41.361894 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:27:41 crc kubenswrapper[4874]: I0217 16:27:41.428832 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="b101148a-34d1-4cff-949a-0432ee3225b1" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.246:8775/\": dial tcp 10.217.0.246:8775: connect: connection refused" Feb 17 16:27:41 crc kubenswrapper[4874]: I0217 16:27:41.429343 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="b101148a-34d1-4cff-949a-0432ee3225b1" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.246:8775/\": dial tcp 10.217.0.246:8775: connect: connection refused" Feb 17 16:27:41 crc kubenswrapper[4874]: I0217 16:27:41.888696 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:27:41 crc kubenswrapper[4874]: I0217 16:27:41.970437 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b101148a-34d1-4cff-949a-0432ee3225b1-combined-ca-bundle\") pod \"b101148a-34d1-4cff-949a-0432ee3225b1\" (UID: \"b101148a-34d1-4cff-949a-0432ee3225b1\") " Feb 17 16:27:41 crc kubenswrapper[4874]: I0217 16:27:41.970503 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b101148a-34d1-4cff-949a-0432ee3225b1-config-data\") pod \"b101148a-34d1-4cff-949a-0432ee3225b1\" (UID: \"b101148a-34d1-4cff-949a-0432ee3225b1\") " Feb 17 16:27:41 crc kubenswrapper[4874]: I0217 16:27:41.970553 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b101148a-34d1-4cff-949a-0432ee3225b1-logs\") pod \"b101148a-34d1-4cff-949a-0432ee3225b1\" (UID: \"b101148a-34d1-4cff-949a-0432ee3225b1\") " Feb 17 16:27:41 crc kubenswrapper[4874]: I0217 16:27:41.970778 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-656fj\" (UniqueName: \"kubernetes.io/projected/b101148a-34d1-4cff-949a-0432ee3225b1-kube-api-access-656fj\") pod \"b101148a-34d1-4cff-949a-0432ee3225b1\" (UID: \"b101148a-34d1-4cff-949a-0432ee3225b1\") " Feb 17 16:27:41 crc kubenswrapper[4874]: I0217 16:27:41.970808 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b101148a-34d1-4cff-949a-0432ee3225b1-nova-metadata-tls-certs\") pod \"b101148a-34d1-4cff-949a-0432ee3225b1\" (UID: \"b101148a-34d1-4cff-949a-0432ee3225b1\") " Feb 17 16:27:41 crc kubenswrapper[4874]: I0217 16:27:41.972488 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/b101148a-34d1-4cff-949a-0432ee3225b1-logs" (OuterVolumeSpecName: "logs") pod "b101148a-34d1-4cff-949a-0432ee3225b1" (UID: "b101148a-34d1-4cff-949a-0432ee3225b1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:27:41 crc kubenswrapper[4874]: I0217 16:27:41.978550 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b101148a-34d1-4cff-949a-0432ee3225b1-kube-api-access-656fj" (OuterVolumeSpecName: "kube-api-access-656fj") pod "b101148a-34d1-4cff-949a-0432ee3225b1" (UID: "b101148a-34d1-4cff-949a-0432ee3225b1"). InnerVolumeSpecName "kube-api-access-656fj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:27:41 crc kubenswrapper[4874]: I0217 16:27:41.981837 4874 generic.go:334] "Generic (PLEG): container finished" podID="b101148a-34d1-4cff-949a-0432ee3225b1" containerID="734b6703b72dfd578de3c36a85491658483c7f9716f1b277c9d7a864eb5d1ab7" exitCode=0 Feb 17 16:27:41 crc kubenswrapper[4874]: I0217 16:27:41.981878 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b101148a-34d1-4cff-949a-0432ee3225b1","Type":"ContainerDied","Data":"734b6703b72dfd578de3c36a85491658483c7f9716f1b277c9d7a864eb5d1ab7"} Feb 17 16:27:41 crc kubenswrapper[4874]: I0217 16:27:41.981902 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b101148a-34d1-4cff-949a-0432ee3225b1","Type":"ContainerDied","Data":"c218000637d54061efce014286d64f1cb601ea3eb618838539370b1e11994463"} Feb 17 16:27:41 crc kubenswrapper[4874]: I0217 16:27:41.981919 4874 scope.go:117] "RemoveContainer" containerID="734b6703b72dfd578de3c36a85491658483c7f9716f1b277c9d7a864eb5d1ab7" Feb 17 16:27:41 crc kubenswrapper[4874]: I0217 16:27:41.982033 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.032484 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.038727 4874 scope.go:117] "RemoveContainer" containerID="e9b69bdb44c1393d68a4f9e0f2499c8b9bd71113a70834b889a225b54655e318" Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.044651 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b101148a-34d1-4cff-949a-0432ee3225b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b101148a-34d1-4cff-949a-0432ee3225b1" (UID: "b101148a-34d1-4cff-949a-0432ee3225b1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.056556 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b101148a-34d1-4cff-949a-0432ee3225b1-config-data" (OuterVolumeSpecName: "config-data") pod "b101148a-34d1-4cff-949a-0432ee3225b1" (UID: "b101148a-34d1-4cff-949a-0432ee3225b1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.073869 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-656fj\" (UniqueName: \"kubernetes.io/projected/b101148a-34d1-4cff-949a-0432ee3225b1-kube-api-access-656fj\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.073896 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b101148a-34d1-4cff-949a-0432ee3225b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.073906 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b101148a-34d1-4cff-949a-0432ee3225b1-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.073916 4874 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b101148a-34d1-4cff-949a-0432ee3225b1-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.092376 4874 scope.go:117] "RemoveContainer" containerID="734b6703b72dfd578de3c36a85491658483c7f9716f1b277c9d7a864eb5d1ab7" Feb 17 16:27:42 crc kubenswrapper[4874]: E0217 16:27:42.092870 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"734b6703b72dfd578de3c36a85491658483c7f9716f1b277c9d7a864eb5d1ab7\": container with ID starting with 734b6703b72dfd578de3c36a85491658483c7f9716f1b277c9d7a864eb5d1ab7 not found: ID does not exist" containerID="734b6703b72dfd578de3c36a85491658483c7f9716f1b277c9d7a864eb5d1ab7" Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.092955 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"734b6703b72dfd578de3c36a85491658483c7f9716f1b277c9d7a864eb5d1ab7"} 
err="failed to get container status \"734b6703b72dfd578de3c36a85491658483c7f9716f1b277c9d7a864eb5d1ab7\": rpc error: code = NotFound desc = could not find container \"734b6703b72dfd578de3c36a85491658483c7f9716f1b277c9d7a864eb5d1ab7\": container with ID starting with 734b6703b72dfd578de3c36a85491658483c7f9716f1b277c9d7a864eb5d1ab7 not found: ID does not exist" Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.092981 4874 scope.go:117] "RemoveContainer" containerID="e9b69bdb44c1393d68a4f9e0f2499c8b9bd71113a70834b889a225b54655e318" Feb 17 16:27:42 crc kubenswrapper[4874]: E0217 16:27:42.093776 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9b69bdb44c1393d68a4f9e0f2499c8b9bd71113a70834b889a225b54655e318\": container with ID starting with e9b69bdb44c1393d68a4f9e0f2499c8b9bd71113a70834b889a225b54655e318 not found: ID does not exist" containerID="e9b69bdb44c1393d68a4f9e0f2499c8b9bd71113a70834b889a225b54655e318" Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.093814 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9b69bdb44c1393d68a4f9e0f2499c8b9bd71113a70834b889a225b54655e318"} err="failed to get container status \"e9b69bdb44c1393d68a4f9e0f2499c8b9bd71113a70834b889a225b54655e318\": rpc error: code = NotFound desc = could not find container \"e9b69bdb44c1393d68a4f9e0f2499c8b9bd71113a70834b889a225b54655e318\": container with ID starting with e9b69bdb44c1393d68a4f9e0f2499c8b9bd71113a70834b889a225b54655e318 not found: ID does not exist" Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.097903 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b101148a-34d1-4cff-949a-0432ee3225b1-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "b101148a-34d1-4cff-949a-0432ee3225b1" (UID: "b101148a-34d1-4cff-949a-0432ee3225b1"). 
InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.178387 4874 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b101148a-34d1-4cff-949a-0432ee3225b1-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.357656 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.370958 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.407840 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:27:42 crc kubenswrapper[4874]: E0217 16:27:42.408433 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b101148a-34d1-4cff-949a-0432ee3225b1" containerName="nova-metadata-metadata" Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.408454 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="b101148a-34d1-4cff-949a-0432ee3225b1" containerName="nova-metadata-metadata" Feb 17 16:27:42 crc kubenswrapper[4874]: E0217 16:27:42.408475 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b101148a-34d1-4cff-949a-0432ee3225b1" containerName="nova-metadata-log" Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.408483 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="b101148a-34d1-4cff-949a-0432ee3225b1" containerName="nova-metadata-log" Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.408700 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="b101148a-34d1-4cff-949a-0432ee3225b1" containerName="nova-metadata-metadata" Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.408732 4874 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b101148a-34d1-4cff-949a-0432ee3225b1" containerName="nova-metadata-log" Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.413531 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.415547 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.415913 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.436636 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.470297 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ef5b1fe-9e55-4310-b49b-75334cac9bb7" path="/var/lib/kubelet/pods/5ef5b1fe-9e55-4310-b49b-75334cac9bb7/volumes" Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.471032 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b101148a-34d1-4cff-949a-0432ee3225b1" path="/var/lib/kubelet/pods/b101148a-34d1-4cff-949a-0432ee3225b1/volumes" Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.517318 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.590697 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c35948a-7c46-4998-9156-2fdedcaac5e9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5c35948a-7c46-4998-9156-2fdedcaac5e9\") " pod="openstack/nova-metadata-0" Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.590771 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c35948a-7c46-4998-9156-2fdedcaac5e9-config-data\") pod \"nova-metadata-0\" (UID: \"5c35948a-7c46-4998-9156-2fdedcaac5e9\") " pod="openstack/nova-metadata-0" Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.590873 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5sst\" (UniqueName: \"kubernetes.io/projected/5c35948a-7c46-4998-9156-2fdedcaac5e9-kube-api-access-d5sst\") pod \"nova-metadata-0\" (UID: \"5c35948a-7c46-4998-9156-2fdedcaac5e9\") " pod="openstack/nova-metadata-0" Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.590899 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c35948a-7c46-4998-9156-2fdedcaac5e9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5c35948a-7c46-4998-9156-2fdedcaac5e9\") " pod="openstack/nova-metadata-0" Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.590921 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c35948a-7c46-4998-9156-2fdedcaac5e9-logs\") pod \"nova-metadata-0\" (UID: \"5c35948a-7c46-4998-9156-2fdedcaac5e9\") " pod="openstack/nova-metadata-0" Feb 17 16:27:42 crc 
kubenswrapper[4874]: I0217 16:27:42.693020 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e96e4768-c5ef-4719-aabf-25094ae858c1-sg-core-conf-yaml\") pod \"e96e4768-c5ef-4719-aabf-25094ae858c1\" (UID: \"e96e4768-c5ef-4719-aabf-25094ae858c1\") " Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.693450 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e96e4768-c5ef-4719-aabf-25094ae858c1-combined-ca-bundle\") pod \"e96e4768-c5ef-4719-aabf-25094ae858c1\" (UID: \"e96e4768-c5ef-4719-aabf-25094ae858c1\") " Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.693483 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rt4q\" (UniqueName: \"kubernetes.io/projected/e96e4768-c5ef-4719-aabf-25094ae858c1-kube-api-access-2rt4q\") pod \"e96e4768-c5ef-4719-aabf-25094ae858c1\" (UID: \"e96e4768-c5ef-4719-aabf-25094ae858c1\") " Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.693578 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e96e4768-c5ef-4719-aabf-25094ae858c1-log-httpd\") pod \"e96e4768-c5ef-4719-aabf-25094ae858c1\" (UID: \"e96e4768-c5ef-4719-aabf-25094ae858c1\") " Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.693672 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e96e4768-c5ef-4719-aabf-25094ae858c1-config-data\") pod \"e96e4768-c5ef-4719-aabf-25094ae858c1\" (UID: \"e96e4768-c5ef-4719-aabf-25094ae858c1\") " Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.693707 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/e96e4768-c5ef-4719-aabf-25094ae858c1-run-httpd\") pod \"e96e4768-c5ef-4719-aabf-25094ae858c1\" (UID: \"e96e4768-c5ef-4719-aabf-25094ae858c1\") " Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.693747 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e96e4768-c5ef-4719-aabf-25094ae858c1-scripts\") pod \"e96e4768-c5ef-4719-aabf-25094ae858c1\" (UID: \"e96e4768-c5ef-4719-aabf-25094ae858c1\") " Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.694182 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c35948a-7c46-4998-9156-2fdedcaac5e9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5c35948a-7c46-4998-9156-2fdedcaac5e9\") " pod="openstack/nova-metadata-0" Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.694285 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c35948a-7c46-4998-9156-2fdedcaac5e9-config-data\") pod \"nova-metadata-0\" (UID: \"5c35948a-7c46-4998-9156-2fdedcaac5e9\") " pod="openstack/nova-metadata-0" Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.694405 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5sst\" (UniqueName: \"kubernetes.io/projected/5c35948a-7c46-4998-9156-2fdedcaac5e9-kube-api-access-d5sst\") pod \"nova-metadata-0\" (UID: \"5c35948a-7c46-4998-9156-2fdedcaac5e9\") " pod="openstack/nova-metadata-0" Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.694428 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c35948a-7c46-4998-9156-2fdedcaac5e9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5c35948a-7c46-4998-9156-2fdedcaac5e9\") " pod="openstack/nova-metadata-0" Feb 17 
Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.694444 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c35948a-7c46-4998-9156-2fdedcaac5e9-logs\") pod \"nova-metadata-0\" (UID: \"5c35948a-7c46-4998-9156-2fdedcaac5e9\") " pod="openstack/nova-metadata-0"
Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.694281 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e96e4768-c5ef-4719-aabf-25094ae858c1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e96e4768-c5ef-4719-aabf-25094ae858c1" (UID: "e96e4768-c5ef-4719-aabf-25094ae858c1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.694996 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c35948a-7c46-4998-9156-2fdedcaac5e9-logs\") pod \"nova-metadata-0\" (UID: \"5c35948a-7c46-4998-9156-2fdedcaac5e9\") " pod="openstack/nova-metadata-0"
Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.695587 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e96e4768-c5ef-4719-aabf-25094ae858c1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e96e4768-c5ef-4719-aabf-25094ae858c1" (UID: "e96e4768-c5ef-4719-aabf-25094ae858c1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.698898 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c35948a-7c46-4998-9156-2fdedcaac5e9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5c35948a-7c46-4998-9156-2fdedcaac5e9\") " pod="openstack/nova-metadata-0"
Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.698924 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e96e4768-c5ef-4719-aabf-25094ae858c1-scripts" (OuterVolumeSpecName: "scripts") pod "e96e4768-c5ef-4719-aabf-25094ae858c1" (UID: "e96e4768-c5ef-4719-aabf-25094ae858c1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.701591 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c35948a-7c46-4998-9156-2fdedcaac5e9-config-data\") pod \"nova-metadata-0\" (UID: \"5c35948a-7c46-4998-9156-2fdedcaac5e9\") " pod="openstack/nova-metadata-0"
Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.706715 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c35948a-7c46-4998-9156-2fdedcaac5e9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5c35948a-7c46-4998-9156-2fdedcaac5e9\") " pod="openstack/nova-metadata-0"
Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.709717 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e96e4768-c5ef-4719-aabf-25094ae858c1-kube-api-access-2rt4q" (OuterVolumeSpecName: "kube-api-access-2rt4q") pod "e96e4768-c5ef-4719-aabf-25094ae858c1" (UID: "e96e4768-c5ef-4719-aabf-25094ae858c1"). InnerVolumeSpecName "kube-api-access-2rt4q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.712438 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5sst\" (UniqueName: \"kubernetes.io/projected/5c35948a-7c46-4998-9156-2fdedcaac5e9-kube-api-access-d5sst\") pod \"nova-metadata-0\" (UID: \"5c35948a-7c46-4998-9156-2fdedcaac5e9\") " pod="openstack/nova-metadata-0"
Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.735873 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e96e4768-c5ef-4719-aabf-25094ae858c1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e96e4768-c5ef-4719-aabf-25094ae858c1" (UID: "e96e4768-c5ef-4719-aabf-25094ae858c1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.795802 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e96e4768-c5ef-4719-aabf-25094ae858c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e96e4768-c5ef-4719-aabf-25094ae858c1" (UID: "e96e4768-c5ef-4719-aabf-25094ae858c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.796437 4874 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e96e4768-c5ef-4719-aabf-25094ae858c1-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.796477 4874 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e96e4768-c5ef-4719-aabf-25094ae858c1-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.796490 4874 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e96e4768-c5ef-4719-aabf-25094ae858c1-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.796502 4874 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e96e4768-c5ef-4719-aabf-25094ae858c1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.796514 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e96e4768-c5ef-4719-aabf-25094ae858c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.796526 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rt4q\" (UniqueName: \"kubernetes.io/projected/e96e4768-c5ef-4719-aabf-25094ae858c1-kube-api-access-2rt4q\") on node \"crc\" DevicePath \"\""
Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.813791 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.835131 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e96e4768-c5ef-4719-aabf-25094ae858c1-config-data" (OuterVolumeSpecName: "config-data") pod "e96e4768-c5ef-4719-aabf-25094ae858c1" (UID: "e96e4768-c5ef-4719-aabf-25094ae858c1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:27:42 crc kubenswrapper[4874]: I0217 16:27:42.901909 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e96e4768-c5ef-4719-aabf-25094ae858c1-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.003747 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2e2092ca-d8a4-49b2-a40f-5a487ebcdab0","Type":"ContainerStarted","Data":"f54e51fe85df3b2a123dcd040c10f2dac5114480375b06bec3cf8771e8022448"}
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.003985 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2e2092ca-d8a4-49b2-a40f-5a487ebcdab0","Type":"ContainerStarted","Data":"b563dee67d916cce78f966bf6203c6dbb4a789955e0ef91abbf20a877586b9cd"}
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.009096 4874 generic.go:334] "Generic (PLEG): container finished" podID="e96e4768-c5ef-4719-aabf-25094ae858c1" containerID="368663f1117f9cc4af033c9f0b80cb7d7220bb7e6b9ea51bd00c7105c1d39bb6" exitCode=0
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.009144 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e96e4768-c5ef-4719-aabf-25094ae858c1","Type":"ContainerDied","Data":"368663f1117f9cc4af033c9f0b80cb7d7220bb7e6b9ea51bd00c7105c1d39bb6"}
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.009173 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e96e4768-c5ef-4719-aabf-25094ae858c1","Type":"ContainerDied","Data":"29cb59c9af13c4b0078d664df658b9e458b35b6e66f3f22586f5de79f45e79ac"}
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.009194 4874 scope.go:117] "RemoveContainer" containerID="022f777294de1fd465b901ef3ee739094c66037093456e56cc2ea9ea7bfb9b26"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.009321 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.033062 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.033046571 podStartE2EDuration="3.033046571s" podCreationTimestamp="2026-02-17 16:27:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:27:43.022222407 +0000 UTC m=+1473.316610968" watchObservedRunningTime="2026-02-17 16:27:43.033046571 +0000 UTC m=+1473.327435132"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.048827 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.089123 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.090680 4874 scope.go:117] "RemoveContainer" containerID="75adcb332b7ee66fec732fc40ec8733c1193d9b6b0a8bd3cb0b1f13f68c056da"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.106209 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:27:43 crc kubenswrapper[4874]: E0217 16:27:43.106778 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e96e4768-c5ef-4719-aabf-25094ae858c1" containerName="sg-core"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.106797 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="e96e4768-c5ef-4719-aabf-25094ae858c1" containerName="sg-core"
Feb 17 16:27:43 crc kubenswrapper[4874]: E0217 16:27:43.106825 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e96e4768-c5ef-4719-aabf-25094ae858c1" containerName="ceilometer-notification-agent"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.106832 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="e96e4768-c5ef-4719-aabf-25094ae858c1" containerName="ceilometer-notification-agent"
Feb 17 16:27:43 crc kubenswrapper[4874]: E0217 16:27:43.106846 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e96e4768-c5ef-4719-aabf-25094ae858c1" containerName="proxy-httpd"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.106852 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="e96e4768-c5ef-4719-aabf-25094ae858c1" containerName="proxy-httpd"
Feb 17 16:27:43 crc kubenswrapper[4874]: E0217 16:27:43.106862 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e96e4768-c5ef-4719-aabf-25094ae858c1" containerName="ceilometer-central-agent"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.106868 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="e96e4768-c5ef-4719-aabf-25094ae858c1" containerName="ceilometer-central-agent"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.107216 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="e96e4768-c5ef-4719-aabf-25094ae858c1" containerName="ceilometer-notification-agent"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.107269 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="e96e4768-c5ef-4719-aabf-25094ae858c1" containerName="sg-core"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.107282 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="e96e4768-c5ef-4719-aabf-25094ae858c1" containerName="proxy-httpd"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.107298 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="e96e4768-c5ef-4719-aabf-25094ae858c1" containerName="ceilometer-central-agent"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.109532 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.112772 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.113229 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.119018 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.123742 4874 scope.go:117] "RemoveContainer" containerID="25a4e2584009a21c42ad0e81136a78b0ea0fc2563d32832d384b33a12514248a"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.152423 4874 scope.go:117] "RemoveContainer" containerID="368663f1117f9cc4af033c9f0b80cb7d7220bb7e6b9ea51bd00c7105c1d39bb6"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.188322 4874 scope.go:117] "RemoveContainer" containerID="022f777294de1fd465b901ef3ee739094c66037093456e56cc2ea9ea7bfb9b26"
Feb 17 16:27:43 crc kubenswrapper[4874]: E0217 16:27:43.188788 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"022f777294de1fd465b901ef3ee739094c66037093456e56cc2ea9ea7bfb9b26\": container with ID starting with 022f777294de1fd465b901ef3ee739094c66037093456e56cc2ea9ea7bfb9b26 not found: ID does not exist" containerID="022f777294de1fd465b901ef3ee739094c66037093456e56cc2ea9ea7bfb9b26"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.188827 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"022f777294de1fd465b901ef3ee739094c66037093456e56cc2ea9ea7bfb9b26"} err="failed to get container status \"022f777294de1fd465b901ef3ee739094c66037093456e56cc2ea9ea7bfb9b26\": rpc error: code = NotFound desc = could not find container \"022f777294de1fd465b901ef3ee739094c66037093456e56cc2ea9ea7bfb9b26\": container with ID starting with 022f777294de1fd465b901ef3ee739094c66037093456e56cc2ea9ea7bfb9b26 not found: ID does not exist"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.188852 4874 scope.go:117] "RemoveContainer" containerID="75adcb332b7ee66fec732fc40ec8733c1193d9b6b0a8bd3cb0b1f13f68c056da"
Feb 17 16:27:43 crc kubenswrapper[4874]: E0217 16:27:43.189222 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75adcb332b7ee66fec732fc40ec8733c1193d9b6b0a8bd3cb0b1f13f68c056da\": container with ID starting with 75adcb332b7ee66fec732fc40ec8733c1193d9b6b0a8bd3cb0b1f13f68c056da not found: ID does not exist" containerID="75adcb332b7ee66fec732fc40ec8733c1193d9b6b0a8bd3cb0b1f13f68c056da"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.189252 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75adcb332b7ee66fec732fc40ec8733c1193d9b6b0a8bd3cb0b1f13f68c056da"} err="failed to get container status \"75adcb332b7ee66fec732fc40ec8733c1193d9b6b0a8bd3cb0b1f13f68c056da\": rpc error: code = NotFound desc = could not find container \"75adcb332b7ee66fec732fc40ec8733c1193d9b6b0a8bd3cb0b1f13f68c056da\": container with ID starting with 75adcb332b7ee66fec732fc40ec8733c1193d9b6b0a8bd3cb0b1f13f68c056da not found: ID does not exist"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.189286 4874 scope.go:117] "RemoveContainer" containerID="25a4e2584009a21c42ad0e81136a78b0ea0fc2563d32832d384b33a12514248a"
Feb 17 16:27:43 crc kubenswrapper[4874]: E0217 16:27:43.189879 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25a4e2584009a21c42ad0e81136a78b0ea0fc2563d32832d384b33a12514248a\": container with ID starting with 25a4e2584009a21c42ad0e81136a78b0ea0fc2563d32832d384b33a12514248a not found: ID does not exist" containerID="25a4e2584009a21c42ad0e81136a78b0ea0fc2563d32832d384b33a12514248a"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.189909 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25a4e2584009a21c42ad0e81136a78b0ea0fc2563d32832d384b33a12514248a"} err="failed to get container status \"25a4e2584009a21c42ad0e81136a78b0ea0fc2563d32832d384b33a12514248a\": rpc error: code = NotFound desc = could not find container \"25a4e2584009a21c42ad0e81136a78b0ea0fc2563d32832d384b33a12514248a\": container with ID starting with 25a4e2584009a21c42ad0e81136a78b0ea0fc2563d32832d384b33a12514248a not found: ID does not exist"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.189926 4874 scope.go:117] "RemoveContainer" containerID="368663f1117f9cc4af033c9f0b80cb7d7220bb7e6b9ea51bd00c7105c1d39bb6"
Feb 17 16:27:43 crc kubenswrapper[4874]: E0217 16:27:43.192192 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"368663f1117f9cc4af033c9f0b80cb7d7220bb7e6b9ea51bd00c7105c1d39bb6\": container with ID starting with 368663f1117f9cc4af033c9f0b80cb7d7220bb7e6b9ea51bd00c7105c1d39bb6 not found: ID does not exist" containerID="368663f1117f9cc4af033c9f0b80cb7d7220bb7e6b9ea51bd00c7105c1d39bb6"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.192225 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"368663f1117f9cc4af033c9f0b80cb7d7220bb7e6b9ea51bd00c7105c1d39bb6"} err="failed to get container status \"368663f1117f9cc4af033c9f0b80cb7d7220bb7e6b9ea51bd00c7105c1d39bb6\": rpc error: code = NotFound desc = could not find container \"368663f1117f9cc4af033c9f0b80cb7d7220bb7e6b9ea51bd00c7105c1d39bb6\": container with ID starting with 368663f1117f9cc4af033c9f0b80cb7d7220bb7e6b9ea51bd00c7105c1d39bb6 not found: ID does not exist"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.210245 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3513f094-083b-4e5c-a1c9-b59e8c999e8e-scripts\") pod \"ceilometer-0\" (UID: \"3513f094-083b-4e5c-a1c9-b59e8c999e8e\") " pod="openstack/ceilometer-0"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.210297 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3513f094-083b-4e5c-a1c9-b59e8c999e8e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3513f094-083b-4e5c-a1c9-b59e8c999e8e\") " pod="openstack/ceilometer-0"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.210317 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3513f094-083b-4e5c-a1c9-b59e8c999e8e-run-httpd\") pod \"ceilometer-0\" (UID: \"3513f094-083b-4e5c-a1c9-b59e8c999e8e\") " pod="openstack/ceilometer-0"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.210446 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3513f094-083b-4e5c-a1c9-b59e8c999e8e-log-httpd\") pod \"ceilometer-0\" (UID: \"3513f094-083b-4e5c-a1c9-b59e8c999e8e\") " pod="openstack/ceilometer-0"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.210476 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzz6w\" (UniqueName: \"kubernetes.io/projected/3513f094-083b-4e5c-a1c9-b59e8c999e8e-kube-api-access-jzz6w\") pod \"ceilometer-0\" (UID: \"3513f094-083b-4e5c-a1c9-b59e8c999e8e\") " pod="openstack/ceilometer-0"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.210537 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3513f094-083b-4e5c-a1c9-b59e8c999e8e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3513f094-083b-4e5c-a1c9-b59e8c999e8e\") " pod="openstack/ceilometer-0"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.210567 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3513f094-083b-4e5c-a1c9-b59e8c999e8e-config-data\") pod \"ceilometer-0\" (UID: \"3513f094-083b-4e5c-a1c9-b59e8c999e8e\") " pod="openstack/ceilometer-0"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.312084 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3513f094-083b-4e5c-a1c9-b59e8c999e8e-scripts\") pod \"ceilometer-0\" (UID: \"3513f094-083b-4e5c-a1c9-b59e8c999e8e\") " pod="openstack/ceilometer-0"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.312141 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3513f094-083b-4e5c-a1c9-b59e8c999e8e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3513f094-083b-4e5c-a1c9-b59e8c999e8e\") " pod="openstack/ceilometer-0"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.312161 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3513f094-083b-4e5c-a1c9-b59e8c999e8e-run-httpd\") pod \"ceilometer-0\" (UID: \"3513f094-083b-4e5c-a1c9-b59e8c999e8e\") " pod="openstack/ceilometer-0"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.312300 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3513f094-083b-4e5c-a1c9-b59e8c999e8e-log-httpd\") pod \"ceilometer-0\" (UID: \"3513f094-083b-4e5c-a1c9-b59e8c999e8e\") " pod="openstack/ceilometer-0"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.312330 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzz6w\" (UniqueName: \"kubernetes.io/projected/3513f094-083b-4e5c-a1c9-b59e8c999e8e-kube-api-access-jzz6w\") pod \"ceilometer-0\" (UID: \"3513f094-083b-4e5c-a1c9-b59e8c999e8e\") " pod="openstack/ceilometer-0"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.312395 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3513f094-083b-4e5c-a1c9-b59e8c999e8e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3513f094-083b-4e5c-a1c9-b59e8c999e8e\") " pod="openstack/ceilometer-0"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.312428 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3513f094-083b-4e5c-a1c9-b59e8c999e8e-config-data\") pod \"ceilometer-0\" (UID: \"3513f094-083b-4e5c-a1c9-b59e8c999e8e\") " pod="openstack/ceilometer-0"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.313249 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3513f094-083b-4e5c-a1c9-b59e8c999e8e-run-httpd\") pod \"ceilometer-0\" (UID: \"3513f094-083b-4e5c-a1c9-b59e8c999e8e\") " pod="openstack/ceilometer-0"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.313711 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3513f094-083b-4e5c-a1c9-b59e8c999e8e-log-httpd\") pod \"ceilometer-0\" (UID: \"3513f094-083b-4e5c-a1c9-b59e8c999e8e\") " pod="openstack/ceilometer-0"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.316421 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.317796 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3513f094-083b-4e5c-a1c9-b59e8c999e8e-config-data\") pod \"ceilometer-0\" (UID: \"3513f094-083b-4e5c-a1c9-b59e8c999e8e\") " pod="openstack/ceilometer-0"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.318429 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3513f094-083b-4e5c-a1c9-b59e8c999e8e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3513f094-083b-4e5c-a1c9-b59e8c999e8e\") " pod="openstack/ceilometer-0"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.318480 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3513f094-083b-4e5c-a1c9-b59e8c999e8e-scripts\") pod \"ceilometer-0\" (UID: \"3513f094-083b-4e5c-a1c9-b59e8c999e8e\") " pod="openstack/ceilometer-0"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.318557 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3513f094-083b-4e5c-a1c9-b59e8c999e8e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3513f094-083b-4e5c-a1c9-b59e8c999e8e\") " pod="openstack/ceilometer-0"
Feb 17 16:27:43 crc kubenswrapper[4874]: W0217 16:27:43.325231 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c35948a_7c46_4998_9156_2fdedcaac5e9.slice/crio-633eed47b52c12538634bc623f3e63f2f74909aa689d4f8f4b3b79c1dc798656 WatchSource:0}: Error finding container 633eed47b52c12538634bc623f3e63f2f74909aa689d4f8f4b3b79c1dc798656: Status 404 returned error can't find the container with id 633eed47b52c12538634bc623f3e63f2f74909aa689d4f8f4b3b79c1dc798656
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.331167 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzz6w\" (UniqueName: \"kubernetes.io/projected/3513f094-083b-4e5c-a1c9-b59e8c999e8e-kube-api-access-jzz6w\") pod \"ceilometer-0\" (UID: \"3513f094-083b-4e5c-a1c9-b59e8c999e8e\") " pod="openstack/ceilometer-0"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.438553 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 16:27:43 crc kubenswrapper[4874]: I0217 16:27:43.916979 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:27:44 crc kubenswrapper[4874]: I0217 16:27:44.020931 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3513f094-083b-4e5c-a1c9-b59e8c999e8e","Type":"ContainerStarted","Data":"f7760268f105dfc5029331e18e92c879b99aeb203698909b513db24a7aaa36aa"}
Feb 17 16:27:44 crc kubenswrapper[4874]: I0217 16:27:44.023755 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5c35948a-7c46-4998-9156-2fdedcaac5e9","Type":"ContainerStarted","Data":"e05df2583f9a974b41d152cdf96e788581475f3a20ea97b929bee33fbed561c4"}
Feb 17 16:27:44 crc kubenswrapper[4874]: I0217 16:27:44.023816 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5c35948a-7c46-4998-9156-2fdedcaac5e9","Type":"ContainerStarted","Data":"28075e9b52515b4a16ceda1b3d73f2f2a2ff5d8e4fdb950ef7aebf9d984acea6"}
Feb 17 16:27:44 crc kubenswrapper[4874]: I0217 16:27:44.023833 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5c35948a-7c46-4998-9156-2fdedcaac5e9","Type":"ContainerStarted","Data":"633eed47b52c12538634bc623f3e63f2f74909aa689d4f8f4b3b79c1dc798656"}
Feb 17 16:27:44 crc kubenswrapper[4874]: I0217 16:27:44.054884 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.054857449 podStartE2EDuration="2.054857449s" podCreationTimestamp="2026-02-17 16:27:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:27:44.041934143 +0000 UTC m=+1474.336322704" watchObservedRunningTime="2026-02-17 16:27:44.054857449 +0000 UTC m=+1474.349246020"
Feb 17 16:27:44 crc kubenswrapper[4874]: I0217 16:27:44.469963 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e96e4768-c5ef-4719-aabf-25094ae858c1" path="/var/lib/kubelet/pods/e96e4768-c5ef-4719-aabf-25094ae858c1/volumes"
Feb 17 16:27:44 crc kubenswrapper[4874]: I0217 16:27:44.827273 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 17 16:27:44 crc kubenswrapper[4874]: I0217 16:27:44.957966 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d7ecfba-048e-476b-9cc9-dd1eda535ab1-logs\") pod \"2d7ecfba-048e-476b-9cc9-dd1eda535ab1\" (UID: \"2d7ecfba-048e-476b-9cc9-dd1eda535ab1\") "
Feb 17 16:27:44 crc kubenswrapper[4874]: I0217 16:27:44.958252 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d7ecfba-048e-476b-9cc9-dd1eda535ab1-internal-tls-certs\") pod \"2d7ecfba-048e-476b-9cc9-dd1eda535ab1\" (UID: \"2d7ecfba-048e-476b-9cc9-dd1eda535ab1\") "
Feb 17 16:27:44 crc kubenswrapper[4874]: I0217 16:27:44.958519 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbs7g\" (UniqueName: \"kubernetes.io/projected/2d7ecfba-048e-476b-9cc9-dd1eda535ab1-kube-api-access-fbs7g\") pod \"2d7ecfba-048e-476b-9cc9-dd1eda535ab1\" (UID: \"2d7ecfba-048e-476b-9cc9-dd1eda535ab1\") "
Feb 17 16:27:44 crc kubenswrapper[4874]: I0217 16:27:44.958680 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d7ecfba-048e-476b-9cc9-dd1eda535ab1-config-data\") pod \"2d7ecfba-048e-476b-9cc9-dd1eda535ab1\" (UID: \"2d7ecfba-048e-476b-9cc9-dd1eda535ab1\") "
Feb 17 16:27:44 crc kubenswrapper[4874]: I0217 16:27:44.958825 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d7ecfba-048e-476b-9cc9-dd1eda535ab1-public-tls-certs\") pod \"2d7ecfba-048e-476b-9cc9-dd1eda535ab1\" (UID: \"2d7ecfba-048e-476b-9cc9-dd1eda535ab1\") "
Feb 17 16:27:44 crc kubenswrapper[4874]: I0217 16:27:44.958931 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d7ecfba-048e-476b-9cc9-dd1eda535ab1-combined-ca-bundle\") pod \"2d7ecfba-048e-476b-9cc9-dd1eda535ab1\" (UID: \"2d7ecfba-048e-476b-9cc9-dd1eda535ab1\") "
Feb 17 16:27:44 crc kubenswrapper[4874]: I0217 16:27:44.960966 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d7ecfba-048e-476b-9cc9-dd1eda535ab1-logs" (OuterVolumeSpecName: "logs") pod "2d7ecfba-048e-476b-9cc9-dd1eda535ab1" (UID: "2d7ecfba-048e-476b-9cc9-dd1eda535ab1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:27:44 crc kubenswrapper[4874]: I0217 16:27:44.979648 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d7ecfba-048e-476b-9cc9-dd1eda535ab1-kube-api-access-fbs7g" (OuterVolumeSpecName: "kube-api-access-fbs7g") pod "2d7ecfba-048e-476b-9cc9-dd1eda535ab1" (UID: "2d7ecfba-048e-476b-9cc9-dd1eda535ab1"). InnerVolumeSpecName "kube-api-access-fbs7g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.054683 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d7ecfba-048e-476b-9cc9-dd1eda535ab1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d7ecfba-048e-476b-9cc9-dd1eda535ab1" (UID: "2d7ecfba-048e-476b-9cc9-dd1eda535ab1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.065916 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d7ecfba-048e-476b-9cc9-dd1eda535ab1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.066354 4874 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d7ecfba-048e-476b-9cc9-dd1eda535ab1-logs\") on node \"crc\" DevicePath \"\""
Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.066444 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbs7g\" (UniqueName: \"kubernetes.io/projected/2d7ecfba-048e-476b-9cc9-dd1eda535ab1-kube-api-access-fbs7g\") on node \"crc\" DevicePath \"\""
Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.111404 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d7ecfba-048e-476b-9cc9-dd1eda535ab1-config-data" (OuterVolumeSpecName: "config-data") pod "2d7ecfba-048e-476b-9cc9-dd1eda535ab1" (UID: "2d7ecfba-048e-476b-9cc9-dd1eda535ab1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.111617 4874 generic.go:334] "Generic (PLEG): container finished" podID="2d7ecfba-048e-476b-9cc9-dd1eda535ab1" containerID="a5a1be6cdb3cd2d721024c5b0986a560fa78c5021260b6fd7813df6c6526ace4" exitCode=0
Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.111706 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2d7ecfba-048e-476b-9cc9-dd1eda535ab1","Type":"ContainerDied","Data":"a5a1be6cdb3cd2d721024c5b0986a560fa78c5021260b6fd7813df6c6526ace4"}
Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.111734 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2d7ecfba-048e-476b-9cc9-dd1eda535ab1","Type":"ContainerDied","Data":"aed5a4b43f37fd6bbeb5cff8e9c1df7e164f06038d60a68dde11d5aeeaf71fc6"}
Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.111754 4874 scope.go:117] "RemoveContainer" containerID="a5a1be6cdb3cd2d721024c5b0986a560fa78c5021260b6fd7813df6c6526ace4"
Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.111922 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.124282 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3513f094-083b-4e5c-a1c9-b59e8c999e8e","Type":"ContainerStarted","Data":"f04e262f208b59a14ef61b09c743d0916d53fa2c91d51ac987f871b132b4e83b"}
Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.134102 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d7ecfba-048e-476b-9cc9-dd1eda535ab1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2d7ecfba-048e-476b-9cc9-dd1eda535ab1" (UID: "2d7ecfba-048e-476b-9cc9-dd1eda535ab1"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.159039 4874 scope.go:117] "RemoveContainer" containerID="fbe7fbfe24feef88aad7c07bf47c23957fa44a04d137d88e82b5cc98c1a086fa"
Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.167191 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d7ecfba-048e-476b-9cc9-dd1eda535ab1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2d7ecfba-048e-476b-9cc9-dd1eda535ab1" (UID: "2d7ecfba-048e-476b-9cc9-dd1eda535ab1"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.169185 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d7ecfba-048e-476b-9cc9-dd1eda535ab1-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.169214 4874 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d7ecfba-048e-476b-9cc9-dd1eda535ab1-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.169227 4874 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d7ecfba-048e-476b-9cc9-dd1eda535ab1-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.189760 4874 scope.go:117] "RemoveContainer" containerID="a5a1be6cdb3cd2d721024c5b0986a560fa78c5021260b6fd7813df6c6526ace4"
Feb 17 16:27:45 crc kubenswrapper[4874]: E0217 16:27:45.190313 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5a1be6cdb3cd2d721024c5b0986a560fa78c5021260b6fd7813df6c6526ace4\": container with ID starting with a5a1be6cdb3cd2d721024c5b0986a560fa78c5021260b6fd7813df6c6526ace4 not found: ID does not exist" containerID="a5a1be6cdb3cd2d721024c5b0986a560fa78c5021260b6fd7813df6c6526ace4"
Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.190344 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5a1be6cdb3cd2d721024c5b0986a560fa78c5021260b6fd7813df6c6526ace4"} err="failed to get container status \"a5a1be6cdb3cd2d721024c5b0986a560fa78c5021260b6fd7813df6c6526ace4\": rpc error: code = NotFound desc = could not find container \"a5a1be6cdb3cd2d721024c5b0986a560fa78c5021260b6fd7813df6c6526ace4\": container with ID starting with a5a1be6cdb3cd2d721024c5b0986a560fa78c5021260b6fd7813df6c6526ace4 not found: ID does not exist"
Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.190364 4874 scope.go:117] "RemoveContainer" containerID="fbe7fbfe24feef88aad7c07bf47c23957fa44a04d137d88e82b5cc98c1a086fa"
Feb 17 16:27:45 crc kubenswrapper[4874]: E0217 16:27:45.190571 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbe7fbfe24feef88aad7c07bf47c23957fa44a04d137d88e82b5cc98c1a086fa\": container with ID starting with fbe7fbfe24feef88aad7c07bf47c23957fa44a04d137d88e82b5cc98c1a086fa not found: ID does not exist" containerID="fbe7fbfe24feef88aad7c07bf47c23957fa44a04d137d88e82b5cc98c1a086fa"
Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.190636 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbe7fbfe24feef88aad7c07bf47c23957fa44a04d137d88e82b5cc98c1a086fa"} err="failed to get container status \"fbe7fbfe24feef88aad7c07bf47c23957fa44a04d137d88e82b5cc98c1a086fa\": rpc error: code = NotFound desc = could not find container \"fbe7fbfe24feef88aad7c07bf47c23957fa44a04d137d88e82b5cc98c1a086fa\": container with ID starting with fbe7fbfe24feef88aad7c07bf47c23957fa44a04d137d88e82b5cc98c1a086fa not found: ID does not
exist" Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.477390 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.490222 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.502206 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 17 16:27:45 crc kubenswrapper[4874]: E0217 16:27:45.502764 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d7ecfba-048e-476b-9cc9-dd1eda535ab1" containerName="nova-api-api" Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.502783 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d7ecfba-048e-476b-9cc9-dd1eda535ab1" containerName="nova-api-api" Feb 17 16:27:45 crc kubenswrapper[4874]: E0217 16:27:45.502798 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d7ecfba-048e-476b-9cc9-dd1eda535ab1" containerName="nova-api-log" Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.502804 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d7ecfba-048e-476b-9cc9-dd1eda535ab1" containerName="nova-api-log" Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.503052 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d7ecfba-048e-476b-9cc9-dd1eda535ab1" containerName="nova-api-api" Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.503099 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d7ecfba-048e-476b-9cc9-dd1eda535ab1" containerName="nova-api-log" Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.504338 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.506099 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.506387 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.506581 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.515098 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.679424 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/482dd97c-5a3b-4da4-98e4-f89c00605948-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"482dd97c-5a3b-4da4-98e4-f89c00605948\") " pod="openstack/nova-api-0" Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.680013 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/482dd97c-5a3b-4da4-98e4-f89c00605948-config-data\") pod \"nova-api-0\" (UID: \"482dd97c-5a3b-4da4-98e4-f89c00605948\") " pod="openstack/nova-api-0" Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.680184 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8mcj\" (UniqueName: \"kubernetes.io/projected/482dd97c-5a3b-4da4-98e4-f89c00605948-kube-api-access-h8mcj\") pod \"nova-api-0\" (UID: \"482dd97c-5a3b-4da4-98e4-f89c00605948\") " pod="openstack/nova-api-0" Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.680294 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/482dd97c-5a3b-4da4-98e4-f89c00605948-internal-tls-certs\") pod \"nova-api-0\" (UID: \"482dd97c-5a3b-4da4-98e4-f89c00605948\") " pod="openstack/nova-api-0" Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.680535 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/482dd97c-5a3b-4da4-98e4-f89c00605948-public-tls-certs\") pod \"nova-api-0\" (UID: \"482dd97c-5a3b-4da4-98e4-f89c00605948\") " pod="openstack/nova-api-0" Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.680688 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/482dd97c-5a3b-4da4-98e4-f89c00605948-logs\") pod \"nova-api-0\" (UID: \"482dd97c-5a3b-4da4-98e4-f89c00605948\") " pod="openstack/nova-api-0" Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.782719 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/482dd97c-5a3b-4da4-98e4-f89c00605948-logs\") pod \"nova-api-0\" (UID: \"482dd97c-5a3b-4da4-98e4-f89c00605948\") " pod="openstack/nova-api-0" Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.782803 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/482dd97c-5a3b-4da4-98e4-f89c00605948-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"482dd97c-5a3b-4da4-98e4-f89c00605948\") " pod="openstack/nova-api-0" Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.782859 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/482dd97c-5a3b-4da4-98e4-f89c00605948-config-data\") pod \"nova-api-0\" (UID: \"482dd97c-5a3b-4da4-98e4-f89c00605948\") " pod="openstack/nova-api-0" Feb 17 16:27:45 crc 
kubenswrapper[4874]: I0217 16:27:45.782891 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8mcj\" (UniqueName: \"kubernetes.io/projected/482dd97c-5a3b-4da4-98e4-f89c00605948-kube-api-access-h8mcj\") pod \"nova-api-0\" (UID: \"482dd97c-5a3b-4da4-98e4-f89c00605948\") " pod="openstack/nova-api-0" Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.782932 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/482dd97c-5a3b-4da4-98e4-f89c00605948-internal-tls-certs\") pod \"nova-api-0\" (UID: \"482dd97c-5a3b-4da4-98e4-f89c00605948\") " pod="openstack/nova-api-0" Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.783105 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/482dd97c-5a3b-4da4-98e4-f89c00605948-public-tls-certs\") pod \"nova-api-0\" (UID: \"482dd97c-5a3b-4da4-98e4-f89c00605948\") " pod="openstack/nova-api-0" Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.783340 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/482dd97c-5a3b-4da4-98e4-f89c00605948-logs\") pod \"nova-api-0\" (UID: \"482dd97c-5a3b-4da4-98e4-f89c00605948\") " pod="openstack/nova-api-0" Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.788157 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/482dd97c-5a3b-4da4-98e4-f89c00605948-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"482dd97c-5a3b-4da4-98e4-f89c00605948\") " pod="openstack/nova-api-0" Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.788393 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/482dd97c-5a3b-4da4-98e4-f89c00605948-config-data\") pod \"nova-api-0\" (UID: 
\"482dd97c-5a3b-4da4-98e4-f89c00605948\") " pod="openstack/nova-api-0" Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.792578 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/482dd97c-5a3b-4da4-98e4-f89c00605948-public-tls-certs\") pod \"nova-api-0\" (UID: \"482dd97c-5a3b-4da4-98e4-f89c00605948\") " pod="openstack/nova-api-0" Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.792702 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/482dd97c-5a3b-4da4-98e4-f89c00605948-internal-tls-certs\") pod \"nova-api-0\" (UID: \"482dd97c-5a3b-4da4-98e4-f89c00605948\") " pod="openstack/nova-api-0" Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.804025 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8mcj\" (UniqueName: \"kubernetes.io/projected/482dd97c-5a3b-4da4-98e4-f89c00605948-kube-api-access-h8mcj\") pod \"nova-api-0\" (UID: \"482dd97c-5a3b-4da4-98e4-f89c00605948\") " pod="openstack/nova-api-0" Feb 17 16:27:45 crc kubenswrapper[4874]: I0217 16:27:45.821638 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:27:46 crc kubenswrapper[4874]: I0217 16:27:46.148926 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3513f094-083b-4e5c-a1c9-b59e8c999e8e","Type":"ContainerStarted","Data":"f2676d1a2b7cdb56a4aae7e798dfe2304c16b50cb1f8edb42eceda0ae2a52a2f"} Feb 17 16:27:46 crc kubenswrapper[4874]: I0217 16:27:46.369572 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 17 16:27:46 crc kubenswrapper[4874]: I0217 16:27:46.381142 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:27:46 crc kubenswrapper[4874]: W0217 16:27:46.395023 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod482dd97c_5a3b_4da4_98e4_f89c00605948.slice/crio-b11b11224f2597e0f7f1f419da1c9b0ad533cd1549bf8962196930b51fb6313e WatchSource:0}: Error finding container b11b11224f2597e0f7f1f419da1c9b0ad533cd1549bf8962196930b51fb6313e: Status 404 returned error can't find the container with id b11b11224f2597e0f7f1f419da1c9b0ad533cd1549bf8962196930b51fb6313e Feb 17 16:27:46 crc kubenswrapper[4874]: I0217 16:27:46.473363 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d7ecfba-048e-476b-9cc9-dd1eda535ab1" path="/var/lib/kubelet/pods/2d7ecfba-048e-476b-9cc9-dd1eda535ab1/volumes" Feb 17 16:27:47 crc kubenswrapper[4874]: I0217 16:27:47.188667 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3513f094-083b-4e5c-a1c9-b59e8c999e8e","Type":"ContainerStarted","Data":"8f6927520eb3817432ca5e8a4a839ab88ffe95a83450ffe29c99742f3ba358ea"} Feb 17 16:27:47 crc kubenswrapper[4874]: I0217 16:27:47.192738 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"482dd97c-5a3b-4da4-98e4-f89c00605948","Type":"ContainerStarted","Data":"c30a32eb113538b56911b5170fe953b3a7fc3662b5940ab93d6d2faa7aaece53"} Feb 17 16:27:47 crc kubenswrapper[4874]: I0217 16:27:47.192779 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"482dd97c-5a3b-4da4-98e4-f89c00605948","Type":"ContainerStarted","Data":"d0b04e304f78615deccabd4dc2e7e7d61c8cba58173571ba970302dea49f1d24"} Feb 17 16:27:47 crc kubenswrapper[4874]: I0217 16:27:47.192788 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"482dd97c-5a3b-4da4-98e4-f89c00605948","Type":"ContainerStarted","Data":"b11b11224f2597e0f7f1f419da1c9b0ad533cd1549bf8962196930b51fb6313e"} Feb 17 16:27:47 crc kubenswrapper[4874]: I0217 16:27:47.220170 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.220147171 podStartE2EDuration="2.220147171s" podCreationTimestamp="2026-02-17 16:27:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:27:47.209791288 +0000 UTC m=+1477.504179869" watchObservedRunningTime="2026-02-17 16:27:47.220147171 +0000 UTC m=+1477.514535732" Feb 17 16:27:47 crc kubenswrapper[4874]: I0217 16:27:47.814744 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 16:27:47 crc kubenswrapper[4874]: I0217 16:27:47.815125 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 16:27:48 crc kubenswrapper[4874]: I0217 16:27:48.214148 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3513f094-083b-4e5c-a1c9-b59e8c999e8e","Type":"ContainerStarted","Data":"dba2d39a61440f826eb6f7c538ec2748b2468936e7511e33b3ba92f72174f29b"} Feb 17 16:27:48 crc kubenswrapper[4874]: I0217 16:27:48.214236 4874 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 16:27:48 crc kubenswrapper[4874]: I0217 16:27:48.253754 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.640725199 podStartE2EDuration="5.253727107s" podCreationTimestamp="2026-02-17 16:27:43 +0000 UTC" firstStartedPulling="2026-02-17 16:27:43.931295171 +0000 UTC m=+1474.225683732" lastFinishedPulling="2026-02-17 16:27:47.544297079 +0000 UTC m=+1477.838685640" observedRunningTime="2026-02-17 16:27:48.245904006 +0000 UTC m=+1478.540292587" watchObservedRunningTime="2026-02-17 16:27:48.253727107 +0000 UTC m=+1478.548115718" Feb 17 16:27:51 crc kubenswrapper[4874]: I0217 16:27:51.362427 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 17 16:27:51 crc kubenswrapper[4874]: I0217 16:27:51.405573 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 17 16:27:52 crc kubenswrapper[4874]: I0217 16:27:52.299196 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 17 16:27:52 crc kubenswrapper[4874]: I0217 16:27:52.814376 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 16:27:52 crc kubenswrapper[4874]: I0217 16:27:52.814423 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 16:27:53 crc kubenswrapper[4874]: I0217 16:27:53.828288 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5c35948a-7c46-4998-9156-2fdedcaac5e9" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.1:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 16:27:53 crc kubenswrapper[4874]: I0217 16:27:53.828294 4874 
prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5c35948a-7c46-4998-9156-2fdedcaac5e9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.1:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 16:27:55 crc kubenswrapper[4874]: I0217 16:27:55.822379 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 16:27:55 crc kubenswrapper[4874]: I0217 16:27:55.823907 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 16:27:56 crc kubenswrapper[4874]: I0217 16:27:56.841282 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="482dd97c-5a3b-4da4-98e4-f89c00605948" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.3:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 16:27:56 crc kubenswrapper[4874]: I0217 16:27:56.841311 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="482dd97c-5a3b-4da4-98e4-f89c00605948" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.3:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.135394 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.184961 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53a7ab1d-25fe-4d79-9778-fe644b1e97b8-config-data\") pod \"53a7ab1d-25fe-4d79-9778-fe644b1e97b8\" (UID: \"53a7ab1d-25fe-4d79-9778-fe644b1e97b8\") " Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.185432 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pw5h6\" (UniqueName: \"kubernetes.io/projected/53a7ab1d-25fe-4d79-9778-fe644b1e97b8-kube-api-access-pw5h6\") pod \"53a7ab1d-25fe-4d79-9778-fe644b1e97b8\" (UID: \"53a7ab1d-25fe-4d79-9778-fe644b1e97b8\") " Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.185879 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53a7ab1d-25fe-4d79-9778-fe644b1e97b8-scripts\") pod \"53a7ab1d-25fe-4d79-9778-fe644b1e97b8\" (UID: \"53a7ab1d-25fe-4d79-9778-fe644b1e97b8\") " Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.186157 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a7ab1d-25fe-4d79-9778-fe644b1e97b8-combined-ca-bundle\") pod \"53a7ab1d-25fe-4d79-9778-fe644b1e97b8\" (UID: \"53a7ab1d-25fe-4d79-9778-fe644b1e97b8\") " Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.196603 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53a7ab1d-25fe-4d79-9778-fe644b1e97b8-scripts" (OuterVolumeSpecName: "scripts") pod "53a7ab1d-25fe-4d79-9778-fe644b1e97b8" (UID: "53a7ab1d-25fe-4d79-9778-fe644b1e97b8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.206796 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53a7ab1d-25fe-4d79-9778-fe644b1e97b8-kube-api-access-pw5h6" (OuterVolumeSpecName: "kube-api-access-pw5h6") pod "53a7ab1d-25fe-4d79-9778-fe644b1e97b8" (UID: "53a7ab1d-25fe-4d79-9778-fe644b1e97b8"). InnerVolumeSpecName "kube-api-access-pw5h6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.289018 4874 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53a7ab1d-25fe-4d79-9778-fe644b1e97b8-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.289047 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pw5h6\" (UniqueName: \"kubernetes.io/projected/53a7ab1d-25fe-4d79-9778-fe644b1e97b8-kube-api-access-pw5h6\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.337130 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53a7ab1d-25fe-4d79-9778-fe644b1e97b8-config-data" (OuterVolumeSpecName: "config-data") pod "53a7ab1d-25fe-4d79-9778-fe644b1e97b8" (UID: "53a7ab1d-25fe-4d79-9778-fe644b1e97b8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.364977 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53a7ab1d-25fe-4d79-9778-fe644b1e97b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53a7ab1d-25fe-4d79-9778-fe644b1e97b8" (UID: "53a7ab1d-25fe-4d79-9778-fe644b1e97b8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.368888 4874 generic.go:334] "Generic (PLEG): container finished" podID="53a7ab1d-25fe-4d79-9778-fe644b1e97b8" containerID="c74b99d0a6bc35924d7b05a1e84d8f04f3495a0161a7c466a47449b1ea08513f" exitCode=137 Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.368930 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"53a7ab1d-25fe-4d79-9778-fe644b1e97b8","Type":"ContainerDied","Data":"c74b99d0a6bc35924d7b05a1e84d8f04f3495a0161a7c466a47449b1ea08513f"} Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.368956 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"53a7ab1d-25fe-4d79-9778-fe644b1e97b8","Type":"ContainerDied","Data":"5b3e3aa5bdf56652f19c92feafaeb2f082b6aafcb565058e2e73119cd206b658"} Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.368954 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.368971 4874 scope.go:117] "RemoveContainer" containerID="c74b99d0a6bc35924d7b05a1e84d8f04f3495a0161a7c466a47449b1ea08513f" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.389897 4874 scope.go:117] "RemoveContainer" containerID="38f8355201bcac6e77c5e365370d17706b8f06badf929c89e9b752dcc5eb6036" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.390766 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a7ab1d-25fe-4d79-9778-fe644b1e97b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.390794 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53a7ab1d-25fe-4d79-9778-fe644b1e97b8-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.421567 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.442345 4874 scope.go:117] "RemoveContainer" containerID="5e12003af643286586ae6762076102d354e79e51edd65b6466a1263ccde7abd3" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.445854 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.462170 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 17 16:28:01 crc kubenswrapper[4874]: E0217 16:28:01.462863 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53a7ab1d-25fe-4d79-9778-fe644b1e97b8" containerName="aodh-listener" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.462893 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="53a7ab1d-25fe-4d79-9778-fe644b1e97b8" containerName="aodh-listener" Feb 17 16:28:01 crc kubenswrapper[4874]: E0217 16:28:01.462914 
4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53a7ab1d-25fe-4d79-9778-fe644b1e97b8" containerName="aodh-evaluator" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.462920 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="53a7ab1d-25fe-4d79-9778-fe644b1e97b8" containerName="aodh-evaluator" Feb 17 16:28:01 crc kubenswrapper[4874]: E0217 16:28:01.463118 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53a7ab1d-25fe-4d79-9778-fe644b1e97b8" containerName="aodh-api" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.463133 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="53a7ab1d-25fe-4d79-9778-fe644b1e97b8" containerName="aodh-api" Feb 17 16:28:01 crc kubenswrapper[4874]: E0217 16:28:01.463168 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53a7ab1d-25fe-4d79-9778-fe644b1e97b8" containerName="aodh-notifier" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.463175 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="53a7ab1d-25fe-4d79-9778-fe644b1e97b8" containerName="aodh-notifier" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.463500 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="53a7ab1d-25fe-4d79-9778-fe644b1e97b8" containerName="aodh-api" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.463525 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="53a7ab1d-25fe-4d79-9778-fe644b1e97b8" containerName="aodh-evaluator" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.463537 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="53a7ab1d-25fe-4d79-9778-fe644b1e97b8" containerName="aodh-listener" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.463550 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="53a7ab1d-25fe-4d79-9778-fe644b1e97b8" containerName="aodh-notifier" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.467357 4874 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.469740 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.470175 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.470500 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-lsrl9" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.470676 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.470832 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.474370 4874 scope.go:117] "RemoveContainer" containerID="93972332b50e160e05cf88dadcd3682f9b79107fbb93c98d474cd76ab5234dc7" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.485277 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.528984 4874 scope.go:117] "RemoveContainer" containerID="c74b99d0a6bc35924d7b05a1e84d8f04f3495a0161a7c466a47449b1ea08513f" Feb 17 16:28:01 crc kubenswrapper[4874]: E0217 16:28:01.529618 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c74b99d0a6bc35924d7b05a1e84d8f04f3495a0161a7c466a47449b1ea08513f\": container with ID starting with c74b99d0a6bc35924d7b05a1e84d8f04f3495a0161a7c466a47449b1ea08513f not found: ID does not exist" containerID="c74b99d0a6bc35924d7b05a1e84d8f04f3495a0161a7c466a47449b1ea08513f" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.529649 4874 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c74b99d0a6bc35924d7b05a1e84d8f04f3495a0161a7c466a47449b1ea08513f"} err="failed to get container status \"c74b99d0a6bc35924d7b05a1e84d8f04f3495a0161a7c466a47449b1ea08513f\": rpc error: code = NotFound desc = could not find container \"c74b99d0a6bc35924d7b05a1e84d8f04f3495a0161a7c466a47449b1ea08513f\": container with ID starting with c74b99d0a6bc35924d7b05a1e84d8f04f3495a0161a7c466a47449b1ea08513f not found: ID does not exist" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.529691 4874 scope.go:117] "RemoveContainer" containerID="38f8355201bcac6e77c5e365370d17706b8f06badf929c89e9b752dcc5eb6036" Feb 17 16:28:01 crc kubenswrapper[4874]: E0217 16:28:01.529943 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38f8355201bcac6e77c5e365370d17706b8f06badf929c89e9b752dcc5eb6036\": container with ID starting with 38f8355201bcac6e77c5e365370d17706b8f06badf929c89e9b752dcc5eb6036 not found: ID does not exist" containerID="38f8355201bcac6e77c5e365370d17706b8f06badf929c89e9b752dcc5eb6036" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.529968 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38f8355201bcac6e77c5e365370d17706b8f06badf929c89e9b752dcc5eb6036"} err="failed to get container status \"38f8355201bcac6e77c5e365370d17706b8f06badf929c89e9b752dcc5eb6036\": rpc error: code = NotFound desc = could not find container \"38f8355201bcac6e77c5e365370d17706b8f06badf929c89e9b752dcc5eb6036\": container with ID starting with 38f8355201bcac6e77c5e365370d17706b8f06badf929c89e9b752dcc5eb6036 not found: ID does not exist" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.529982 4874 scope.go:117] "RemoveContainer" containerID="5e12003af643286586ae6762076102d354e79e51edd65b6466a1263ccde7abd3" Feb 17 16:28:01 crc kubenswrapper[4874]: E0217 16:28:01.530424 4874 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5e12003af643286586ae6762076102d354e79e51edd65b6466a1263ccde7abd3\": container with ID starting with 5e12003af643286586ae6762076102d354e79e51edd65b6466a1263ccde7abd3 not found: ID does not exist" containerID="5e12003af643286586ae6762076102d354e79e51edd65b6466a1263ccde7abd3" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.530450 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e12003af643286586ae6762076102d354e79e51edd65b6466a1263ccde7abd3"} err="failed to get container status \"5e12003af643286586ae6762076102d354e79e51edd65b6466a1263ccde7abd3\": rpc error: code = NotFound desc = could not find container \"5e12003af643286586ae6762076102d354e79e51edd65b6466a1263ccde7abd3\": container with ID starting with 5e12003af643286586ae6762076102d354e79e51edd65b6466a1263ccde7abd3 not found: ID does not exist" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.530462 4874 scope.go:117] "RemoveContainer" containerID="93972332b50e160e05cf88dadcd3682f9b79107fbb93c98d474cd76ab5234dc7" Feb 17 16:28:01 crc kubenswrapper[4874]: E0217 16:28:01.530729 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93972332b50e160e05cf88dadcd3682f9b79107fbb93c98d474cd76ab5234dc7\": container with ID starting with 93972332b50e160e05cf88dadcd3682f9b79107fbb93c98d474cd76ab5234dc7 not found: ID does not exist" containerID="93972332b50e160e05cf88dadcd3682f9b79107fbb93c98d474cd76ab5234dc7" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.530750 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93972332b50e160e05cf88dadcd3682f9b79107fbb93c98d474cd76ab5234dc7"} err="failed to get container status \"93972332b50e160e05cf88dadcd3682f9b79107fbb93c98d474cd76ab5234dc7\": rpc error: code = NotFound desc = could not find container 
\"93972332b50e160e05cf88dadcd3682f9b79107fbb93c98d474cd76ab5234dc7\": container with ID starting with 93972332b50e160e05cf88dadcd3682f9b79107fbb93c98d474cd76ab5234dc7 not found: ID does not exist" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.608868 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25c79f51-4cde-46f5-b188-618b368f0ccb-internal-tls-certs\") pod \"aodh-0\" (UID: \"25c79f51-4cde-46f5-b188-618b368f0ccb\") " pod="openstack/aodh-0" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.608937 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25c79f51-4cde-46f5-b188-618b368f0ccb-public-tls-certs\") pod \"aodh-0\" (UID: \"25c79f51-4cde-46f5-b188-618b368f0ccb\") " pod="openstack/aodh-0" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.609034 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c79f51-4cde-46f5-b188-618b368f0ccb-combined-ca-bundle\") pod \"aodh-0\" (UID: \"25c79f51-4cde-46f5-b188-618b368f0ccb\") " pod="openstack/aodh-0" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.609122 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm52c\" (UniqueName: \"kubernetes.io/projected/25c79f51-4cde-46f5-b188-618b368f0ccb-kube-api-access-fm52c\") pod \"aodh-0\" (UID: \"25c79f51-4cde-46f5-b188-618b368f0ccb\") " pod="openstack/aodh-0" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.609174 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25c79f51-4cde-46f5-b188-618b368f0ccb-config-data\") pod \"aodh-0\" (UID: 
\"25c79f51-4cde-46f5-b188-618b368f0ccb\") " pod="openstack/aodh-0" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.609196 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25c79f51-4cde-46f5-b188-618b368f0ccb-scripts\") pod \"aodh-0\" (UID: \"25c79f51-4cde-46f5-b188-618b368f0ccb\") " pod="openstack/aodh-0" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.711585 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25c79f51-4cde-46f5-b188-618b368f0ccb-internal-tls-certs\") pod \"aodh-0\" (UID: \"25c79f51-4cde-46f5-b188-618b368f0ccb\") " pod="openstack/aodh-0" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.711671 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25c79f51-4cde-46f5-b188-618b368f0ccb-public-tls-certs\") pod \"aodh-0\" (UID: \"25c79f51-4cde-46f5-b188-618b368f0ccb\") " pod="openstack/aodh-0" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.711775 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c79f51-4cde-46f5-b188-618b368f0ccb-combined-ca-bundle\") pod \"aodh-0\" (UID: \"25c79f51-4cde-46f5-b188-618b368f0ccb\") " pod="openstack/aodh-0" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.711862 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm52c\" (UniqueName: \"kubernetes.io/projected/25c79f51-4cde-46f5-b188-618b368f0ccb-kube-api-access-fm52c\") pod \"aodh-0\" (UID: \"25c79f51-4cde-46f5-b188-618b368f0ccb\") " pod="openstack/aodh-0" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.711921 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/25c79f51-4cde-46f5-b188-618b368f0ccb-config-data\") pod \"aodh-0\" (UID: \"25c79f51-4cde-46f5-b188-618b368f0ccb\") " pod="openstack/aodh-0" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.711951 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25c79f51-4cde-46f5-b188-618b368f0ccb-scripts\") pod \"aodh-0\" (UID: \"25c79f51-4cde-46f5-b188-618b368f0ccb\") " pod="openstack/aodh-0" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.716320 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25c79f51-4cde-46f5-b188-618b368f0ccb-internal-tls-certs\") pod \"aodh-0\" (UID: \"25c79f51-4cde-46f5-b188-618b368f0ccb\") " pod="openstack/aodh-0" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.716886 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c79f51-4cde-46f5-b188-618b368f0ccb-combined-ca-bundle\") pod \"aodh-0\" (UID: \"25c79f51-4cde-46f5-b188-618b368f0ccb\") " pod="openstack/aodh-0" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.717288 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25c79f51-4cde-46f5-b188-618b368f0ccb-scripts\") pod \"aodh-0\" (UID: \"25c79f51-4cde-46f5-b188-618b368f0ccb\") " pod="openstack/aodh-0" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.719262 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25c79f51-4cde-46f5-b188-618b368f0ccb-config-data\") pod \"aodh-0\" (UID: \"25c79f51-4cde-46f5-b188-618b368f0ccb\") " pod="openstack/aodh-0" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.724545 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/25c79f51-4cde-46f5-b188-618b368f0ccb-public-tls-certs\") pod \"aodh-0\" (UID: \"25c79f51-4cde-46f5-b188-618b368f0ccb\") " pod="openstack/aodh-0" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.737822 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm52c\" (UniqueName: \"kubernetes.io/projected/25c79f51-4cde-46f5-b188-618b368f0ccb-kube-api-access-fm52c\") pod \"aodh-0\" (UID: \"25c79f51-4cde-46f5-b188-618b368f0ccb\") " pod="openstack/aodh-0" Feb 17 16:28:01 crc kubenswrapper[4874]: I0217 16:28:01.797964 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 17 16:28:02 crc kubenswrapper[4874]: I0217 16:28:02.336706 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 17 16:28:02 crc kubenswrapper[4874]: I0217 16:28:02.383222 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"25c79f51-4cde-46f5-b188-618b368f0ccb","Type":"ContainerStarted","Data":"7e33b843a9e49fab20bd6e4ec0a11493606304b865975f641c7244101fd89795"} Feb 17 16:28:02 crc kubenswrapper[4874]: I0217 16:28:02.474511 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53a7ab1d-25fe-4d79-9778-fe644b1e97b8" path="/var/lib/kubelet/pods/53a7ab1d-25fe-4d79-9778-fe644b1e97b8/volumes" Feb 17 16:28:02 crc kubenswrapper[4874]: I0217 16:28:02.822061 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 17 16:28:02 crc kubenswrapper[4874]: I0217 16:28:02.822873 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 17 16:28:02 crc kubenswrapper[4874]: I0217 16:28:02.830800 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 17 16:28:02 crc kubenswrapper[4874]: I0217 16:28:02.838066 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-metadata-0" Feb 17 16:28:04 crc kubenswrapper[4874]: I0217 16:28:04.421085 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"25c79f51-4cde-46f5-b188-618b368f0ccb","Type":"ContainerStarted","Data":"3a082e195e6f601254412f74059c9770cca93cfe457573dc8fb63bb207c69876"} Feb 17 16:28:04 crc kubenswrapper[4874]: I0217 16:28:04.421630 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"25c79f51-4cde-46f5-b188-618b368f0ccb","Type":"ContainerStarted","Data":"c287ddb4582dbf7c3f8af4492916419265023da72511d018e8518a7471f999d9"} Feb 17 16:28:05 crc kubenswrapper[4874]: I0217 16:28:05.431773 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"25c79f51-4cde-46f5-b188-618b368f0ccb","Type":"ContainerStarted","Data":"dcef66a8646279f174772fe397eef205831c5641e65189f4466c6bef596e5219"} Feb 17 16:28:05 crc kubenswrapper[4874]: I0217 16:28:05.839605 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 17 16:28:05 crc kubenswrapper[4874]: I0217 16:28:05.841600 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 16:28:05 crc kubenswrapper[4874]: I0217 16:28:05.841970 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 17 16:28:05 crc kubenswrapper[4874]: I0217 16:28:05.849328 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 17 16:28:06 crc kubenswrapper[4874]: I0217 16:28:06.446189 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"25c79f51-4cde-46f5-b188-618b368f0ccb","Type":"ContainerStarted","Data":"ac3b0a3ba6fb0fc2ec6ccb8e7a28ed9565c539fa04501d8a4a690df82fd214f8"} Feb 17 16:28:06 crc kubenswrapper[4874]: I0217 16:28:06.446648 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/nova-api-0" Feb 17 16:28:06 crc kubenswrapper[4874]: I0217 16:28:06.504135 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.088024769 podStartE2EDuration="5.504118689s" podCreationTimestamp="2026-02-17 16:28:01 +0000 UTC" firstStartedPulling="2026-02-17 16:28:02.346802953 +0000 UTC m=+1492.641191504" lastFinishedPulling="2026-02-17 16:28:05.762896863 +0000 UTC m=+1496.057285424" observedRunningTime="2026-02-17 16:28:06.48308732 +0000 UTC m=+1496.777475921" watchObservedRunningTime="2026-02-17 16:28:06.504118689 +0000 UTC m=+1496.798507250" Feb 17 16:28:06 crc kubenswrapper[4874]: I0217 16:28:06.651913 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 17 16:28:08 crc kubenswrapper[4874]: I0217 16:28:08.740415 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tsb7r"] Feb 17 16:28:08 crc kubenswrapper[4874]: I0217 16:28:08.743099 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tsb7r" Feb 17 16:28:08 crc kubenswrapper[4874]: I0217 16:28:08.777466 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tsb7r"] Feb 17 16:28:08 crc kubenswrapper[4874]: I0217 16:28:08.816460 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bce9f58-6b2c-4af7-8c50-374e27e96f5c-catalog-content\") pod \"redhat-operators-tsb7r\" (UID: \"5bce9f58-6b2c-4af7-8c50-374e27e96f5c\") " pod="openshift-marketplace/redhat-operators-tsb7r" Feb 17 16:28:08 crc kubenswrapper[4874]: I0217 16:28:08.816668 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5lsn\" (UniqueName: \"kubernetes.io/projected/5bce9f58-6b2c-4af7-8c50-374e27e96f5c-kube-api-access-n5lsn\") pod \"redhat-operators-tsb7r\" (UID: \"5bce9f58-6b2c-4af7-8c50-374e27e96f5c\") " pod="openshift-marketplace/redhat-operators-tsb7r" Feb 17 16:28:08 crc kubenswrapper[4874]: I0217 16:28:08.816867 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bce9f58-6b2c-4af7-8c50-374e27e96f5c-utilities\") pod \"redhat-operators-tsb7r\" (UID: \"5bce9f58-6b2c-4af7-8c50-374e27e96f5c\") " pod="openshift-marketplace/redhat-operators-tsb7r" Feb 17 16:28:08 crc kubenswrapper[4874]: I0217 16:28:08.918915 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bce9f58-6b2c-4af7-8c50-374e27e96f5c-catalog-content\") pod \"redhat-operators-tsb7r\" (UID: \"5bce9f58-6b2c-4af7-8c50-374e27e96f5c\") " pod="openshift-marketplace/redhat-operators-tsb7r" Feb 17 16:28:08 crc kubenswrapper[4874]: I0217 16:28:08.919287 4874 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-n5lsn\" (UniqueName: \"kubernetes.io/projected/5bce9f58-6b2c-4af7-8c50-374e27e96f5c-kube-api-access-n5lsn\") pod \"redhat-operators-tsb7r\" (UID: \"5bce9f58-6b2c-4af7-8c50-374e27e96f5c\") " pod="openshift-marketplace/redhat-operators-tsb7r" Feb 17 16:28:08 crc kubenswrapper[4874]: I0217 16:28:08.919343 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bce9f58-6b2c-4af7-8c50-374e27e96f5c-utilities\") pod \"redhat-operators-tsb7r\" (UID: \"5bce9f58-6b2c-4af7-8c50-374e27e96f5c\") " pod="openshift-marketplace/redhat-operators-tsb7r" Feb 17 16:28:08 crc kubenswrapper[4874]: I0217 16:28:08.919581 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bce9f58-6b2c-4af7-8c50-374e27e96f5c-catalog-content\") pod \"redhat-operators-tsb7r\" (UID: \"5bce9f58-6b2c-4af7-8c50-374e27e96f5c\") " pod="openshift-marketplace/redhat-operators-tsb7r" Feb 17 16:28:08 crc kubenswrapper[4874]: I0217 16:28:08.919839 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bce9f58-6b2c-4af7-8c50-374e27e96f5c-utilities\") pod \"redhat-operators-tsb7r\" (UID: \"5bce9f58-6b2c-4af7-8c50-374e27e96f5c\") " pod="openshift-marketplace/redhat-operators-tsb7r" Feb 17 16:28:08 crc kubenswrapper[4874]: I0217 16:28:08.943557 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5lsn\" (UniqueName: \"kubernetes.io/projected/5bce9f58-6b2c-4af7-8c50-374e27e96f5c-kube-api-access-n5lsn\") pod \"redhat-operators-tsb7r\" (UID: \"5bce9f58-6b2c-4af7-8c50-374e27e96f5c\") " pod="openshift-marketplace/redhat-operators-tsb7r" Feb 17 16:28:09 crc kubenswrapper[4874]: I0217 16:28:09.072539 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tsb7r" Feb 17 16:28:09 crc kubenswrapper[4874]: I0217 16:28:09.624049 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tsb7r"] Feb 17 16:28:10 crc kubenswrapper[4874]: I0217 16:28:10.487266 4874 generic.go:334] "Generic (PLEG): container finished" podID="5bce9f58-6b2c-4af7-8c50-374e27e96f5c" containerID="b4e34b0a39c83f03536a7104be72d6e199a2078be5493aa994342fa4743feebb" exitCode=0 Feb 17 16:28:10 crc kubenswrapper[4874]: I0217 16:28:10.487371 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsb7r" event={"ID":"5bce9f58-6b2c-4af7-8c50-374e27e96f5c","Type":"ContainerDied","Data":"b4e34b0a39c83f03536a7104be72d6e199a2078be5493aa994342fa4743feebb"} Feb 17 16:28:10 crc kubenswrapper[4874]: I0217 16:28:10.487592 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsb7r" event={"ID":"5bce9f58-6b2c-4af7-8c50-374e27e96f5c","Type":"ContainerStarted","Data":"699522bd936221ea1bb0cadd2efea75b7046be518f9969e4ffaa0f94e4a1a76b"} Feb 17 16:28:10 crc kubenswrapper[4874]: I0217 16:28:10.495700 4874 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:28:11 crc kubenswrapper[4874]: I0217 16:28:11.507364 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsb7r" event={"ID":"5bce9f58-6b2c-4af7-8c50-374e27e96f5c","Type":"ContainerStarted","Data":"4c423ab0e60efc46f010f499e4bf3af18c6ad402909a86538d8148ed979f3cd7"} Feb 17 16:28:13 crc kubenswrapper[4874]: I0217 16:28:13.448789 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 17 16:28:18 crc kubenswrapper[4874]: I0217 16:28:18.018619 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 16:28:18 crc kubenswrapper[4874]: I0217 
16:28:18.019289 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="e1154a55-d86f-4c56-82d4-4d63c35feceb" containerName="kube-state-metrics" containerID="cri-o://0d58ddd625d4c25c64df1ab80aced90db1da26189e8af0b91a8bc1eedb191b60" gracePeriod=30 Feb 17 16:28:18 crc kubenswrapper[4874]: I0217 16:28:18.123539 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 17 16:28:18 crc kubenswrapper[4874]: I0217 16:28:18.123775 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="ac66947f-056d-4e83-bcdb-577f72ea0350" containerName="mysqld-exporter" containerID="cri-o://8ce67f4a16e5fed5934bb6f3a35e742f0ba7dc6cba55c72aaed09323d5622f48" gracePeriod=30 Feb 17 16:28:18 crc kubenswrapper[4874]: I0217 16:28:18.633031 4874 generic.go:334] "Generic (PLEG): container finished" podID="e1154a55-d86f-4c56-82d4-4d63c35feceb" containerID="0d58ddd625d4c25c64df1ab80aced90db1da26189e8af0b91a8bc1eedb191b60" exitCode=2 Feb 17 16:28:18 crc kubenswrapper[4874]: I0217 16:28:18.633195 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e1154a55-d86f-4c56-82d4-4d63c35feceb","Type":"ContainerDied","Data":"0d58ddd625d4c25c64df1ab80aced90db1da26189e8af0b91a8bc1eedb191b60"} Feb 17 16:28:18 crc kubenswrapper[4874]: I0217 16:28:18.640771 4874 generic.go:334] "Generic (PLEG): container finished" podID="5bce9f58-6b2c-4af7-8c50-374e27e96f5c" containerID="4c423ab0e60efc46f010f499e4bf3af18c6ad402909a86538d8148ed979f3cd7" exitCode=0 Feb 17 16:28:18 crc kubenswrapper[4874]: I0217 16:28:18.640820 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsb7r" event={"ID":"5bce9f58-6b2c-4af7-8c50-374e27e96f5c","Type":"ContainerDied","Data":"4c423ab0e60efc46f010f499e4bf3af18c6ad402909a86538d8148ed979f3cd7"} Feb 17 16:28:18 crc kubenswrapper[4874]: I0217 
16:28:18.667823 4874 generic.go:334] "Generic (PLEG): container finished" podID="ac66947f-056d-4e83-bcdb-577f72ea0350" containerID="8ce67f4a16e5fed5934bb6f3a35e742f0ba7dc6cba55c72aaed09323d5622f48" exitCode=2 Feb 17 16:28:18 crc kubenswrapper[4874]: I0217 16:28:18.667877 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"ac66947f-056d-4e83-bcdb-577f72ea0350","Type":"ContainerDied","Data":"8ce67f4a16e5fed5934bb6f3a35e742f0ba7dc6cba55c72aaed09323d5622f48"} Feb 17 16:28:18 crc kubenswrapper[4874]: I0217 16:28:18.743774 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 16:28:18 crc kubenswrapper[4874]: I0217 16:28:18.836124 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 17 16:28:18 crc kubenswrapper[4874]: I0217 16:28:18.880789 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skpcr\" (UniqueName: \"kubernetes.io/projected/e1154a55-d86f-4c56-82d4-4d63c35feceb-kube-api-access-skpcr\") pod \"e1154a55-d86f-4c56-82d4-4d63c35feceb\" (UID: \"e1154a55-d86f-4c56-82d4-4d63c35feceb\") " Feb 17 16:28:18 crc kubenswrapper[4874]: I0217 16:28:18.892350 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1154a55-d86f-4c56-82d4-4d63c35feceb-kube-api-access-skpcr" (OuterVolumeSpecName: "kube-api-access-skpcr") pod "e1154a55-d86f-4c56-82d4-4d63c35feceb" (UID: "e1154a55-d86f-4c56-82d4-4d63c35feceb"). InnerVolumeSpecName "kube-api-access-skpcr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:28:18 crc kubenswrapper[4874]: I0217 16:28:18.983009 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac66947f-056d-4e83-bcdb-577f72ea0350-config-data\") pod \"ac66947f-056d-4e83-bcdb-577f72ea0350\" (UID: \"ac66947f-056d-4e83-bcdb-577f72ea0350\") " Feb 17 16:28:18 crc kubenswrapper[4874]: I0217 16:28:18.983112 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bclcg\" (UniqueName: \"kubernetes.io/projected/ac66947f-056d-4e83-bcdb-577f72ea0350-kube-api-access-bclcg\") pod \"ac66947f-056d-4e83-bcdb-577f72ea0350\" (UID: \"ac66947f-056d-4e83-bcdb-577f72ea0350\") " Feb 17 16:28:18 crc kubenswrapper[4874]: I0217 16:28:18.983192 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac66947f-056d-4e83-bcdb-577f72ea0350-combined-ca-bundle\") pod \"ac66947f-056d-4e83-bcdb-577f72ea0350\" (UID: \"ac66947f-056d-4e83-bcdb-577f72ea0350\") " Feb 17 16:28:18 crc kubenswrapper[4874]: I0217 16:28:18.983880 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skpcr\" (UniqueName: \"kubernetes.io/projected/e1154a55-d86f-4c56-82d4-4d63c35feceb-kube-api-access-skpcr\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:18 crc kubenswrapper[4874]: I0217 16:28:18.989370 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac66947f-056d-4e83-bcdb-577f72ea0350-kube-api-access-bclcg" (OuterVolumeSpecName: "kube-api-access-bclcg") pod "ac66947f-056d-4e83-bcdb-577f72ea0350" (UID: "ac66947f-056d-4e83-bcdb-577f72ea0350"). InnerVolumeSpecName "kube-api-access-bclcg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:28:19 crc kubenswrapper[4874]: I0217 16:28:19.014258 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac66947f-056d-4e83-bcdb-577f72ea0350-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac66947f-056d-4e83-bcdb-577f72ea0350" (UID: "ac66947f-056d-4e83-bcdb-577f72ea0350"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:28:19 crc kubenswrapper[4874]: I0217 16:28:19.058563 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac66947f-056d-4e83-bcdb-577f72ea0350-config-data" (OuterVolumeSpecName: "config-data") pod "ac66947f-056d-4e83-bcdb-577f72ea0350" (UID: "ac66947f-056d-4e83-bcdb-577f72ea0350"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:28:19 crc kubenswrapper[4874]: I0217 16:28:19.085826 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac66947f-056d-4e83-bcdb-577f72ea0350-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:19 crc kubenswrapper[4874]: I0217 16:28:19.085852 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bclcg\" (UniqueName: \"kubernetes.io/projected/ac66947f-056d-4e83-bcdb-577f72ea0350-kube-api-access-bclcg\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:19 crc kubenswrapper[4874]: I0217 16:28:19.085864 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac66947f-056d-4e83-bcdb-577f72ea0350-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:19 crc kubenswrapper[4874]: I0217 16:28:19.683676 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"e1154a55-d86f-4c56-82d4-4d63c35feceb","Type":"ContainerDied","Data":"58cf84eae3160566daf9887139d16777e230e2d4df3e92982647837fc586762a"} Feb 17 16:28:19 crc kubenswrapper[4874]: I0217 16:28:19.683824 4874 scope.go:117] "RemoveContainer" containerID="0d58ddd625d4c25c64df1ab80aced90db1da26189e8af0b91a8bc1eedb191b60" Feb 17 16:28:19 crc kubenswrapper[4874]: I0217 16:28:19.684020 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 16:28:19 crc kubenswrapper[4874]: I0217 16:28:19.689053 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsb7r" event={"ID":"5bce9f58-6b2c-4af7-8c50-374e27e96f5c","Type":"ContainerStarted","Data":"2a5c720f969dd60f144cbc074db653f223bef6616d01b0ec62ddf7abdd3a573d"} Feb 17 16:28:19 crc kubenswrapper[4874]: I0217 16:28:19.694108 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"ac66947f-056d-4e83-bcdb-577f72ea0350","Type":"ContainerDied","Data":"e0ac6c6b8341e77569fe4b2ccc09c8021e2e3a05618e4cd78919574e89f470d2"} Feb 17 16:28:19 crc kubenswrapper[4874]: I0217 16:28:19.694192 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 17 16:28:19 crc kubenswrapper[4874]: I0217 16:28:19.728007 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tsb7r" podStartSLOduration=3.134378077 podStartE2EDuration="11.727988545s" podCreationTimestamp="2026-02-17 16:28:08 +0000 UTC" firstStartedPulling="2026-02-17 16:28:10.495380755 +0000 UTC m=+1500.789769316" lastFinishedPulling="2026-02-17 16:28:19.088991213 +0000 UTC m=+1509.383379784" observedRunningTime="2026-02-17 16:28:19.724155471 +0000 UTC m=+1510.018544052" watchObservedRunningTime="2026-02-17 16:28:19.727988545 +0000 UTC m=+1510.022377106" Feb 17 16:28:19 crc kubenswrapper[4874]: I0217 16:28:19.742172 4874 scope.go:117] "RemoveContainer" containerID="8ce67f4a16e5fed5934bb6f3a35e742f0ba7dc6cba55c72aaed09323d5622f48" Feb 17 16:28:19 crc kubenswrapper[4874]: I0217 16:28:19.757366 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 17 16:28:19 crc kubenswrapper[4874]: I0217 16:28:19.782184 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 17 16:28:19 crc kubenswrapper[4874]: I0217 16:28:19.800129 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 16:28:19 crc kubenswrapper[4874]: I0217 16:28:19.825443 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 16:28:19 crc kubenswrapper[4874]: I0217 16:28:19.842412 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Feb 17 16:28:19 crc kubenswrapper[4874]: E0217 16:28:19.843124 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1154a55-d86f-4c56-82d4-4d63c35feceb" containerName="kube-state-metrics" Feb 17 16:28:19 crc kubenswrapper[4874]: I0217 16:28:19.843145 4874 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e1154a55-d86f-4c56-82d4-4d63c35feceb" containerName="kube-state-metrics" Feb 17 16:28:19 crc kubenswrapper[4874]: E0217 16:28:19.843163 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac66947f-056d-4e83-bcdb-577f72ea0350" containerName="mysqld-exporter" Feb 17 16:28:19 crc kubenswrapper[4874]: I0217 16:28:19.843170 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac66947f-056d-4e83-bcdb-577f72ea0350" containerName="mysqld-exporter" Feb 17 16:28:19 crc kubenswrapper[4874]: I0217 16:28:19.843387 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1154a55-d86f-4c56-82d4-4d63c35feceb" containerName="kube-state-metrics" Feb 17 16:28:19 crc kubenswrapper[4874]: I0217 16:28:19.843427 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac66947f-056d-4e83-bcdb-577f72ea0350" containerName="mysqld-exporter" Feb 17 16:28:19 crc kubenswrapper[4874]: I0217 16:28:19.844353 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 17 16:28:19 crc kubenswrapper[4874]: I0217 16:28:19.847496 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Feb 17 16:28:19 crc kubenswrapper[4874]: I0217 16:28:19.847690 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 17 16:28:19 crc kubenswrapper[4874]: I0217 16:28:19.856681 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 17 16:28:19 crc kubenswrapper[4874]: I0217 16:28:19.874100 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 16:28:19 crc kubenswrapper[4874]: I0217 16:28:19.875590 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 16:28:19 crc kubenswrapper[4874]: I0217 16:28:19.878090 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 17 16:28:19 crc kubenswrapper[4874]: I0217 16:28:19.878195 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 17 16:28:19 crc kubenswrapper[4874]: I0217 16:28:19.883773 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 16:28:20 crc kubenswrapper[4874]: I0217 16:28:20.011016 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2533da2e-d4db-450e-b6f6-d7bcaca25353-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"2533da2e-d4db-450e-b6f6-d7bcaca25353\") " pod="openstack/mysqld-exporter-0" Feb 17 16:28:20 crc kubenswrapper[4874]: I0217 16:28:20.011143 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5372d7e-96f7-49b9-84e2-8ef268e00405-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a5372d7e-96f7-49b9-84e2-8ef268e00405\") " pod="openstack/kube-state-metrics-0" Feb 17 16:28:20 crc kubenswrapper[4874]: I0217 16:28:20.011182 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/2533da2e-d4db-450e-b6f6-d7bcaca25353-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"2533da2e-d4db-450e-b6f6-d7bcaca25353\") " pod="openstack/mysqld-exporter-0" Feb 17 16:28:20 crc kubenswrapper[4874]: I0217 16:28:20.011240 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmlnp\" (UniqueName: 
\"kubernetes.io/projected/a5372d7e-96f7-49b9-84e2-8ef268e00405-kube-api-access-qmlnp\") pod \"kube-state-metrics-0\" (UID: \"a5372d7e-96f7-49b9-84e2-8ef268e00405\") " pod="openstack/kube-state-metrics-0" Feb 17 16:28:20 crc kubenswrapper[4874]: I0217 16:28:20.011283 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2533da2e-d4db-450e-b6f6-d7bcaca25353-config-data\") pod \"mysqld-exporter-0\" (UID: \"2533da2e-d4db-450e-b6f6-d7bcaca25353\") " pod="openstack/mysqld-exporter-0" Feb 17 16:28:20 crc kubenswrapper[4874]: I0217 16:28:20.011390 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5372d7e-96f7-49b9-84e2-8ef268e00405-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a5372d7e-96f7-49b9-84e2-8ef268e00405\") " pod="openstack/kube-state-metrics-0" Feb 17 16:28:20 crc kubenswrapper[4874]: I0217 16:28:20.011465 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a5372d7e-96f7-49b9-84e2-8ef268e00405-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a5372d7e-96f7-49b9-84e2-8ef268e00405\") " pod="openstack/kube-state-metrics-0" Feb 17 16:28:20 crc kubenswrapper[4874]: I0217 16:28:20.011530 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjvtp\" (UniqueName: \"kubernetes.io/projected/2533da2e-d4db-450e-b6f6-d7bcaca25353-kube-api-access-kjvtp\") pod \"mysqld-exporter-0\" (UID: \"2533da2e-d4db-450e-b6f6-d7bcaca25353\") " pod="openstack/mysqld-exporter-0" Feb 17 16:28:20 crc kubenswrapper[4874]: I0217 16:28:20.113389 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2533da2e-d4db-450e-b6f6-d7bcaca25353-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"2533da2e-d4db-450e-b6f6-d7bcaca25353\") " pod="openstack/mysqld-exporter-0" Feb 17 16:28:20 crc kubenswrapper[4874]: I0217 16:28:20.113774 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5372d7e-96f7-49b9-84e2-8ef268e00405-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a5372d7e-96f7-49b9-84e2-8ef268e00405\") " pod="openstack/kube-state-metrics-0" Feb 17 16:28:20 crc kubenswrapper[4874]: I0217 16:28:20.113811 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/2533da2e-d4db-450e-b6f6-d7bcaca25353-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"2533da2e-d4db-450e-b6f6-d7bcaca25353\") " pod="openstack/mysqld-exporter-0" Feb 17 16:28:20 crc kubenswrapper[4874]: I0217 16:28:20.113838 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmlnp\" (UniqueName: \"kubernetes.io/projected/a5372d7e-96f7-49b9-84e2-8ef268e00405-kube-api-access-qmlnp\") pod \"kube-state-metrics-0\" (UID: \"a5372d7e-96f7-49b9-84e2-8ef268e00405\") " pod="openstack/kube-state-metrics-0" Feb 17 16:28:20 crc kubenswrapper[4874]: I0217 16:28:20.113881 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2533da2e-d4db-450e-b6f6-d7bcaca25353-config-data\") pod \"mysqld-exporter-0\" (UID: \"2533da2e-d4db-450e-b6f6-d7bcaca25353\") " pod="openstack/mysqld-exporter-0" Feb 17 16:28:20 crc kubenswrapper[4874]: I0217 16:28:20.114003 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5372d7e-96f7-49b9-84e2-8ef268e00405-kube-state-metrics-tls-certs\") pod 
\"kube-state-metrics-0\" (UID: \"a5372d7e-96f7-49b9-84e2-8ef268e00405\") " pod="openstack/kube-state-metrics-0" Feb 17 16:28:20 crc kubenswrapper[4874]: I0217 16:28:20.114090 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a5372d7e-96f7-49b9-84e2-8ef268e00405-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a5372d7e-96f7-49b9-84e2-8ef268e00405\") " pod="openstack/kube-state-metrics-0" Feb 17 16:28:20 crc kubenswrapper[4874]: I0217 16:28:20.114149 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjvtp\" (UniqueName: \"kubernetes.io/projected/2533da2e-d4db-450e-b6f6-d7bcaca25353-kube-api-access-kjvtp\") pod \"mysqld-exporter-0\" (UID: \"2533da2e-d4db-450e-b6f6-d7bcaca25353\") " pod="openstack/mysqld-exporter-0" Feb 17 16:28:20 crc kubenswrapper[4874]: I0217 16:28:20.120024 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/2533da2e-d4db-450e-b6f6-d7bcaca25353-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"2533da2e-d4db-450e-b6f6-d7bcaca25353\") " pod="openstack/mysqld-exporter-0" Feb 17 16:28:20 crc kubenswrapper[4874]: I0217 16:28:20.121895 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2533da2e-d4db-450e-b6f6-d7bcaca25353-config-data\") pod \"mysqld-exporter-0\" (UID: \"2533da2e-d4db-450e-b6f6-d7bcaca25353\") " pod="openstack/mysqld-exporter-0" Feb 17 16:28:20 crc kubenswrapper[4874]: I0217 16:28:20.122288 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5372d7e-96f7-49b9-84e2-8ef268e00405-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a5372d7e-96f7-49b9-84e2-8ef268e00405\") " 
pod="openstack/kube-state-metrics-0" Feb 17 16:28:20 crc kubenswrapper[4874]: I0217 16:28:20.134961 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5372d7e-96f7-49b9-84e2-8ef268e00405-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a5372d7e-96f7-49b9-84e2-8ef268e00405\") " pod="openstack/kube-state-metrics-0" Feb 17 16:28:20 crc kubenswrapper[4874]: I0217 16:28:20.138007 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjvtp\" (UniqueName: \"kubernetes.io/projected/2533da2e-d4db-450e-b6f6-d7bcaca25353-kube-api-access-kjvtp\") pod \"mysqld-exporter-0\" (UID: \"2533da2e-d4db-450e-b6f6-d7bcaca25353\") " pod="openstack/mysqld-exporter-0" Feb 17 16:28:20 crc kubenswrapper[4874]: I0217 16:28:20.138197 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2533da2e-d4db-450e-b6f6-d7bcaca25353-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"2533da2e-d4db-450e-b6f6-d7bcaca25353\") " pod="openstack/mysqld-exporter-0" Feb 17 16:28:20 crc kubenswrapper[4874]: I0217 16:28:20.139797 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a5372d7e-96f7-49b9-84e2-8ef268e00405-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a5372d7e-96f7-49b9-84e2-8ef268e00405\") " pod="openstack/kube-state-metrics-0" Feb 17 16:28:20 crc kubenswrapper[4874]: I0217 16:28:20.149269 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmlnp\" (UniqueName: \"kubernetes.io/projected/a5372d7e-96f7-49b9-84e2-8ef268e00405-kube-api-access-qmlnp\") pod \"kube-state-metrics-0\" (UID: \"a5372d7e-96f7-49b9-84e2-8ef268e00405\") " pod="openstack/kube-state-metrics-0" Feb 17 16:28:20 crc kubenswrapper[4874]: I0217 16:28:20.164840 4874 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 17 16:28:20 crc kubenswrapper[4874]: I0217 16:28:20.192140 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 16:28:20 crc kubenswrapper[4874]: I0217 16:28:20.513348 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac66947f-056d-4e83-bcdb-577f72ea0350" path="/var/lib/kubelet/pods/ac66947f-056d-4e83-bcdb-577f72ea0350/volumes" Feb 17 16:28:20 crc kubenswrapper[4874]: I0217 16:28:20.593925 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1154a55-d86f-4c56-82d4-4d63c35feceb" path="/var/lib/kubelet/pods/e1154a55-d86f-4c56-82d4-4d63c35feceb/volumes" Feb 17 16:28:20 crc kubenswrapper[4874]: I0217 16:28:20.895607 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 17 16:28:21 crc kubenswrapper[4874]: W0217 16:28:21.064769 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5372d7e_96f7_49b9_84e2_8ef268e00405.slice/crio-867eea77f38677335ac2011978470b1cd62a7e4f99b7e7390cc53ec1d5fcdca9 WatchSource:0}: Error finding container 867eea77f38677335ac2011978470b1cd62a7e4f99b7e7390cc53ec1d5fcdca9: Status 404 returned error can't find the container with id 867eea77f38677335ac2011978470b1cd62a7e4f99b7e7390cc53ec1d5fcdca9 Feb 17 16:28:21 crc kubenswrapper[4874]: I0217 16:28:21.066009 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:28:21 crc kubenswrapper[4874]: I0217 16:28:21.066453 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3513f094-083b-4e5c-a1c9-b59e8c999e8e" containerName="sg-core" containerID="cri-o://8f6927520eb3817432ca5e8a4a839ab88ffe95a83450ffe29c99742f3ba358ea" gracePeriod=30 Feb 17 16:28:21 crc kubenswrapper[4874]: I0217 
16:28:21.066497 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3513f094-083b-4e5c-a1c9-b59e8c999e8e" containerName="ceilometer-notification-agent" containerID="cri-o://f2676d1a2b7cdb56a4aae7e798dfe2304c16b50cb1f8edb42eceda0ae2a52a2f" gracePeriod=30 Feb 17 16:28:21 crc kubenswrapper[4874]: I0217 16:28:21.066400 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3513f094-083b-4e5c-a1c9-b59e8c999e8e" containerName="ceilometer-central-agent" containerID="cri-o://f04e262f208b59a14ef61b09c743d0916d53fa2c91d51ac987f871b132b4e83b" gracePeriod=30 Feb 17 16:28:21 crc kubenswrapper[4874]: I0217 16:28:21.066494 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3513f094-083b-4e5c-a1c9-b59e8c999e8e" containerName="proxy-httpd" containerID="cri-o://dba2d39a61440f826eb6f7c538ec2748b2468936e7511e33b3ba92f72174f29b" gracePeriod=30 Feb 17 16:28:21 crc kubenswrapper[4874]: I0217 16:28:21.081487 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 16:28:21 crc kubenswrapper[4874]: I0217 16:28:21.726142 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"2533da2e-d4db-450e-b6f6-d7bcaca25353","Type":"ContainerStarted","Data":"85d142481a65ef2df9faa1376c81a479acd663d8b28e97dff03d7fe3885cad05"} Feb 17 16:28:21 crc kubenswrapper[4874]: I0217 16:28:21.736623 4874 generic.go:334] "Generic (PLEG): container finished" podID="3513f094-083b-4e5c-a1c9-b59e8c999e8e" containerID="dba2d39a61440f826eb6f7c538ec2748b2468936e7511e33b3ba92f72174f29b" exitCode=0 Feb 17 16:28:21 crc kubenswrapper[4874]: I0217 16:28:21.736657 4874 generic.go:334] "Generic (PLEG): container finished" podID="3513f094-083b-4e5c-a1c9-b59e8c999e8e" containerID="8f6927520eb3817432ca5e8a4a839ab88ffe95a83450ffe29c99742f3ba358ea" exitCode=2 Feb 17 
16:28:21 crc kubenswrapper[4874]: I0217 16:28:21.736668 4874 generic.go:334] "Generic (PLEG): container finished" podID="3513f094-083b-4e5c-a1c9-b59e8c999e8e" containerID="f04e262f208b59a14ef61b09c743d0916d53fa2c91d51ac987f871b132b4e83b" exitCode=0 Feb 17 16:28:21 crc kubenswrapper[4874]: I0217 16:28:21.736716 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3513f094-083b-4e5c-a1c9-b59e8c999e8e","Type":"ContainerDied","Data":"dba2d39a61440f826eb6f7c538ec2748b2468936e7511e33b3ba92f72174f29b"} Feb 17 16:28:21 crc kubenswrapper[4874]: I0217 16:28:21.736746 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3513f094-083b-4e5c-a1c9-b59e8c999e8e","Type":"ContainerDied","Data":"8f6927520eb3817432ca5e8a4a839ab88ffe95a83450ffe29c99742f3ba358ea"} Feb 17 16:28:21 crc kubenswrapper[4874]: I0217 16:28:21.736759 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3513f094-083b-4e5c-a1c9-b59e8c999e8e","Type":"ContainerDied","Data":"f04e262f208b59a14ef61b09c743d0916d53fa2c91d51ac987f871b132b4e83b"} Feb 17 16:28:21 crc kubenswrapper[4874]: I0217 16:28:21.754187 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a5372d7e-96f7-49b9-84e2-8ef268e00405","Type":"ContainerStarted","Data":"867eea77f38677335ac2011978470b1cd62a7e4f99b7e7390cc53ec1d5fcdca9"} Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.571837 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.690604 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3513f094-083b-4e5c-a1c9-b59e8c999e8e-log-httpd\") pod \"3513f094-083b-4e5c-a1c9-b59e8c999e8e\" (UID: \"3513f094-083b-4e5c-a1c9-b59e8c999e8e\") " Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.690704 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3513f094-083b-4e5c-a1c9-b59e8c999e8e-sg-core-conf-yaml\") pod \"3513f094-083b-4e5c-a1c9-b59e8c999e8e\" (UID: \"3513f094-083b-4e5c-a1c9-b59e8c999e8e\") " Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.690761 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzz6w\" (UniqueName: \"kubernetes.io/projected/3513f094-083b-4e5c-a1c9-b59e8c999e8e-kube-api-access-jzz6w\") pod \"3513f094-083b-4e5c-a1c9-b59e8c999e8e\" (UID: \"3513f094-083b-4e5c-a1c9-b59e8c999e8e\") " Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.690845 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3513f094-083b-4e5c-a1c9-b59e8c999e8e-scripts\") pod \"3513f094-083b-4e5c-a1c9-b59e8c999e8e\" (UID: \"3513f094-083b-4e5c-a1c9-b59e8c999e8e\") " Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.690890 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3513f094-083b-4e5c-a1c9-b59e8c999e8e-combined-ca-bundle\") pod \"3513f094-083b-4e5c-a1c9-b59e8c999e8e\" (UID: \"3513f094-083b-4e5c-a1c9-b59e8c999e8e\") " Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.691068 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/3513f094-083b-4e5c-a1c9-b59e8c999e8e-run-httpd\") pod \"3513f094-083b-4e5c-a1c9-b59e8c999e8e\" (UID: \"3513f094-083b-4e5c-a1c9-b59e8c999e8e\") " Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.691163 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3513f094-083b-4e5c-a1c9-b59e8c999e8e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3513f094-083b-4e5c-a1c9-b59e8c999e8e" (UID: "3513f094-083b-4e5c-a1c9-b59e8c999e8e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.691263 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3513f094-083b-4e5c-a1c9-b59e8c999e8e-config-data\") pod \"3513f094-083b-4e5c-a1c9-b59e8c999e8e\" (UID: \"3513f094-083b-4e5c-a1c9-b59e8c999e8e\") " Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.691713 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3513f094-083b-4e5c-a1c9-b59e8c999e8e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3513f094-083b-4e5c-a1c9-b59e8c999e8e" (UID: "3513f094-083b-4e5c-a1c9-b59e8c999e8e"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.692688 4874 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3513f094-083b-4e5c-a1c9-b59e8c999e8e-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.692777 4874 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3513f094-083b-4e5c-a1c9-b59e8c999e8e-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.697803 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3513f094-083b-4e5c-a1c9-b59e8c999e8e-kube-api-access-jzz6w" (OuterVolumeSpecName: "kube-api-access-jzz6w") pod "3513f094-083b-4e5c-a1c9-b59e8c999e8e" (UID: "3513f094-083b-4e5c-a1c9-b59e8c999e8e"). InnerVolumeSpecName "kube-api-access-jzz6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.698223 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3513f094-083b-4e5c-a1c9-b59e8c999e8e-scripts" (OuterVolumeSpecName: "scripts") pod "3513f094-083b-4e5c-a1c9-b59e8c999e8e" (UID: "3513f094-083b-4e5c-a1c9-b59e8c999e8e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.736167 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3513f094-083b-4e5c-a1c9-b59e8c999e8e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3513f094-083b-4e5c-a1c9-b59e8c999e8e" (UID: "3513f094-083b-4e5c-a1c9-b59e8c999e8e"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.767549 4874 generic.go:334] "Generic (PLEG): container finished" podID="3513f094-083b-4e5c-a1c9-b59e8c999e8e" containerID="f2676d1a2b7cdb56a4aae7e798dfe2304c16b50cb1f8edb42eceda0ae2a52a2f" exitCode=0 Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.767631 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3513f094-083b-4e5c-a1c9-b59e8c999e8e","Type":"ContainerDied","Data":"f2676d1a2b7cdb56a4aae7e798dfe2304c16b50cb1f8edb42eceda0ae2a52a2f"} Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.767667 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3513f094-083b-4e5c-a1c9-b59e8c999e8e","Type":"ContainerDied","Data":"f7760268f105dfc5029331e18e92c879b99aeb203698909b513db24a7aaa36aa"} Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.767691 4874 scope.go:117] "RemoveContainer" containerID="dba2d39a61440f826eb6f7c538ec2748b2468936e7511e33b3ba92f72174f29b" Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.767718 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.771488 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a5372d7e-96f7-49b9-84e2-8ef268e00405","Type":"ContainerStarted","Data":"5c7891a28501396a9eb99f6682c474c39bd6b2d8fe9a7b23f31cc61e9ea62804"} Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.771554 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.777499 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"2533da2e-d4db-450e-b6f6-d7bcaca25353","Type":"ContainerStarted","Data":"3f496d7e60df1c10be5c67590d7be21ed2f6890fa7db94f83af11d8245f63272"} Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.800159 4874 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3513f094-083b-4e5c-a1c9-b59e8c999e8e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.800184 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzz6w\" (UniqueName: \"kubernetes.io/projected/3513f094-083b-4e5c-a1c9-b59e8c999e8e-kube-api-access-jzz6w\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.800196 4874 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3513f094-083b-4e5c-a1c9-b59e8c999e8e-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.809256 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3513f094-083b-4e5c-a1c9-b59e8c999e8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3513f094-083b-4e5c-a1c9-b59e8c999e8e" (UID: "3513f094-083b-4e5c-a1c9-b59e8c999e8e"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.832016 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.383125402 podStartE2EDuration="3.831995162s" podCreationTimestamp="2026-02-17 16:28:19 +0000 UTC" firstStartedPulling="2026-02-17 16:28:21.067626441 +0000 UTC m=+1511.362015002" lastFinishedPulling="2026-02-17 16:28:21.516496191 +0000 UTC m=+1511.810884762" observedRunningTime="2026-02-17 16:28:22.798231308 +0000 UTC m=+1513.092619869" watchObservedRunningTime="2026-02-17 16:28:22.831995162 +0000 UTC m=+1513.126383713" Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.841700 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=3.1197194 podStartE2EDuration="3.841682921s" podCreationTimestamp="2026-02-17 16:28:19 +0000 UTC" firstStartedPulling="2026-02-17 16:28:20.914976513 +0000 UTC m=+1511.209365074" lastFinishedPulling="2026-02-17 16:28:21.636940034 +0000 UTC m=+1511.931328595" observedRunningTime="2026-02-17 16:28:22.817546155 +0000 UTC m=+1513.111934726" watchObservedRunningTime="2026-02-17 16:28:22.841682921 +0000 UTC m=+1513.136071482" Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.847486 4874 scope.go:117] "RemoveContainer" containerID="8f6927520eb3817432ca5e8a4a839ab88ffe95a83450ffe29c99742f3ba358ea" Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.868570 4874 scope.go:117] "RemoveContainer" containerID="f2676d1a2b7cdb56a4aae7e798dfe2304c16b50cb1f8edb42eceda0ae2a52a2f" Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.893030 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3513f094-083b-4e5c-a1c9-b59e8c999e8e-config-data" (OuterVolumeSpecName: "config-data") pod "3513f094-083b-4e5c-a1c9-b59e8c999e8e" (UID: 
"3513f094-083b-4e5c-a1c9-b59e8c999e8e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.894508 4874 scope.go:117] "RemoveContainer" containerID="f04e262f208b59a14ef61b09c743d0916d53fa2c91d51ac987f871b132b4e83b" Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.904387 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3513f094-083b-4e5c-a1c9-b59e8c999e8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.904419 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3513f094-083b-4e5c-a1c9-b59e8c999e8e-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.921106 4874 scope.go:117] "RemoveContainer" containerID="dba2d39a61440f826eb6f7c538ec2748b2468936e7511e33b3ba92f72174f29b" Feb 17 16:28:22 crc kubenswrapper[4874]: E0217 16:28:22.921451 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dba2d39a61440f826eb6f7c538ec2748b2468936e7511e33b3ba92f72174f29b\": container with ID starting with dba2d39a61440f826eb6f7c538ec2748b2468936e7511e33b3ba92f72174f29b not found: ID does not exist" containerID="dba2d39a61440f826eb6f7c538ec2748b2468936e7511e33b3ba92f72174f29b" Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.921502 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dba2d39a61440f826eb6f7c538ec2748b2468936e7511e33b3ba92f72174f29b"} err="failed to get container status \"dba2d39a61440f826eb6f7c538ec2748b2468936e7511e33b3ba92f72174f29b\": rpc error: code = NotFound desc = could not find container \"dba2d39a61440f826eb6f7c538ec2748b2468936e7511e33b3ba92f72174f29b\": container with ID starting with 
dba2d39a61440f826eb6f7c538ec2748b2468936e7511e33b3ba92f72174f29b not found: ID does not exist" Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.921530 4874 scope.go:117] "RemoveContainer" containerID="8f6927520eb3817432ca5e8a4a839ab88ffe95a83450ffe29c99742f3ba358ea" Feb 17 16:28:22 crc kubenswrapper[4874]: E0217 16:28:22.921762 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f6927520eb3817432ca5e8a4a839ab88ffe95a83450ffe29c99742f3ba358ea\": container with ID starting with 8f6927520eb3817432ca5e8a4a839ab88ffe95a83450ffe29c99742f3ba358ea not found: ID does not exist" containerID="8f6927520eb3817432ca5e8a4a839ab88ffe95a83450ffe29c99742f3ba358ea" Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.921782 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f6927520eb3817432ca5e8a4a839ab88ffe95a83450ffe29c99742f3ba358ea"} err="failed to get container status \"8f6927520eb3817432ca5e8a4a839ab88ffe95a83450ffe29c99742f3ba358ea\": rpc error: code = NotFound desc = could not find container \"8f6927520eb3817432ca5e8a4a839ab88ffe95a83450ffe29c99742f3ba358ea\": container with ID starting with 8f6927520eb3817432ca5e8a4a839ab88ffe95a83450ffe29c99742f3ba358ea not found: ID does not exist" Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.921793 4874 scope.go:117] "RemoveContainer" containerID="f2676d1a2b7cdb56a4aae7e798dfe2304c16b50cb1f8edb42eceda0ae2a52a2f" Feb 17 16:28:22 crc kubenswrapper[4874]: E0217 16:28:22.921968 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2676d1a2b7cdb56a4aae7e798dfe2304c16b50cb1f8edb42eceda0ae2a52a2f\": container with ID starting with f2676d1a2b7cdb56a4aae7e798dfe2304c16b50cb1f8edb42eceda0ae2a52a2f not found: ID does not exist" containerID="f2676d1a2b7cdb56a4aae7e798dfe2304c16b50cb1f8edb42eceda0ae2a52a2f" Feb 17 16:28:22 crc 
kubenswrapper[4874]: I0217 16:28:22.921989 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2676d1a2b7cdb56a4aae7e798dfe2304c16b50cb1f8edb42eceda0ae2a52a2f"} err="failed to get container status \"f2676d1a2b7cdb56a4aae7e798dfe2304c16b50cb1f8edb42eceda0ae2a52a2f\": rpc error: code = NotFound desc = could not find container \"f2676d1a2b7cdb56a4aae7e798dfe2304c16b50cb1f8edb42eceda0ae2a52a2f\": container with ID starting with f2676d1a2b7cdb56a4aae7e798dfe2304c16b50cb1f8edb42eceda0ae2a52a2f not found: ID does not exist" Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.922002 4874 scope.go:117] "RemoveContainer" containerID="f04e262f208b59a14ef61b09c743d0916d53fa2c91d51ac987f871b132b4e83b" Feb 17 16:28:22 crc kubenswrapper[4874]: E0217 16:28:22.922556 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f04e262f208b59a14ef61b09c743d0916d53fa2c91d51ac987f871b132b4e83b\": container with ID starting with f04e262f208b59a14ef61b09c743d0916d53fa2c91d51ac987f871b132b4e83b not found: ID does not exist" containerID="f04e262f208b59a14ef61b09c743d0916d53fa2c91d51ac987f871b132b4e83b" Feb 17 16:28:22 crc kubenswrapper[4874]: I0217 16:28:22.922576 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f04e262f208b59a14ef61b09c743d0916d53fa2c91d51ac987f871b132b4e83b"} err="failed to get container status \"f04e262f208b59a14ef61b09c743d0916d53fa2c91d51ac987f871b132b4e83b\": rpc error: code = NotFound desc = could not find container \"f04e262f208b59a14ef61b09c743d0916d53fa2c91d51ac987f871b132b4e83b\": container with ID starting with f04e262f208b59a14ef61b09c743d0916d53fa2c91d51ac987f871b132b4e83b not found: ID does not exist" Feb 17 16:28:23 crc kubenswrapper[4874]: I0217 16:28:23.149567 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:28:23 crc kubenswrapper[4874]: 
I0217 16:28:23.169293 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:28:23 crc kubenswrapper[4874]: I0217 16:28:23.179234 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:28:23 crc kubenswrapper[4874]: E0217 16:28:23.180170 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3513f094-083b-4e5c-a1c9-b59e8c999e8e" containerName="sg-core" Feb 17 16:28:23 crc kubenswrapper[4874]: I0217 16:28:23.180282 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="3513f094-083b-4e5c-a1c9-b59e8c999e8e" containerName="sg-core" Feb 17 16:28:23 crc kubenswrapper[4874]: E0217 16:28:23.180389 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3513f094-083b-4e5c-a1c9-b59e8c999e8e" containerName="ceilometer-central-agent" Feb 17 16:28:23 crc kubenswrapper[4874]: I0217 16:28:23.180463 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="3513f094-083b-4e5c-a1c9-b59e8c999e8e" containerName="ceilometer-central-agent" Feb 17 16:28:23 crc kubenswrapper[4874]: E0217 16:28:23.180566 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3513f094-083b-4e5c-a1c9-b59e8c999e8e" containerName="ceilometer-notification-agent" Feb 17 16:28:23 crc kubenswrapper[4874]: I0217 16:28:23.180651 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="3513f094-083b-4e5c-a1c9-b59e8c999e8e" containerName="ceilometer-notification-agent" Feb 17 16:28:23 crc kubenswrapper[4874]: E0217 16:28:23.180755 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3513f094-083b-4e5c-a1c9-b59e8c999e8e" containerName="proxy-httpd" Feb 17 16:28:23 crc kubenswrapper[4874]: I0217 16:28:23.180830 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="3513f094-083b-4e5c-a1c9-b59e8c999e8e" containerName="proxy-httpd" Feb 17 16:28:23 crc kubenswrapper[4874]: I0217 16:28:23.181233 4874 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3513f094-083b-4e5c-a1c9-b59e8c999e8e" containerName="ceilometer-central-agent" Feb 17 16:28:23 crc kubenswrapper[4874]: I0217 16:28:23.181346 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="3513f094-083b-4e5c-a1c9-b59e8c999e8e" containerName="proxy-httpd" Feb 17 16:28:23 crc kubenswrapper[4874]: I0217 16:28:23.181446 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="3513f094-083b-4e5c-a1c9-b59e8c999e8e" containerName="ceilometer-notification-agent" Feb 17 16:28:23 crc kubenswrapper[4874]: I0217 16:28:23.181540 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="3513f094-083b-4e5c-a1c9-b59e8c999e8e" containerName="sg-core" Feb 17 16:28:23 crc kubenswrapper[4874]: I0217 16:28:23.184616 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:28:23 crc kubenswrapper[4874]: I0217 16:28:23.187335 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:28:23 crc kubenswrapper[4874]: I0217 16:28:23.187611 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:28:23 crc kubenswrapper[4874]: I0217 16:28:23.188988 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 17 16:28:23 crc kubenswrapper[4874]: I0217 16:28:23.189372 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:28:23 crc kubenswrapper[4874]: I0217 16:28:23.331331 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-scripts\") pod \"ceilometer-0\" (UID: \"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1\") " pod="openstack/ceilometer-0" Feb 17 16:28:23 crc kubenswrapper[4874]: I0217 16:28:23.331661 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-run-httpd\") pod \"ceilometer-0\" (UID: \"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1\") " pod="openstack/ceilometer-0" Feb 17 16:28:23 crc kubenswrapper[4874]: I0217 16:28:23.331792 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-log-httpd\") pod \"ceilometer-0\" (UID: \"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1\") " pod="openstack/ceilometer-0" Feb 17 16:28:23 crc kubenswrapper[4874]: I0217 16:28:23.331887 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1\") " pod="openstack/ceilometer-0" Feb 17 16:28:23 crc kubenswrapper[4874]: I0217 16:28:23.332219 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1\") " pod="openstack/ceilometer-0" Feb 17 16:28:23 crc kubenswrapper[4874]: I0217 16:28:23.332305 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cjwc\" (UniqueName: \"kubernetes.io/projected/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-kube-api-access-5cjwc\") pod \"ceilometer-0\" (UID: \"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1\") " pod="openstack/ceilometer-0" Feb 17 16:28:23 crc kubenswrapper[4874]: I0217 16:28:23.332501 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-config-data\") pod \"ceilometer-0\" (UID: \"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1\") " pod="openstack/ceilometer-0" Feb 17 16:28:23 crc kubenswrapper[4874]: I0217 16:28:23.332633 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1\") " pod="openstack/ceilometer-0" Feb 17 16:28:23 crc kubenswrapper[4874]: I0217 16:28:23.434608 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1\") " pod="openstack/ceilometer-0" Feb 17 16:28:23 crc kubenswrapper[4874]: I0217 16:28:23.434834 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cjwc\" (UniqueName: \"kubernetes.io/projected/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-kube-api-access-5cjwc\") pod \"ceilometer-0\" (UID: \"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1\") " pod="openstack/ceilometer-0" Feb 17 16:28:23 crc kubenswrapper[4874]: I0217 16:28:23.434940 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-config-data\") pod \"ceilometer-0\" (UID: \"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1\") " pod="openstack/ceilometer-0" Feb 17 16:28:23 crc kubenswrapper[4874]: I0217 16:28:23.435049 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1\") " 
pod="openstack/ceilometer-0" Feb 17 16:28:23 crc kubenswrapper[4874]: I0217 16:28:23.435213 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-scripts\") pod \"ceilometer-0\" (UID: \"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1\") " pod="openstack/ceilometer-0" Feb 17 16:28:23 crc kubenswrapper[4874]: I0217 16:28:23.435323 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-run-httpd\") pod \"ceilometer-0\" (UID: \"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1\") " pod="openstack/ceilometer-0" Feb 17 16:28:23 crc kubenswrapper[4874]: I0217 16:28:23.435406 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1\") " pod="openstack/ceilometer-0" Feb 17 16:28:23 crc kubenswrapper[4874]: I0217 16:28:23.435483 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-log-httpd\") pod \"ceilometer-0\" (UID: \"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1\") " pod="openstack/ceilometer-0" Feb 17 16:28:23 crc kubenswrapper[4874]: I0217 16:28:23.435853 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-run-httpd\") pod \"ceilometer-0\" (UID: \"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1\") " pod="openstack/ceilometer-0" Feb 17 16:28:23 crc kubenswrapper[4874]: I0217 16:28:23.435914 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-log-httpd\") pod \"ceilometer-0\" (UID: \"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1\") " pod="openstack/ceilometer-0" Feb 17 16:28:23 crc kubenswrapper[4874]: I0217 16:28:23.438604 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1\") " pod="openstack/ceilometer-0" Feb 17 16:28:23 crc kubenswrapper[4874]: I0217 16:28:23.439829 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-scripts\") pod \"ceilometer-0\" (UID: \"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1\") " pod="openstack/ceilometer-0" Feb 17 16:28:23 crc kubenswrapper[4874]: I0217 16:28:23.440384 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1\") " pod="openstack/ceilometer-0" Feb 17 16:28:23 crc kubenswrapper[4874]: I0217 16:28:23.441439 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1\") " pod="openstack/ceilometer-0" Feb 17 16:28:23 crc kubenswrapper[4874]: I0217 16:28:23.448296 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-config-data\") pod \"ceilometer-0\" (UID: \"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1\") " pod="openstack/ceilometer-0" Feb 17 16:28:23 crc kubenswrapper[4874]: I0217 16:28:23.453679 4874 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cjwc\" (UniqueName: \"kubernetes.io/projected/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-kube-api-access-5cjwc\") pod \"ceilometer-0\" (UID: \"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1\") " pod="openstack/ceilometer-0" Feb 17 16:28:23 crc kubenswrapper[4874]: I0217 16:28:23.554993 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:28:24 crc kubenswrapper[4874]: W0217 16:28:24.059188 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod992ceb06_ec67_4ab6_b8e3_6223b31f9fc1.slice/crio-030372a103975bae9aa1739b50256b82a8aa81950a29003c9cff6b4430627eb3 WatchSource:0}: Error finding container 030372a103975bae9aa1739b50256b82a8aa81950a29003c9cff6b4430627eb3: Status 404 returned error can't find the container with id 030372a103975bae9aa1739b50256b82a8aa81950a29003c9cff6b4430627eb3 Feb 17 16:28:24 crc kubenswrapper[4874]: I0217 16:28:24.059949 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:28:24 crc kubenswrapper[4874]: I0217 16:28:24.470039 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3513f094-083b-4e5c-a1c9-b59e8c999e8e" path="/var/lib/kubelet/pods/3513f094-083b-4e5c-a1c9-b59e8c999e8e/volumes" Feb 17 16:28:24 crc kubenswrapper[4874]: I0217 16:28:24.807990 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1","Type":"ContainerStarted","Data":"030372a103975bae9aa1739b50256b82a8aa81950a29003c9cff6b4430627eb3"} Feb 17 16:28:25 crc kubenswrapper[4874]: I0217 16:28:25.818707 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1","Type":"ContainerStarted","Data":"edbf5c7df30b59a4d525aa42a441db193489e4afbabd004f6c611dd193c71459"} 
Feb 17 16:28:26 crc kubenswrapper[4874]: I0217 16:28:26.512146 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-k5j4f"] Feb 17 16:28:26 crc kubenswrapper[4874]: I0217 16:28:26.540395 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-k5j4f"] Feb 17 16:28:26 crc kubenswrapper[4874]: I0217 16:28:26.575165 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-ddhb8"] Feb 17 16:28:26 crc kubenswrapper[4874]: I0217 16:28:26.577063 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-ddhb8" Feb 17 16:28:26 crc kubenswrapper[4874]: I0217 16:28:26.587696 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-ddhb8"] Feb 17 16:28:26 crc kubenswrapper[4874]: I0217 16:28:26.719105 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/122736d5-78f5-42dc-b6ab-343724bac19d-combined-ca-bundle\") pod \"heat-db-sync-ddhb8\" (UID: \"122736d5-78f5-42dc-b6ab-343724bac19d\") " pod="openstack/heat-db-sync-ddhb8" Feb 17 16:28:26 crc kubenswrapper[4874]: I0217 16:28:26.719513 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/122736d5-78f5-42dc-b6ab-343724bac19d-config-data\") pod \"heat-db-sync-ddhb8\" (UID: \"122736d5-78f5-42dc-b6ab-343724bac19d\") " pod="openstack/heat-db-sync-ddhb8" Feb 17 16:28:26 crc kubenswrapper[4874]: I0217 16:28:26.720138 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtgnh\" (UniqueName: \"kubernetes.io/projected/122736d5-78f5-42dc-b6ab-343724bac19d-kube-api-access-jtgnh\") pod \"heat-db-sync-ddhb8\" (UID: \"122736d5-78f5-42dc-b6ab-343724bac19d\") " pod="openstack/heat-db-sync-ddhb8" Feb 17 16:28:26 crc 
kubenswrapper[4874]: I0217 16:28:26.821956 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/122736d5-78f5-42dc-b6ab-343724bac19d-config-data\") pod \"heat-db-sync-ddhb8\" (UID: \"122736d5-78f5-42dc-b6ab-343724bac19d\") " pod="openstack/heat-db-sync-ddhb8" Feb 17 16:28:26 crc kubenswrapper[4874]: I0217 16:28:26.822175 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtgnh\" (UniqueName: \"kubernetes.io/projected/122736d5-78f5-42dc-b6ab-343724bac19d-kube-api-access-jtgnh\") pod \"heat-db-sync-ddhb8\" (UID: \"122736d5-78f5-42dc-b6ab-343724bac19d\") " pod="openstack/heat-db-sync-ddhb8" Feb 17 16:28:26 crc kubenswrapper[4874]: I0217 16:28:26.822209 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/122736d5-78f5-42dc-b6ab-343724bac19d-combined-ca-bundle\") pod \"heat-db-sync-ddhb8\" (UID: \"122736d5-78f5-42dc-b6ab-343724bac19d\") " pod="openstack/heat-db-sync-ddhb8" Feb 17 16:28:26 crc kubenswrapper[4874]: I0217 16:28:26.828185 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/122736d5-78f5-42dc-b6ab-343724bac19d-config-data\") pod \"heat-db-sync-ddhb8\" (UID: \"122736d5-78f5-42dc-b6ab-343724bac19d\") " pod="openstack/heat-db-sync-ddhb8" Feb 17 16:28:26 crc kubenswrapper[4874]: I0217 16:28:26.833128 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/122736d5-78f5-42dc-b6ab-343724bac19d-combined-ca-bundle\") pod \"heat-db-sync-ddhb8\" (UID: \"122736d5-78f5-42dc-b6ab-343724bac19d\") " pod="openstack/heat-db-sync-ddhb8" Feb 17 16:28:26 crc kubenswrapper[4874]: I0217 16:28:26.848529 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1","Type":"ContainerStarted","Data":"cbbed2801a5d9657e1d9b6bb9fe07d6ab357f2f9ac3c1aa0ab0c717fc33da70d"} Feb 17 16:28:26 crc kubenswrapper[4874]: I0217 16:28:26.848606 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1","Type":"ContainerStarted","Data":"8466cf19bade4617fd5310df02460a99d20b62064b5266a9ba2a0781247bbdc0"} Feb 17 16:28:26 crc kubenswrapper[4874]: I0217 16:28:26.859974 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtgnh\" (UniqueName: \"kubernetes.io/projected/122736d5-78f5-42dc-b6ab-343724bac19d-kube-api-access-jtgnh\") pod \"heat-db-sync-ddhb8\" (UID: \"122736d5-78f5-42dc-b6ab-343724bac19d\") " pod="openstack/heat-db-sync-ddhb8" Feb 17 16:28:26 crc kubenswrapper[4874]: I0217 16:28:26.911738 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-ddhb8" Feb 17 16:28:27 crc kubenswrapper[4874]: W0217 16:28:27.476469 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod122736d5_78f5_42dc_b6ab_343724bac19d.slice/crio-1a36d9598f1fcc6f862493e387fe864576fc1de966016eef427f0ec0ab25a5f3 WatchSource:0}: Error finding container 1a36d9598f1fcc6f862493e387fe864576fc1de966016eef427f0ec0ab25a5f3: Status 404 returned error can't find the container with id 1a36d9598f1fcc6f862493e387fe864576fc1de966016eef427f0ec0ab25a5f3 Feb 17 16:28:27 crc kubenswrapper[4874]: I0217 16:28:27.480509 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-ddhb8"] Feb 17 16:28:27 crc kubenswrapper[4874]: E0217 16:28:27.602018 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in 
quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:28:27 crc kubenswrapper[4874]: E0217 16:28:27.602090 4874 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:28:27 crc kubenswrapper[4874]: E0217 16:28:27.602214 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtgnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-ddhb8_openstack(122736d5-78f5-42dc-b6ab-343724bac19d): ErrImagePull: initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:28:27 crc kubenswrapper[4874]: E0217 16:28:27.603545 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:28:27 crc kubenswrapper[4874]: I0217 16:28:27.861466 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-ddhb8" event={"ID":"122736d5-78f5-42dc-b6ab-343724bac19d","Type":"ContainerStarted","Data":"1a36d9598f1fcc6f862493e387fe864576fc1de966016eef427f0ec0ab25a5f3"} Feb 17 16:28:27 crc kubenswrapper[4874]: E0217 16:28:27.862883 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:28:28 crc kubenswrapper[4874]: I0217 16:28:28.470817 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96118c9a-6b15-48a8-b6d9-a2146dc0182c" path="/var/lib/kubelet/pods/96118c9a-6b15-48a8-b6d9-a2146dc0182c/volumes" Feb 17 16:28:28 crc kubenswrapper[4874]: I0217 16:28:28.876822 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1","Type":"ContainerStarted","Data":"c35af0e6593545414dbfd54f9ad9a5e49d51c8d7b4222b99547ca90aa0b1201c"} Feb 17 16:28:28 crc kubenswrapper[4874]: I0217 16:28:28.876869 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 16:28:28 crc kubenswrapper[4874]: E0217 16:28:28.878437 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:28:28 crc kubenswrapper[4874]: I0217 16:28:28.906072 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.207026955 podStartE2EDuration="5.906051688s" podCreationTimestamp="2026-02-17 16:28:23 +0000 UTC" firstStartedPulling="2026-02-17 16:28:24.062787712 +0000 UTC m=+1514.357176273" lastFinishedPulling="2026-02-17 16:28:27.761812445 +0000 UTC m=+1518.056201006" observedRunningTime="2026-02-17 16:28:28.89479082 +0000 UTC m=+1519.189179401" watchObservedRunningTime="2026-02-17 16:28:28.906051688 +0000 UTC m=+1519.200440249" Feb 17 16:28:29 crc kubenswrapper[4874]: I0217 16:28:29.073638 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tsb7r" Feb 17 16:28:29 crc kubenswrapper[4874]: I0217 16:28:29.073915 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tsb7r" Feb 17 16:28:29 crc kubenswrapper[4874]: I0217 16:28:29.847920 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:28:30 crc kubenswrapper[4874]: I0217 16:28:30.079051 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 17 
16:28:30 crc kubenswrapper[4874]: I0217 16:28:30.133618 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tsb7r" podUID="5bce9f58-6b2c-4af7-8c50-374e27e96f5c" containerName="registry-server" probeResult="failure" output=< Feb 17 16:28:30 crc kubenswrapper[4874]: timeout: failed to connect service ":50051" within 1s Feb 17 16:28:30 crc kubenswrapper[4874]: > Feb 17 16:28:30 crc kubenswrapper[4874]: I0217 16:28:30.229424 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 17 16:28:30 crc kubenswrapper[4874]: I0217 16:28:30.421204 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:28:30 crc kubenswrapper[4874]: I0217 16:28:30.897018 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="992ceb06-ec67-4ab6-b8e3-6223b31f9fc1" containerName="ceilometer-central-agent" containerID="cri-o://edbf5c7df30b59a4d525aa42a441db193489e4afbabd004f6c611dd193c71459" gracePeriod=30 Feb 17 16:28:30 crc kubenswrapper[4874]: I0217 16:28:30.897039 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="992ceb06-ec67-4ab6-b8e3-6223b31f9fc1" containerName="sg-core" containerID="cri-o://cbbed2801a5d9657e1d9b6bb9fe07d6ab357f2f9ac3c1aa0ab0c717fc33da70d" gracePeriod=30 Feb 17 16:28:30 crc kubenswrapper[4874]: I0217 16:28:30.897039 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="992ceb06-ec67-4ab6-b8e3-6223b31f9fc1" containerName="proxy-httpd" containerID="cri-o://c35af0e6593545414dbfd54f9ad9a5e49d51c8d7b4222b99547ca90aa0b1201c" gracePeriod=30 Feb 17 16:28:30 crc kubenswrapper[4874]: I0217 16:28:30.897072 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="992ceb06-ec67-4ab6-b8e3-6223b31f9fc1" 
containerName="ceilometer-notification-agent" containerID="cri-o://8466cf19bade4617fd5310df02460a99d20b62064b5266a9ba2a0781247bbdc0" gracePeriod=30 Feb 17 16:28:31 crc kubenswrapper[4874]: I0217 16:28:31.909430 4874 generic.go:334] "Generic (PLEG): container finished" podID="992ceb06-ec67-4ab6-b8e3-6223b31f9fc1" containerID="c35af0e6593545414dbfd54f9ad9a5e49d51c8d7b4222b99547ca90aa0b1201c" exitCode=0 Feb 17 16:28:31 crc kubenswrapper[4874]: I0217 16:28:31.909648 4874 generic.go:334] "Generic (PLEG): container finished" podID="992ceb06-ec67-4ab6-b8e3-6223b31f9fc1" containerID="cbbed2801a5d9657e1d9b6bb9fe07d6ab357f2f9ac3c1aa0ab0c717fc33da70d" exitCode=2 Feb 17 16:28:31 crc kubenswrapper[4874]: I0217 16:28:31.909660 4874 generic.go:334] "Generic (PLEG): container finished" podID="992ceb06-ec67-4ab6-b8e3-6223b31f9fc1" containerID="8466cf19bade4617fd5310df02460a99d20b62064b5266a9ba2a0781247bbdc0" exitCode=0 Feb 17 16:28:31 crc kubenswrapper[4874]: I0217 16:28:31.909581 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1","Type":"ContainerDied","Data":"c35af0e6593545414dbfd54f9ad9a5e49d51c8d7b4222b99547ca90aa0b1201c"} Feb 17 16:28:31 crc kubenswrapper[4874]: I0217 16:28:31.909697 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1","Type":"ContainerDied","Data":"cbbed2801a5d9657e1d9b6bb9fe07d6ab357f2f9ac3c1aa0ab0c717fc33da70d"} Feb 17 16:28:31 crc kubenswrapper[4874]: I0217 16:28:31.909712 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1","Type":"ContainerDied","Data":"8466cf19bade4617fd5310df02460a99d20b62064b5266a9ba2a0781247bbdc0"} Feb 17 16:28:32 crc kubenswrapper[4874]: I0217 16:28:32.925847 4874 generic.go:334] "Generic (PLEG): container finished" podID="992ceb06-ec67-4ab6-b8e3-6223b31f9fc1" 
containerID="edbf5c7df30b59a4d525aa42a441db193489e4afbabd004f6c611dd193c71459" exitCode=0 Feb 17 16:28:32 crc kubenswrapper[4874]: I0217 16:28:32.926278 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1","Type":"ContainerDied","Data":"edbf5c7df30b59a4d525aa42a441db193489e4afbabd004f6c611dd193c71459"} Feb 17 16:28:33 crc kubenswrapper[4874]: I0217 16:28:33.319198 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:28:33 crc kubenswrapper[4874]: I0217 16:28:33.482387 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-log-httpd\") pod \"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1\" (UID: \"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1\") " Feb 17 16:28:33 crc kubenswrapper[4874]: I0217 16:28:33.482763 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-combined-ca-bundle\") pod \"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1\" (UID: \"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1\") " Feb 17 16:28:33 crc kubenswrapper[4874]: I0217 16:28:33.482925 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-ceilometer-tls-certs\") pod \"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1\" (UID: \"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1\") " Feb 17 16:28:33 crc kubenswrapper[4874]: I0217 16:28:33.483169 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-sg-core-conf-yaml\") pod \"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1\" (UID: \"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1\") " Feb 17 16:28:33 
crc kubenswrapper[4874]: I0217 16:28:33.483252 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-run-httpd\") pod \"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1\" (UID: \"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1\") " Feb 17 16:28:33 crc kubenswrapper[4874]: I0217 16:28:33.483385 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-scripts\") pod \"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1\" (UID: \"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1\") " Feb 17 16:28:33 crc kubenswrapper[4874]: I0217 16:28:33.482844 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "992ceb06-ec67-4ab6-b8e3-6223b31f9fc1" (UID: "992ceb06-ec67-4ab6-b8e3-6223b31f9fc1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:28:33 crc kubenswrapper[4874]: I0217 16:28:33.483482 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-config-data\") pod \"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1\" (UID: \"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1\") " Feb 17 16:28:33 crc kubenswrapper[4874]: I0217 16:28:33.483657 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "992ceb06-ec67-4ab6-b8e3-6223b31f9fc1" (UID: "992ceb06-ec67-4ab6-b8e3-6223b31f9fc1"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:28:33 crc kubenswrapper[4874]: I0217 16:28:33.483795 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cjwc\" (UniqueName: \"kubernetes.io/projected/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-kube-api-access-5cjwc\") pod \"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1\" (UID: \"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1\") " Feb 17 16:28:33 crc kubenswrapper[4874]: I0217 16:28:33.484840 4874 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:33 crc kubenswrapper[4874]: I0217 16:28:33.485254 4874 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:33 crc kubenswrapper[4874]: I0217 16:28:33.500428 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-scripts" (OuterVolumeSpecName: "scripts") pod "992ceb06-ec67-4ab6-b8e3-6223b31f9fc1" (UID: "992ceb06-ec67-4ab6-b8e3-6223b31f9fc1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:28:33 crc kubenswrapper[4874]: I0217 16:28:33.517162 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-kube-api-access-5cjwc" (OuterVolumeSpecName: "kube-api-access-5cjwc") pod "992ceb06-ec67-4ab6-b8e3-6223b31f9fc1" (UID: "992ceb06-ec67-4ab6-b8e3-6223b31f9fc1"). InnerVolumeSpecName "kube-api-access-5cjwc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:28:33 crc kubenswrapper[4874]: I0217 16:28:33.548188 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "992ceb06-ec67-4ab6-b8e3-6223b31f9fc1" (UID: "992ceb06-ec67-4ab6-b8e3-6223b31f9fc1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:28:33 crc kubenswrapper[4874]: I0217 16:28:33.593507 4874 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:33 crc kubenswrapper[4874]: I0217 16:28:33.593547 4874 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:33 crc kubenswrapper[4874]: I0217 16:28:33.593560 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cjwc\" (UniqueName: \"kubernetes.io/projected/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-kube-api-access-5cjwc\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:33 crc kubenswrapper[4874]: I0217 16:28:33.635932 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "992ceb06-ec67-4ab6-b8e3-6223b31f9fc1" (UID: "992ceb06-ec67-4ab6-b8e3-6223b31f9fc1"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:28:33 crc kubenswrapper[4874]: I0217 16:28:33.676587 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "992ceb06-ec67-4ab6-b8e3-6223b31f9fc1" (UID: "992ceb06-ec67-4ab6-b8e3-6223b31f9fc1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:28:33 crc kubenswrapper[4874]: I0217 16:28:33.684947 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-config-data" (OuterVolumeSpecName: "config-data") pod "992ceb06-ec67-4ab6-b8e3-6223b31f9fc1" (UID: "992ceb06-ec67-4ab6-b8e3-6223b31f9fc1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:28:33 crc kubenswrapper[4874]: I0217 16:28:33.696256 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:33 crc kubenswrapper[4874]: I0217 16:28:33.696293 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:33 crc kubenswrapper[4874]: I0217 16:28:33.696309 4874 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:33 crc kubenswrapper[4874]: I0217 16:28:33.942579 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"992ceb06-ec67-4ab6-b8e3-6223b31f9fc1","Type":"ContainerDied","Data":"030372a103975bae9aa1739b50256b82a8aa81950a29003c9cff6b4430627eb3"} Feb 17 16:28:33 crc kubenswrapper[4874]: I0217 16:28:33.942647 4874 scope.go:117] "RemoveContainer" containerID="c35af0e6593545414dbfd54f9ad9a5e49d51c8d7b4222b99547ca90aa0b1201c" Feb 17 16:28:33 crc kubenswrapper[4874]: I0217 16:28:33.942690 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:28:33 crc kubenswrapper[4874]: I0217 16:28:33.965705 4874 scope.go:117] "RemoveContainer" containerID="cbbed2801a5d9657e1d9b6bb9fe07d6ab357f2f9ac3c1aa0ab0c717fc33da70d" Feb 17 16:28:33 crc kubenswrapper[4874]: I0217 16:28:33.995267 4874 scope.go:117] "RemoveContainer" containerID="8466cf19bade4617fd5310df02460a99d20b62064b5266a9ba2a0781247bbdc0" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.012056 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.037997 4874 scope.go:117] "RemoveContainer" containerID="edbf5c7df30b59a4d525aa42a441db193489e4afbabd004f6c611dd193c71459" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.043220 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.059565 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:28:34 crc kubenswrapper[4874]: E0217 16:28:34.060063 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="992ceb06-ec67-4ab6-b8e3-6223b31f9fc1" containerName="ceilometer-notification-agent" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.060089 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="992ceb06-ec67-4ab6-b8e3-6223b31f9fc1" containerName="ceilometer-notification-agent" Feb 17 16:28:34 crc kubenswrapper[4874]: E0217 16:28:34.060110 4874 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="992ceb06-ec67-4ab6-b8e3-6223b31f9fc1" containerName="proxy-httpd" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.060116 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="992ceb06-ec67-4ab6-b8e3-6223b31f9fc1" containerName="proxy-httpd" Feb 17 16:28:34 crc kubenswrapper[4874]: E0217 16:28:34.060131 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="992ceb06-ec67-4ab6-b8e3-6223b31f9fc1" containerName="sg-core" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.060137 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="992ceb06-ec67-4ab6-b8e3-6223b31f9fc1" containerName="sg-core" Feb 17 16:28:34 crc kubenswrapper[4874]: E0217 16:28:34.060158 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="992ceb06-ec67-4ab6-b8e3-6223b31f9fc1" containerName="ceilometer-central-agent" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.060164 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="992ceb06-ec67-4ab6-b8e3-6223b31f9fc1" containerName="ceilometer-central-agent" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.060409 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="992ceb06-ec67-4ab6-b8e3-6223b31f9fc1" containerName="ceilometer-notification-agent" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.060418 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="992ceb06-ec67-4ab6-b8e3-6223b31f9fc1" containerName="proxy-httpd" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.060431 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="992ceb06-ec67-4ab6-b8e3-6223b31f9fc1" containerName="sg-core" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.060443 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="992ceb06-ec67-4ab6-b8e3-6223b31f9fc1" containerName="ceilometer-central-agent" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.062442 4874 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.065927 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.066146 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.068520 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.073233 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.217048 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc29c300-b515-47d8-9326-1839ed7772b4-config-data\") pod \"ceilometer-0\" (UID: \"cc29c300-b515-47d8-9326-1839ed7772b4\") " pod="openstack/ceilometer-0" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.217128 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc29c300-b515-47d8-9326-1839ed7772b4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cc29c300-b515-47d8-9326-1839ed7772b4\") " pod="openstack/ceilometer-0" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.217254 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc29c300-b515-47d8-9326-1839ed7772b4-run-httpd\") pod \"ceilometer-0\" (UID: \"cc29c300-b515-47d8-9326-1839ed7772b4\") " pod="openstack/ceilometer-0" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.217507 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc29c300-b515-47d8-9326-1839ed7772b4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cc29c300-b515-47d8-9326-1839ed7772b4\") " pod="openstack/ceilometer-0" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.217569 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc29c300-b515-47d8-9326-1839ed7772b4-scripts\") pod \"ceilometer-0\" (UID: \"cc29c300-b515-47d8-9326-1839ed7772b4\") " pod="openstack/ceilometer-0" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.217705 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc29c300-b515-47d8-9326-1839ed7772b4-log-httpd\") pod \"ceilometer-0\" (UID: \"cc29c300-b515-47d8-9326-1839ed7772b4\") " pod="openstack/ceilometer-0" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.217812 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6zkr\" (UniqueName: \"kubernetes.io/projected/cc29c300-b515-47d8-9326-1839ed7772b4-kube-api-access-z6zkr\") pod \"ceilometer-0\" (UID: \"cc29c300-b515-47d8-9326-1839ed7772b4\") " pod="openstack/ceilometer-0" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.217857 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc29c300-b515-47d8-9326-1839ed7772b4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cc29c300-b515-47d8-9326-1839ed7772b4\") " pod="openstack/ceilometer-0" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.320218 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cc29c300-b515-47d8-9326-1839ed7772b4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cc29c300-b515-47d8-9326-1839ed7772b4\") " pod="openstack/ceilometer-0" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.320571 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc29c300-b515-47d8-9326-1839ed7772b4-scripts\") pod \"ceilometer-0\" (UID: \"cc29c300-b515-47d8-9326-1839ed7772b4\") " pod="openstack/ceilometer-0" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.320689 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc29c300-b515-47d8-9326-1839ed7772b4-log-httpd\") pod \"ceilometer-0\" (UID: \"cc29c300-b515-47d8-9326-1839ed7772b4\") " pod="openstack/ceilometer-0" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.320773 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6zkr\" (UniqueName: \"kubernetes.io/projected/cc29c300-b515-47d8-9326-1839ed7772b4-kube-api-access-z6zkr\") pod \"ceilometer-0\" (UID: \"cc29c300-b515-47d8-9326-1839ed7772b4\") " pod="openstack/ceilometer-0" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.321190 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc29c300-b515-47d8-9326-1839ed7772b4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cc29c300-b515-47d8-9326-1839ed7772b4\") " pod="openstack/ceilometer-0" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.321360 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc29c300-b515-47d8-9326-1839ed7772b4-log-httpd\") pod \"ceilometer-0\" (UID: \"cc29c300-b515-47d8-9326-1839ed7772b4\") " pod="openstack/ceilometer-0" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 
16:28:34.321626 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc29c300-b515-47d8-9326-1839ed7772b4-config-data\") pod \"ceilometer-0\" (UID: \"cc29c300-b515-47d8-9326-1839ed7772b4\") " pod="openstack/ceilometer-0" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.321698 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc29c300-b515-47d8-9326-1839ed7772b4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cc29c300-b515-47d8-9326-1839ed7772b4\") " pod="openstack/ceilometer-0" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.321875 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc29c300-b515-47d8-9326-1839ed7772b4-run-httpd\") pod \"ceilometer-0\" (UID: \"cc29c300-b515-47d8-9326-1839ed7772b4\") " pod="openstack/ceilometer-0" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.322217 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc29c300-b515-47d8-9326-1839ed7772b4-run-httpd\") pod \"ceilometer-0\" (UID: \"cc29c300-b515-47d8-9326-1839ed7772b4\") " pod="openstack/ceilometer-0" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.324345 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc29c300-b515-47d8-9326-1839ed7772b4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cc29c300-b515-47d8-9326-1839ed7772b4\") " pod="openstack/ceilometer-0" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.325773 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc29c300-b515-47d8-9326-1839ed7772b4-config-data\") pod \"ceilometer-0\" (UID: 
\"cc29c300-b515-47d8-9326-1839ed7772b4\") " pod="openstack/ceilometer-0" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.326520 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc29c300-b515-47d8-9326-1839ed7772b4-scripts\") pod \"ceilometer-0\" (UID: \"cc29c300-b515-47d8-9326-1839ed7772b4\") " pod="openstack/ceilometer-0" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.326697 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc29c300-b515-47d8-9326-1839ed7772b4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cc29c300-b515-47d8-9326-1839ed7772b4\") " pod="openstack/ceilometer-0" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.328257 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc29c300-b515-47d8-9326-1839ed7772b4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cc29c300-b515-47d8-9326-1839ed7772b4\") " pod="openstack/ceilometer-0" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.345533 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6zkr\" (UniqueName: \"kubernetes.io/projected/cc29c300-b515-47d8-9326-1839ed7772b4-kube-api-access-z6zkr\") pod \"ceilometer-0\" (UID: \"cc29c300-b515-47d8-9326-1839ed7772b4\") " pod="openstack/ceilometer-0" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.390948 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.473580 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="992ceb06-ec67-4ab6-b8e3-6223b31f9fc1" path="/var/lib/kubelet/pods/992ceb06-ec67-4ab6-b8e3-6223b31f9fc1/volumes" Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.966941 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc29c300-b515-47d8-9326-1839ed7772b4","Type":"ContainerStarted","Data":"9bc29814971172c0b2b36ee76c70a6139e7ace1cae86bbd717830439814dca18"} Feb 17 16:28:34 crc kubenswrapper[4874]: I0217 16:28:34.971795 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:28:35 crc kubenswrapper[4874]: E0217 16:28:35.077958 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:28:35 crc kubenswrapper[4874]: E0217 16:28:35.078012 4874 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:28:35 crc kubenswrapper[4874]: E0217 16:28:35.078157 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n699h646hf7h59dhdch5d8h679h9dhdch5c7hd5h5bch655h5bfh674h596h5d6h64dh65bh694h67fh66dh5bdhd7h568h697h58bh5b4h59fh694h584h656q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6zkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(cc29c300-b515-47d8-9326-1839ed7772b4): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 17 16:28:35 crc kubenswrapper[4874]: I0217 16:28:35.245631 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="ed7dc41e-9863-4c74-8675-56fca22db08a" containerName="rabbitmq" containerID="cri-o://1972bf3236984f49e02049edbedc42d4da80f95fa6301b9653139c5fe969a12b" gracePeriod=604795 Feb 17 16:28:35 crc kubenswrapper[4874]: I0217 16:28:35.489220 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="476813ee-f26a-4068-a5e9-87b5a20fece5" containerName="rabbitmq" containerID="cri-o://db19969ed07d23d3e402cc5f7b337eabe216ce1076931c2f88d450fecfb27ff6" gracePeriod=604795 Feb 17 16:28:36 crc kubenswrapper[4874]: I0217 16:28:36.994715 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc29c300-b515-47d8-9326-1839ed7772b4","Type":"ContainerStarted","Data":"605f560b4d5175536a98df25b0b190bd58c5eccd7520567fcb5f10bdf4e024ca"} Feb 17 16:28:38 crc kubenswrapper[4874]: I0217 16:28:38.008922 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc29c300-b515-47d8-9326-1839ed7772b4","Type":"ContainerStarted","Data":"3097cc8e758e2b0cc1cf3fcb66dbc327474d681320e5af4d33fc7f6f8a02ed17"} Feb 17 16:28:38 crc kubenswrapper[4874]: E0217 16:28:38.886714 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:28:39 crc kubenswrapper[4874]: I0217 16:28:39.021914 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc29c300-b515-47d8-9326-1839ed7772b4","Type":"ContainerStarted","Data":"87fd14e68f5d9b7dbefec34457bd860b2a6961bd64c2e5cad6c42f290056af63"} Feb 17 16:28:39 crc kubenswrapper[4874]: I0217 16:28:39.022171 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 16:28:39 crc kubenswrapper[4874]: E0217 16:28:39.023934 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:28:40 crc kubenswrapper[4874]: E0217 16:28:40.041638 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:28:40 crc kubenswrapper[4874]: I0217 16:28:40.119399 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tsb7r" podUID="5bce9f58-6b2c-4af7-8c50-374e27e96f5c" containerName="registry-server" probeResult="failure" output=< Feb 17 16:28:40 crc kubenswrapper[4874]: timeout: failed to connect service ":50051" within 1s Feb 17 16:28:40 crc kubenswrapper[4874]: > Feb 17 16:28:40 crc kubenswrapper[4874]: E0217 16:28:40.585420 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:28:40 crc kubenswrapper[4874]: E0217 16:28:40.585801 4874 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:28:40 crc kubenswrapper[4874]: E0217 16:28:40.586389 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtgnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-ddhb8_openstack(122736d5-78f5-42dc-b6ab-343724bac19d): ErrImagePull: initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:28:40 crc kubenswrapper[4874]: E0217 16:28:40.588018 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.012429 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.103442 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ed7dc41e-9863-4c74-8675-56fca22db08a-erlang-cookie-secret\") pod \"ed7dc41e-9863-4c74-8675-56fca22db08a\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.103489 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ed7dc41e-9863-4c74-8675-56fca22db08a-rabbitmq-plugins\") pod \"ed7dc41e-9863-4c74-8675-56fca22db08a\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.103671 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvdzx\" (UniqueName: \"kubernetes.io/projected/ed7dc41e-9863-4c74-8675-56fca22db08a-kube-api-access-fvdzx\") pod \"ed7dc41e-9863-4c74-8675-56fca22db08a\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.103700 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ed7dc41e-9863-4c74-8675-56fca22db08a-plugins-conf\") pod \"ed7dc41e-9863-4c74-8675-56fca22db08a\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.103723 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ed7dc41e-9863-4c74-8675-56fca22db08a-rabbitmq-confd\") pod \"ed7dc41e-9863-4c74-8675-56fca22db08a\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.103755 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ed7dc41e-9863-4c74-8675-56fca22db08a-pod-info\") pod \"ed7dc41e-9863-4c74-8675-56fca22db08a\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.104596 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e4eece0-fa2e-4553-9b0d-b0622841f608\") pod \"ed7dc41e-9863-4c74-8675-56fca22db08a\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.104643 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed7dc41e-9863-4c74-8675-56fca22db08a-config-data\") pod \"ed7dc41e-9863-4c74-8675-56fca22db08a\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.104683 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ed7dc41e-9863-4c74-8675-56fca22db08a-rabbitmq-tls\") pod \"ed7dc41e-9863-4c74-8675-56fca22db08a\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.104711 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ed7dc41e-9863-4c74-8675-56fca22db08a-rabbitmq-erlang-cookie\") pod \"ed7dc41e-9863-4c74-8675-56fca22db08a\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.104851 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ed7dc41e-9863-4c74-8675-56fca22db08a-server-conf\") pod \"ed7dc41e-9863-4c74-8675-56fca22db08a\" (UID: \"ed7dc41e-9863-4c74-8675-56fca22db08a\") " Feb 17 16:28:42 
crc kubenswrapper[4874]: I0217 16:28:42.107204 4874 generic.go:334] "Generic (PLEG): container finished" podID="ed7dc41e-9863-4c74-8675-56fca22db08a" containerID="1972bf3236984f49e02049edbedc42d4da80f95fa6301b9653139c5fe969a12b" exitCode=0 Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.107303 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"ed7dc41e-9863-4c74-8675-56fca22db08a","Type":"ContainerDied","Data":"1972bf3236984f49e02049edbedc42d4da80f95fa6301b9653139c5fe969a12b"} Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.107336 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"ed7dc41e-9863-4c74-8675-56fca22db08a","Type":"ContainerDied","Data":"c5f6eb58ac15341f65c8eea915b672221075195deb7745785b2b1d1f2945447d"} Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.107356 4874 scope.go:117] "RemoveContainer" containerID="1972bf3236984f49e02049edbedc42d4da80f95fa6301b9653139c5fe969a12b" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.107529 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.109165 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed7dc41e-9863-4c74-8675-56fca22db08a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "ed7dc41e-9863-4c74-8675-56fca22db08a" (UID: "ed7dc41e-9863-4c74-8675-56fca22db08a"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.115565 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed7dc41e-9863-4c74-8675-56fca22db08a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "ed7dc41e-9863-4c74-8675-56fca22db08a" (UID: "ed7dc41e-9863-4c74-8675-56fca22db08a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.117920 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed7dc41e-9863-4c74-8675-56fca22db08a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "ed7dc41e-9863-4c74-8675-56fca22db08a" (UID: "ed7dc41e-9863-4c74-8675-56fca22db08a"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.118694 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/ed7dc41e-9863-4c74-8675-56fca22db08a-pod-info" (OuterVolumeSpecName: "pod-info") pod "ed7dc41e-9863-4c74-8675-56fca22db08a" (UID: "ed7dc41e-9863-4c74-8675-56fca22db08a"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.119194 4874 generic.go:334] "Generic (PLEG): container finished" podID="476813ee-f26a-4068-a5e9-87b5a20fece5" containerID="db19969ed07d23d3e402cc5f7b337eabe216ce1076931c2f88d450fecfb27ff6" exitCode=0 Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.119269 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"476813ee-f26a-4068-a5e9-87b5a20fece5","Type":"ContainerDied","Data":"db19969ed07d23d3e402cc5f7b337eabe216ce1076931c2f88d450fecfb27ff6"} Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.123722 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed7dc41e-9863-4c74-8675-56fca22db08a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "ed7dc41e-9863-4c74-8675-56fca22db08a" (UID: "ed7dc41e-9863-4c74-8675-56fca22db08a"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.125959 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed7dc41e-9863-4c74-8675-56fca22db08a-kube-api-access-fvdzx" (OuterVolumeSpecName: "kube-api-access-fvdzx") pod "ed7dc41e-9863-4c74-8675-56fca22db08a" (UID: "ed7dc41e-9863-4c74-8675-56fca22db08a"). InnerVolumeSpecName "kube-api-access-fvdzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.127338 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed7dc41e-9863-4c74-8675-56fca22db08a-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "ed7dc41e-9863-4c74-8675-56fca22db08a" (UID: "ed7dc41e-9863-4c74-8675-56fca22db08a"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.133683 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e4eece0-fa2e-4553-9b0d-b0622841f608" (OuterVolumeSpecName: "persistence") pod "ed7dc41e-9863-4c74-8675-56fca22db08a" (UID: "ed7dc41e-9863-4c74-8675-56fca22db08a"). InnerVolumeSpecName "pvc-2e4eece0-fa2e-4553-9b0d-b0622841f608". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.182539 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed7dc41e-9863-4c74-8675-56fca22db08a-config-data" (OuterVolumeSpecName: "config-data") pod "ed7dc41e-9863-4c74-8675-56fca22db08a" (UID: "ed7dc41e-9863-4c74-8675-56fca22db08a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.209059 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvdzx\" (UniqueName: \"kubernetes.io/projected/ed7dc41e-9863-4c74-8675-56fca22db08a-kube-api-access-fvdzx\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.209096 4874 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ed7dc41e-9863-4c74-8675-56fca22db08a-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.209106 4874 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ed7dc41e-9863-4c74-8675-56fca22db08a-pod-info\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.209133 4874 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2e4eece0-fa2e-4553-9b0d-b0622841f608\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e4eece0-fa2e-4553-9b0d-b0622841f608\") on node \"crc\" " Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.209144 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed7dc41e-9863-4c74-8675-56fca22db08a-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.209153 4874 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ed7dc41e-9863-4c74-8675-56fca22db08a-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.209162 4874 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ed7dc41e-9863-4c74-8675-56fca22db08a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.209173 4874 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ed7dc41e-9863-4c74-8675-56fca22db08a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.209181 4874 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ed7dc41e-9863-4c74-8675-56fca22db08a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.229659 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed7dc41e-9863-4c74-8675-56fca22db08a-server-conf" (OuterVolumeSpecName: "server-conf") pod "ed7dc41e-9863-4c74-8675-56fca22db08a" (UID: "ed7dc41e-9863-4c74-8675-56fca22db08a"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.238465 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.300424 4874 scope.go:117] "RemoveContainer" containerID="b3ac77bc86427c90e262709afcc20a0f4309054628175d07990407541cb4f97c" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.303597 4874 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.303797 4874 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2e4eece0-fa2e-4553-9b0d-b0622841f608" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e4eece0-fa2e-4553-9b0d-b0622841f608") on node "crc" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.308678 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed7dc41e-9863-4c74-8675-56fca22db08a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "ed7dc41e-9863-4c74-8675-56fca22db08a" (UID: "ed7dc41e-9863-4c74-8675-56fca22db08a"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.321731 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/476813ee-f26a-4068-a5e9-87b5a20fece5-server-conf\") pod \"476813ee-f26a-4068-a5e9-87b5a20fece5\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.321806 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/476813ee-f26a-4068-a5e9-87b5a20fece5-erlang-cookie-secret\") pod \"476813ee-f26a-4068-a5e9-87b5a20fece5\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.321840 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/476813ee-f26a-4068-a5e9-87b5a20fece5-pod-info\") pod \"476813ee-f26a-4068-a5e9-87b5a20fece5\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.321873 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/476813ee-f26a-4068-a5e9-87b5a20fece5-config-data\") pod \"476813ee-f26a-4068-a5e9-87b5a20fece5\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.321954 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/476813ee-f26a-4068-a5e9-87b5a20fece5-rabbitmq-confd\") pod \"476813ee-f26a-4068-a5e9-87b5a20fece5\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.322057 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdr22\" (UniqueName: 
\"kubernetes.io/projected/476813ee-f26a-4068-a5e9-87b5a20fece5-kube-api-access-qdr22\") pod \"476813ee-f26a-4068-a5e9-87b5a20fece5\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.322115 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/476813ee-f26a-4068-a5e9-87b5a20fece5-rabbitmq-tls\") pod \"476813ee-f26a-4068-a5e9-87b5a20fece5\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.322205 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/476813ee-f26a-4068-a5e9-87b5a20fece5-rabbitmq-erlang-cookie\") pod \"476813ee-f26a-4068-a5e9-87b5a20fece5\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.322250 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/476813ee-f26a-4068-a5e9-87b5a20fece5-rabbitmq-plugins\") pod \"476813ee-f26a-4068-a5e9-87b5a20fece5\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.322294 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/476813ee-f26a-4068-a5e9-87b5a20fece5-plugins-conf\") pod \"476813ee-f26a-4068-a5e9-87b5a20fece5\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.331716 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e9a126d5-9bee-465f-9618-3e77c8b2ecfe\") pod \"476813ee-f26a-4068-a5e9-87b5a20fece5\" (UID: \"476813ee-f26a-4068-a5e9-87b5a20fece5\") " Feb 17 16:28:42 crc 
kubenswrapper[4874]: I0217 16:28:42.333150 4874 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ed7dc41e-9863-4c74-8675-56fca22db08a-server-conf\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.333183 4874 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ed7dc41e-9863-4c74-8675-56fca22db08a-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.333200 4874 reconciler_common.go:293] "Volume detached for volume \"pvc-2e4eece0-fa2e-4553-9b0d-b0622841f608\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e4eece0-fa2e-4553-9b0d-b0622841f608\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.333148 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/476813ee-f26a-4068-a5e9-87b5a20fece5-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "476813ee-f26a-4068-a5e9-87b5a20fece5" (UID: "476813ee-f26a-4068-a5e9-87b5a20fece5"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.335925 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/476813ee-f26a-4068-a5e9-87b5a20fece5-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "476813ee-f26a-4068-a5e9-87b5a20fece5" (UID: "476813ee-f26a-4068-a5e9-87b5a20fece5"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.339726 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/476813ee-f26a-4068-a5e9-87b5a20fece5-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "476813ee-f26a-4068-a5e9-87b5a20fece5" (UID: "476813ee-f26a-4068-a5e9-87b5a20fece5"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.380527 4874 scope.go:117] "RemoveContainer" containerID="1972bf3236984f49e02049edbedc42d4da80f95fa6301b9653139c5fe969a12b" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.380704 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/476813ee-f26a-4068-a5e9-87b5a20fece5-kube-api-access-qdr22" (OuterVolumeSpecName: "kube-api-access-qdr22") pod "476813ee-f26a-4068-a5e9-87b5a20fece5" (UID: "476813ee-f26a-4068-a5e9-87b5a20fece5"). InnerVolumeSpecName "kube-api-access-qdr22". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.380723 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/476813ee-f26a-4068-a5e9-87b5a20fece5-pod-info" (OuterVolumeSpecName: "pod-info") pod "476813ee-f26a-4068-a5e9-87b5a20fece5" (UID: "476813ee-f26a-4068-a5e9-87b5a20fece5"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.382468 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/476813ee-f26a-4068-a5e9-87b5a20fece5-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "476813ee-f26a-4068-a5e9-87b5a20fece5" (UID: "476813ee-f26a-4068-a5e9-87b5a20fece5"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.385049 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/476813ee-f26a-4068-a5e9-87b5a20fece5-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "476813ee-f26a-4068-a5e9-87b5a20fece5" (UID: "476813ee-f26a-4068-a5e9-87b5a20fece5"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:28:42 crc kubenswrapper[4874]: E0217 16:28:42.394302 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1972bf3236984f49e02049edbedc42d4da80f95fa6301b9653139c5fe969a12b\": container with ID starting with 1972bf3236984f49e02049edbedc42d4da80f95fa6301b9653139c5fe969a12b not found: ID does not exist" containerID="1972bf3236984f49e02049edbedc42d4da80f95fa6301b9653139c5fe969a12b" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.394357 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1972bf3236984f49e02049edbedc42d4da80f95fa6301b9653139c5fe969a12b"} err="failed to get container status \"1972bf3236984f49e02049edbedc42d4da80f95fa6301b9653139c5fe969a12b\": rpc error: code = NotFound desc = could not find container \"1972bf3236984f49e02049edbedc42d4da80f95fa6301b9653139c5fe969a12b\": container with ID starting with 1972bf3236984f49e02049edbedc42d4da80f95fa6301b9653139c5fe969a12b not found: ID does not exist" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.394389 4874 scope.go:117] "RemoveContainer" containerID="b3ac77bc86427c90e262709afcc20a0f4309054628175d07990407541cb4f97c" Feb 17 16:28:42 crc kubenswrapper[4874]: E0217 16:28:42.396055 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b3ac77bc86427c90e262709afcc20a0f4309054628175d07990407541cb4f97c\": container with ID starting with b3ac77bc86427c90e262709afcc20a0f4309054628175d07990407541cb4f97c not found: ID does not exist" containerID="b3ac77bc86427c90e262709afcc20a0f4309054628175d07990407541cb4f97c" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.396538 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3ac77bc86427c90e262709afcc20a0f4309054628175d07990407541cb4f97c"} err="failed to get container status \"b3ac77bc86427c90e262709afcc20a0f4309054628175d07990407541cb4f97c\": rpc error: code = NotFound desc = could not find container \"b3ac77bc86427c90e262709afcc20a0f4309054628175d07990407541cb4f97c\": container with ID starting with b3ac77bc86427c90e262709afcc20a0f4309054628175d07990407541cb4f97c not found: ID does not exist" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.413811 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/476813ee-f26a-4068-a5e9-87b5a20fece5-config-data" (OuterVolumeSpecName: "config-data") pod "476813ee-f26a-4068-a5e9-87b5a20fece5" (UID: "476813ee-f26a-4068-a5e9-87b5a20fece5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.460282 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdr22\" (UniqueName: \"kubernetes.io/projected/476813ee-f26a-4068-a5e9-87b5a20fece5-kube-api-access-qdr22\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.460313 4874 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/476813ee-f26a-4068-a5e9-87b5a20fece5-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.460512 4874 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/476813ee-f26a-4068-a5e9-87b5a20fece5-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.460526 4874 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/476813ee-f26a-4068-a5e9-87b5a20fece5-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.460783 4874 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/476813ee-f26a-4068-a5e9-87b5a20fece5-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.460799 4874 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/476813ee-f26a-4068-a5e9-87b5a20fece5-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.460818 4874 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/476813ee-f26a-4068-a5e9-87b5a20fece5-pod-info\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:42 crc kubenswrapper[4874]: 
I0217 16:28:42.460828 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/476813ee-f26a-4068-a5e9-87b5a20fece5-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.497821 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/476813ee-f26a-4068-a5e9-87b5a20fece5-server-conf" (OuterVolumeSpecName: "server-conf") pod "476813ee-f26a-4068-a5e9-87b5a20fece5" (UID: "476813ee-f26a-4068-a5e9-87b5a20fece5"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.594354 4874 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/476813ee-f26a-4068-a5e9-87b5a20fece5-server-conf\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.690351 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e9a126d5-9bee-465f-9618-3e77c8b2ecfe" (OuterVolumeSpecName: "persistence") pod "476813ee-f26a-4068-a5e9-87b5a20fece5" (UID: "476813ee-f26a-4068-a5e9-87b5a20fece5"). InnerVolumeSpecName "pvc-e9a126d5-9bee-465f-9618-3e77c8b2ecfe". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.701985 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/476813ee-f26a-4068-a5e9-87b5a20fece5-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "476813ee-f26a-4068-a5e9-87b5a20fece5" (UID: "476813ee-f26a-4068-a5e9-87b5a20fece5"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.704002 4874 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-e9a126d5-9bee-465f-9618-3e77c8b2ecfe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e9a126d5-9bee-465f-9618-3e77c8b2ecfe\") on node \"crc\" " Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.704036 4874 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/476813ee-f26a-4068-a5e9-87b5a20fece5-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.714223 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.742710 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.768235 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Feb 17 16:28:42 crc kubenswrapper[4874]: E0217 16:28:42.768830 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed7dc41e-9863-4c74-8675-56fca22db08a" containerName="rabbitmq" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.768853 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed7dc41e-9863-4c74-8675-56fca22db08a" containerName="rabbitmq" Feb 17 16:28:42 crc kubenswrapper[4874]: E0217 16:28:42.768865 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed7dc41e-9863-4c74-8675-56fca22db08a" containerName="setup-container" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.768873 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed7dc41e-9863-4c74-8675-56fca22db08a" containerName="setup-container" Feb 17 16:28:42 crc kubenswrapper[4874]: E0217 16:28:42.768910 4874 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="476813ee-f26a-4068-a5e9-87b5a20fece5" containerName="rabbitmq" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.768918 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="476813ee-f26a-4068-a5e9-87b5a20fece5" containerName="rabbitmq" Feb 17 16:28:42 crc kubenswrapper[4874]: E0217 16:28:42.768951 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="476813ee-f26a-4068-a5e9-87b5a20fece5" containerName="setup-container" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.768959 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="476813ee-f26a-4068-a5e9-87b5a20fece5" containerName="setup-container" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.769228 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed7dc41e-9863-4c74-8675-56fca22db08a" containerName="rabbitmq" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.769261 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="476813ee-f26a-4068-a5e9-87b5a20fece5" containerName="rabbitmq" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.770745 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.790100 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.804346 4874 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.804795 4874 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-e9a126d5-9bee-465f-9618-3e77c8b2ecfe" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e9a126d5-9bee-465f-9618-3e77c8b2ecfe") on node "crc" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.806812 4874 reconciler_common.go:293] "Volume detached for volume \"pvc-e9a126d5-9bee-465f-9618-3e77c8b2ecfe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e9a126d5-9bee-465f-9618-3e77c8b2ecfe\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.908647 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/850560f1-d14c-45d2-9526-e7aa266d3427-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"850560f1-d14c-45d2-9526-e7aa266d3427\") " pod="openstack/rabbitmq-server-2" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.908701 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2e4eece0-fa2e-4553-9b0d-b0622841f608\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e4eece0-fa2e-4553-9b0d-b0622841f608\") pod \"rabbitmq-server-2\" (UID: \"850560f1-d14c-45d2-9526-e7aa266d3427\") " pod="openstack/rabbitmq-server-2" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.908733 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/850560f1-d14c-45d2-9526-e7aa266d3427-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"850560f1-d14c-45d2-9526-e7aa266d3427\") " pod="openstack/rabbitmq-server-2" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.908760 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/850560f1-d14c-45d2-9526-e7aa266d3427-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"850560f1-d14c-45d2-9526-e7aa266d3427\") " pod="openstack/rabbitmq-server-2" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.908845 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/850560f1-d14c-45d2-9526-e7aa266d3427-server-conf\") pod \"rabbitmq-server-2\" (UID: \"850560f1-d14c-45d2-9526-e7aa266d3427\") " pod="openstack/rabbitmq-server-2" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.908871 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/850560f1-d14c-45d2-9526-e7aa266d3427-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"850560f1-d14c-45d2-9526-e7aa266d3427\") " pod="openstack/rabbitmq-server-2" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.908908 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/850560f1-d14c-45d2-9526-e7aa266d3427-pod-info\") pod \"rabbitmq-server-2\" (UID: \"850560f1-d14c-45d2-9526-e7aa266d3427\") " pod="openstack/rabbitmq-server-2" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.908978 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/850560f1-d14c-45d2-9526-e7aa266d3427-config-data\") pod \"rabbitmq-server-2\" (UID: \"850560f1-d14c-45d2-9526-e7aa266d3427\") " pod="openstack/rabbitmq-server-2" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.909036 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n65h7\" (UniqueName: 
\"kubernetes.io/projected/850560f1-d14c-45d2-9526-e7aa266d3427-kube-api-access-n65h7\") pod \"rabbitmq-server-2\" (UID: \"850560f1-d14c-45d2-9526-e7aa266d3427\") " pod="openstack/rabbitmq-server-2" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.909066 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/850560f1-d14c-45d2-9526-e7aa266d3427-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"850560f1-d14c-45d2-9526-e7aa266d3427\") " pod="openstack/rabbitmq-server-2" Feb 17 16:28:42 crc kubenswrapper[4874]: I0217 16:28:42.909133 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/850560f1-d14c-45d2-9526-e7aa266d3427-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"850560f1-d14c-45d2-9526-e7aa266d3427\") " pod="openstack/rabbitmq-server-2" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.010723 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/850560f1-d14c-45d2-9526-e7aa266d3427-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"850560f1-d14c-45d2-9526-e7aa266d3427\") " pod="openstack/rabbitmq-server-2" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.010821 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/850560f1-d14c-45d2-9526-e7aa266d3427-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"850560f1-d14c-45d2-9526-e7aa266d3427\") " pod="openstack/rabbitmq-server-2" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.010849 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2e4eece0-fa2e-4553-9b0d-b0622841f608\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e4eece0-fa2e-4553-9b0d-b0622841f608\") 
pod \"rabbitmq-server-2\" (UID: \"850560f1-d14c-45d2-9526-e7aa266d3427\") " pod="openstack/rabbitmq-server-2" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.010874 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/850560f1-d14c-45d2-9526-e7aa266d3427-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"850560f1-d14c-45d2-9526-e7aa266d3427\") " pod="openstack/rabbitmq-server-2" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.010897 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/850560f1-d14c-45d2-9526-e7aa266d3427-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"850560f1-d14c-45d2-9526-e7aa266d3427\") " pod="openstack/rabbitmq-server-2" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.010957 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/850560f1-d14c-45d2-9526-e7aa266d3427-server-conf\") pod \"rabbitmq-server-2\" (UID: \"850560f1-d14c-45d2-9526-e7aa266d3427\") " pod="openstack/rabbitmq-server-2" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.010982 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/850560f1-d14c-45d2-9526-e7aa266d3427-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"850560f1-d14c-45d2-9526-e7aa266d3427\") " pod="openstack/rabbitmq-server-2" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.011018 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/850560f1-d14c-45d2-9526-e7aa266d3427-pod-info\") pod \"rabbitmq-server-2\" (UID: \"850560f1-d14c-45d2-9526-e7aa266d3427\") " pod="openstack/rabbitmq-server-2" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 
16:28:43.011104 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/850560f1-d14c-45d2-9526-e7aa266d3427-config-data\") pod \"rabbitmq-server-2\" (UID: \"850560f1-d14c-45d2-9526-e7aa266d3427\") " pod="openstack/rabbitmq-server-2" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.011160 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n65h7\" (UniqueName: \"kubernetes.io/projected/850560f1-d14c-45d2-9526-e7aa266d3427-kube-api-access-n65h7\") pod \"rabbitmq-server-2\" (UID: \"850560f1-d14c-45d2-9526-e7aa266d3427\") " pod="openstack/rabbitmq-server-2" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.011184 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/850560f1-d14c-45d2-9526-e7aa266d3427-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"850560f1-d14c-45d2-9526-e7aa266d3427\") " pod="openstack/rabbitmq-server-2" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.011460 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/850560f1-d14c-45d2-9526-e7aa266d3427-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"850560f1-d14c-45d2-9526-e7aa266d3427\") " pod="openstack/rabbitmq-server-2" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.011491 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/850560f1-d14c-45d2-9526-e7aa266d3427-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"850560f1-d14c-45d2-9526-e7aa266d3427\") " pod="openstack/rabbitmq-server-2" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.012167 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/850560f1-d14c-45d2-9526-e7aa266d3427-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"850560f1-d14c-45d2-9526-e7aa266d3427\") " pod="openstack/rabbitmq-server-2" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.012257 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/850560f1-d14c-45d2-9526-e7aa266d3427-config-data\") pod \"rabbitmq-server-2\" (UID: \"850560f1-d14c-45d2-9526-e7aa266d3427\") " pod="openstack/rabbitmq-server-2" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.012647 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/850560f1-d14c-45d2-9526-e7aa266d3427-server-conf\") pod \"rabbitmq-server-2\" (UID: \"850560f1-d14c-45d2-9526-e7aa266d3427\") " pod="openstack/rabbitmq-server-2" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.015361 4874 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.015394 4874 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2e4eece0-fa2e-4553-9b0d-b0622841f608\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e4eece0-fa2e-4553-9b0d-b0622841f608\") pod \"rabbitmq-server-2\" (UID: \"850560f1-d14c-45d2-9526-e7aa266d3427\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/cd2ed7939a07d83111643c672ec8331054a35fd031224fadb7579e462a845591/globalmount\"" pod="openstack/rabbitmq-server-2" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.015870 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/850560f1-d14c-45d2-9526-e7aa266d3427-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"850560f1-d14c-45d2-9526-e7aa266d3427\") " pod="openstack/rabbitmq-server-2" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.016193 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/850560f1-d14c-45d2-9526-e7aa266d3427-pod-info\") pod \"rabbitmq-server-2\" (UID: \"850560f1-d14c-45d2-9526-e7aa266d3427\") " pod="openstack/rabbitmq-server-2" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.017337 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/850560f1-d14c-45d2-9526-e7aa266d3427-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"850560f1-d14c-45d2-9526-e7aa266d3427\") " pod="openstack/rabbitmq-server-2" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.017732 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/850560f1-d14c-45d2-9526-e7aa266d3427-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"850560f1-d14c-45d2-9526-e7aa266d3427\") " 
pod="openstack/rabbitmq-server-2" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.040154 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n65h7\" (UniqueName: \"kubernetes.io/projected/850560f1-d14c-45d2-9526-e7aa266d3427-kube-api-access-n65h7\") pod \"rabbitmq-server-2\" (UID: \"850560f1-d14c-45d2-9526-e7aa266d3427\") " pod="openstack/rabbitmq-server-2" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.082297 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2e4eece0-fa2e-4553-9b0d-b0622841f608\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e4eece0-fa2e-4553-9b0d-b0622841f608\") pod \"rabbitmq-server-2\" (UID: \"850560f1-d14c-45d2-9526-e7aa266d3427\") " pod="openstack/rabbitmq-server-2" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.113302 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.132758 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"476813ee-f26a-4068-a5e9-87b5a20fece5","Type":"ContainerDied","Data":"0b5d0c2ebe5cd9bb260c9898ef7e9a25e1d7f87345021cc7434e62138aa39678"} Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.132811 4874 scope.go:117] "RemoveContainer" containerID="db19969ed07d23d3e402cc5f7b337eabe216ce1076931c2f88d450fecfb27ff6" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.132918 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.168500 4874 scope.go:117] "RemoveContainer" containerID="d0a20d9d2bae0c7e825b68fea651ef557c11736643886e7d9fc0aae9bd75ea87" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.181107 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.191150 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.266184 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.268767 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.271965 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.274665 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.277509 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.277871 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.278066 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-vchwc" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.278241 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 
16:28:43.279026 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.308413 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.427488 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/efb51498-72fd-4e39-8bdd-dda0b1abe44a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"efb51498-72fd-4e39-8bdd-dda0b1abe44a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.427558 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e9a126d5-9bee-465f-9618-3e77c8b2ecfe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e9a126d5-9bee-465f-9618-3e77c8b2ecfe\") pod \"rabbitmq-cell1-server-0\" (UID: \"efb51498-72fd-4e39-8bdd-dda0b1abe44a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.427626 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/efb51498-72fd-4e39-8bdd-dda0b1abe44a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"efb51498-72fd-4e39-8bdd-dda0b1abe44a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.427657 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/efb51498-72fd-4e39-8bdd-dda0b1abe44a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"efb51498-72fd-4e39-8bdd-dda0b1abe44a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.427700 4874 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/efb51498-72fd-4e39-8bdd-dda0b1abe44a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"efb51498-72fd-4e39-8bdd-dda0b1abe44a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.427760 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/efb51498-72fd-4e39-8bdd-dda0b1abe44a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"efb51498-72fd-4e39-8bdd-dda0b1abe44a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.427831 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/efb51498-72fd-4e39-8bdd-dda0b1abe44a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"efb51498-72fd-4e39-8bdd-dda0b1abe44a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.427864 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/efb51498-72fd-4e39-8bdd-dda0b1abe44a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"efb51498-72fd-4e39-8bdd-dda0b1abe44a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.427899 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fss5f\" (UniqueName: \"kubernetes.io/projected/efb51498-72fd-4e39-8bdd-dda0b1abe44a-kube-api-access-fss5f\") pod \"rabbitmq-cell1-server-0\" (UID: \"efb51498-72fd-4e39-8bdd-dda0b1abe44a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.428143 4874 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/efb51498-72fd-4e39-8bdd-dda0b1abe44a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"efb51498-72fd-4e39-8bdd-dda0b1abe44a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.428227 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/efb51498-72fd-4e39-8bdd-dda0b1abe44a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"efb51498-72fd-4e39-8bdd-dda0b1abe44a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.530277 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/efb51498-72fd-4e39-8bdd-dda0b1abe44a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"efb51498-72fd-4e39-8bdd-dda0b1abe44a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.530401 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/efb51498-72fd-4e39-8bdd-dda0b1abe44a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"efb51498-72fd-4e39-8bdd-dda0b1abe44a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.530505 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/efb51498-72fd-4e39-8bdd-dda0b1abe44a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"efb51498-72fd-4e39-8bdd-dda0b1abe44a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.530540 4874 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-e9a126d5-9bee-465f-9618-3e77c8b2ecfe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e9a126d5-9bee-465f-9618-3e77c8b2ecfe\") pod \"rabbitmq-cell1-server-0\" (UID: \"efb51498-72fd-4e39-8bdd-dda0b1abe44a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.530609 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/efb51498-72fd-4e39-8bdd-dda0b1abe44a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"efb51498-72fd-4e39-8bdd-dda0b1abe44a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.530635 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/efb51498-72fd-4e39-8bdd-dda0b1abe44a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"efb51498-72fd-4e39-8bdd-dda0b1abe44a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.530696 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/efb51498-72fd-4e39-8bdd-dda0b1abe44a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"efb51498-72fd-4e39-8bdd-dda0b1abe44a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.530771 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/efb51498-72fd-4e39-8bdd-dda0b1abe44a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"efb51498-72fd-4e39-8bdd-dda0b1abe44a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.530867 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/efb51498-72fd-4e39-8bdd-dda0b1abe44a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"efb51498-72fd-4e39-8bdd-dda0b1abe44a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.530879 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/efb51498-72fd-4e39-8bdd-dda0b1abe44a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"efb51498-72fd-4e39-8bdd-dda0b1abe44a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.530982 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/efb51498-72fd-4e39-8bdd-dda0b1abe44a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"efb51498-72fd-4e39-8bdd-dda0b1abe44a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.531629 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/efb51498-72fd-4e39-8bdd-dda0b1abe44a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"efb51498-72fd-4e39-8bdd-dda0b1abe44a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.532046 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/efb51498-72fd-4e39-8bdd-dda0b1abe44a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"efb51498-72fd-4e39-8bdd-dda0b1abe44a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.532259 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fss5f\" (UniqueName: \"kubernetes.io/projected/efb51498-72fd-4e39-8bdd-dda0b1abe44a-kube-api-access-fss5f\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"efb51498-72fd-4e39-8bdd-dda0b1abe44a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.535374 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/efb51498-72fd-4e39-8bdd-dda0b1abe44a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"efb51498-72fd-4e39-8bdd-dda0b1abe44a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.537557 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/efb51498-72fd-4e39-8bdd-dda0b1abe44a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"efb51498-72fd-4e39-8bdd-dda0b1abe44a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.538919 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/efb51498-72fd-4e39-8bdd-dda0b1abe44a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"efb51498-72fd-4e39-8bdd-dda0b1abe44a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.539492 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/efb51498-72fd-4e39-8bdd-dda0b1abe44a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"efb51498-72fd-4e39-8bdd-dda0b1abe44a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.540738 4874 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.541337 4874 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e9a126d5-9bee-465f-9618-3e77c8b2ecfe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e9a126d5-9bee-465f-9618-3e77c8b2ecfe\") pod \"rabbitmq-cell1-server-0\" (UID: \"efb51498-72fd-4e39-8bdd-dda0b1abe44a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6398513e7d6802ecff0c7960070d40c948d940184ed62b9347789a83b447027a/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.545861 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/efb51498-72fd-4e39-8bdd-dda0b1abe44a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"efb51498-72fd-4e39-8bdd-dda0b1abe44a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.552148 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/efb51498-72fd-4e39-8bdd-dda0b1abe44a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"efb51498-72fd-4e39-8bdd-dda0b1abe44a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.576796 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fss5f\" (UniqueName: \"kubernetes.io/projected/efb51498-72fd-4e39-8bdd-dda0b1abe44a-kube-api-access-fss5f\") pod \"rabbitmq-cell1-server-0\" (UID: \"efb51498-72fd-4e39-8bdd-dda0b1abe44a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.606801 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e9a126d5-9bee-465f-9618-3e77c8b2ecfe\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e9a126d5-9bee-465f-9618-3e77c8b2ecfe\") pod \"rabbitmq-cell1-server-0\" (UID: \"efb51498-72fd-4e39-8bdd-dda0b1abe44a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.677197 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 17 16:28:43 crc kubenswrapper[4874]: W0217 16:28:43.680589 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod850560f1_d14c_45d2_9526_e7aa266d3427.slice/crio-eb34757107a62406f8c14945abb73d47eb099f7b0e0db8b7c828f84e856d6ece WatchSource:0}: Error finding container eb34757107a62406f8c14945abb73d47eb099f7b0e0db8b7c828f84e856d6ece: Status 404 returned error can't find the container with id eb34757107a62406f8c14945abb73d47eb099f7b0e0db8b7c828f84e856d6ece Feb 17 16:28:43 crc kubenswrapper[4874]: I0217 16:28:43.899674 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:28:44 crc kubenswrapper[4874]: I0217 16:28:44.147045 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"850560f1-d14c-45d2-9526-e7aa266d3427","Type":"ContainerStarted","Data":"eb34757107a62406f8c14945abb73d47eb099f7b0e0db8b7c828f84e856d6ece"} Feb 17 16:28:44 crc kubenswrapper[4874]: W0217 16:28:44.420474 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefb51498_72fd_4e39_8bdd_dda0b1abe44a.slice/crio-432fd56b6fa8d4832a280d5682dbfb917fb374c54c7f6c54a12cd048bb34d6b6 WatchSource:0}: Error finding container 432fd56b6fa8d4832a280d5682dbfb917fb374c54c7f6c54a12cd048bb34d6b6: Status 404 returned error can't find the container with id 432fd56b6fa8d4832a280d5682dbfb917fb374c54c7f6c54a12cd048bb34d6b6 Feb 17 16:28:44 crc kubenswrapper[4874]: I0217 16:28:44.423950 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:28:44 crc kubenswrapper[4874]: I0217 16:28:44.480307 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="476813ee-f26a-4068-a5e9-87b5a20fece5" path="/var/lib/kubelet/pods/476813ee-f26a-4068-a5e9-87b5a20fece5/volumes" Feb 17 16:28:44 crc kubenswrapper[4874]: I0217 16:28:44.481403 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed7dc41e-9863-4c74-8675-56fca22db08a" path="/var/lib/kubelet/pods/ed7dc41e-9863-4c74-8675-56fca22db08a/volumes" Feb 17 16:28:45 crc kubenswrapper[4874]: I0217 16:28:45.157759 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"efb51498-72fd-4e39-8bdd-dda0b1abe44a","Type":"ContainerStarted","Data":"432fd56b6fa8d4832a280d5682dbfb917fb374c54c7f6c54a12cd048bb34d6b6"} Feb 17 16:28:45 crc kubenswrapper[4874]: I0217 16:28:45.314713 4874 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-5b75489c6f-rht8r"] Feb 17 16:28:45 crc kubenswrapper[4874]: I0217 16:28:45.317115 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-rht8r" Feb 17 16:28:45 crc kubenswrapper[4874]: I0217 16:28:45.319969 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 17 16:28:45 crc kubenswrapper[4874]: I0217 16:28:45.332472 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-rht8r"] Feb 17 16:28:45 crc kubenswrapper[4874]: I0217 16:28:45.380582 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbcbc20c-0642-45c3-a518-94127296de34-config\") pod \"dnsmasq-dns-5b75489c6f-rht8r\" (UID: \"dbcbc20c-0642-45c3-a518-94127296de34\") " pod="openstack/dnsmasq-dns-5b75489c6f-rht8r" Feb 17 16:28:45 crc kubenswrapper[4874]: I0217 16:28:45.380918 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dbcbc20c-0642-45c3-a518-94127296de34-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-rht8r\" (UID: \"dbcbc20c-0642-45c3-a518-94127296de34\") " pod="openstack/dnsmasq-dns-5b75489c6f-rht8r" Feb 17 16:28:45 crc kubenswrapper[4874]: I0217 16:28:45.380977 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/dbcbc20c-0642-45c3-a518-94127296de34-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-rht8r\" (UID: \"dbcbc20c-0642-45c3-a518-94127296de34\") " pod="openstack/dnsmasq-dns-5b75489c6f-rht8r" Feb 17 16:28:45 crc kubenswrapper[4874]: I0217 16:28:45.381047 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/dbcbc20c-0642-45c3-a518-94127296de34-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-rht8r\" (UID: \"dbcbc20c-0642-45c3-a518-94127296de34\") " pod="openstack/dnsmasq-dns-5b75489c6f-rht8r" Feb 17 16:28:45 crc kubenswrapper[4874]: I0217 16:28:45.381275 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnjsn\" (UniqueName: \"kubernetes.io/projected/dbcbc20c-0642-45c3-a518-94127296de34-kube-api-access-bnjsn\") pod \"dnsmasq-dns-5b75489c6f-rht8r\" (UID: \"dbcbc20c-0642-45c3-a518-94127296de34\") " pod="openstack/dnsmasq-dns-5b75489c6f-rht8r" Feb 17 16:28:45 crc kubenswrapper[4874]: I0217 16:28:45.381415 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbcbc20c-0642-45c3-a518-94127296de34-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-rht8r\" (UID: \"dbcbc20c-0642-45c3-a518-94127296de34\") " pod="openstack/dnsmasq-dns-5b75489c6f-rht8r" Feb 17 16:28:45 crc kubenswrapper[4874]: I0217 16:28:45.381493 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dbcbc20c-0642-45c3-a518-94127296de34-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-rht8r\" (UID: \"dbcbc20c-0642-45c3-a518-94127296de34\") " pod="openstack/dnsmasq-dns-5b75489c6f-rht8r" Feb 17 16:28:45 crc kubenswrapper[4874]: I0217 16:28:45.483645 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dbcbc20c-0642-45c3-a518-94127296de34-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-rht8r\" (UID: \"dbcbc20c-0642-45c3-a518-94127296de34\") " pod="openstack/dnsmasq-dns-5b75489c6f-rht8r" Feb 17 16:28:45 crc kubenswrapper[4874]: I0217 16:28:45.484171 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/dbcbc20c-0642-45c3-a518-94127296de34-config\") pod \"dnsmasq-dns-5b75489c6f-rht8r\" (UID: \"dbcbc20c-0642-45c3-a518-94127296de34\") " pod="openstack/dnsmasq-dns-5b75489c6f-rht8r" Feb 17 16:28:45 crc kubenswrapper[4874]: I0217 16:28:45.484399 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dbcbc20c-0642-45c3-a518-94127296de34-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-rht8r\" (UID: \"dbcbc20c-0642-45c3-a518-94127296de34\") " pod="openstack/dnsmasq-dns-5b75489c6f-rht8r" Feb 17 16:28:45 crc kubenswrapper[4874]: I0217 16:28:45.484459 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/dbcbc20c-0642-45c3-a518-94127296de34-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-rht8r\" (UID: \"dbcbc20c-0642-45c3-a518-94127296de34\") " pod="openstack/dnsmasq-dns-5b75489c6f-rht8r" Feb 17 16:28:45 crc kubenswrapper[4874]: I0217 16:28:45.484514 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dbcbc20c-0642-45c3-a518-94127296de34-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-rht8r\" (UID: \"dbcbc20c-0642-45c3-a518-94127296de34\") " pod="openstack/dnsmasq-dns-5b75489c6f-rht8r" Feb 17 16:28:45 crc kubenswrapper[4874]: I0217 16:28:45.484554 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnjsn\" (UniqueName: \"kubernetes.io/projected/dbcbc20c-0642-45c3-a518-94127296de34-kube-api-access-bnjsn\") pod \"dnsmasq-dns-5b75489c6f-rht8r\" (UID: \"dbcbc20c-0642-45c3-a518-94127296de34\") " pod="openstack/dnsmasq-dns-5b75489c6f-rht8r" Feb 17 16:28:45 crc kubenswrapper[4874]: I0217 16:28:45.484659 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/dbcbc20c-0642-45c3-a518-94127296de34-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-rht8r\" (UID: \"dbcbc20c-0642-45c3-a518-94127296de34\") " pod="openstack/dnsmasq-dns-5b75489c6f-rht8r" Feb 17 16:28:45 crc kubenswrapper[4874]: I0217 16:28:45.485168 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dbcbc20c-0642-45c3-a518-94127296de34-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-rht8r\" (UID: \"dbcbc20c-0642-45c3-a518-94127296de34\") " pod="openstack/dnsmasq-dns-5b75489c6f-rht8r" Feb 17 16:28:45 crc kubenswrapper[4874]: I0217 16:28:45.485220 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dbcbc20c-0642-45c3-a518-94127296de34-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-rht8r\" (UID: \"dbcbc20c-0642-45c3-a518-94127296de34\") " pod="openstack/dnsmasq-dns-5b75489c6f-rht8r" Feb 17 16:28:45 crc kubenswrapper[4874]: I0217 16:28:45.485245 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbcbc20c-0642-45c3-a518-94127296de34-config\") pod \"dnsmasq-dns-5b75489c6f-rht8r\" (UID: \"dbcbc20c-0642-45c3-a518-94127296de34\") " pod="openstack/dnsmasq-dns-5b75489c6f-rht8r" Feb 17 16:28:45 crc kubenswrapper[4874]: I0217 16:28:45.485460 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/dbcbc20c-0642-45c3-a518-94127296de34-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-rht8r\" (UID: \"dbcbc20c-0642-45c3-a518-94127296de34\") " pod="openstack/dnsmasq-dns-5b75489c6f-rht8r" Feb 17 16:28:45 crc kubenswrapper[4874]: I0217 16:28:45.485689 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbcbc20c-0642-45c3-a518-94127296de34-dns-svc\") pod 
\"dnsmasq-dns-5b75489c6f-rht8r\" (UID: \"dbcbc20c-0642-45c3-a518-94127296de34\") " pod="openstack/dnsmasq-dns-5b75489c6f-rht8r" Feb 17 16:28:45 crc kubenswrapper[4874]: I0217 16:28:45.486222 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dbcbc20c-0642-45c3-a518-94127296de34-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-rht8r\" (UID: \"dbcbc20c-0642-45c3-a518-94127296de34\") " pod="openstack/dnsmasq-dns-5b75489c6f-rht8r" Feb 17 16:28:45 crc kubenswrapper[4874]: I0217 16:28:45.515609 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnjsn\" (UniqueName: \"kubernetes.io/projected/dbcbc20c-0642-45c3-a518-94127296de34-kube-api-access-bnjsn\") pod \"dnsmasq-dns-5b75489c6f-rht8r\" (UID: \"dbcbc20c-0642-45c3-a518-94127296de34\") " pod="openstack/dnsmasq-dns-5b75489c6f-rht8r" Feb 17 16:28:45 crc kubenswrapper[4874]: I0217 16:28:45.643692 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-rht8r" Feb 17 16:28:46 crc kubenswrapper[4874]: I0217 16:28:46.138500 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-rht8r"] Feb 17 16:28:46 crc kubenswrapper[4874]: I0217 16:28:46.170687 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"850560f1-d14c-45d2-9526-e7aa266d3427","Type":"ContainerStarted","Data":"ea7e9d03619473f3e9d9dabb419f2ee4b6969876ac0d9729d3c66f759d848789"} Feb 17 16:28:46 crc kubenswrapper[4874]: I0217 16:28:46.173535 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-rht8r" event={"ID":"dbcbc20c-0642-45c3-a518-94127296de34","Type":"ContainerStarted","Data":"f801c5bb45a4fe442a82d6a3802e819243fdbb6dc5a6ab7aabe58e5e92dcc14b"} Feb 17 16:28:47 crc kubenswrapper[4874]: I0217 16:28:47.187553 4874 generic.go:334] "Generic (PLEG): container finished" podID="dbcbc20c-0642-45c3-a518-94127296de34" containerID="bf272659b26d6d7a01930a7a4fc3022b39020c81a9626441eed0a21056412ad2" exitCode=0 Feb 17 16:28:47 crc kubenswrapper[4874]: I0217 16:28:47.187616 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-rht8r" event={"ID":"dbcbc20c-0642-45c3-a518-94127296de34","Type":"ContainerDied","Data":"bf272659b26d6d7a01930a7a4fc3022b39020c81a9626441eed0a21056412ad2"} Feb 17 16:28:47 crc kubenswrapper[4874]: I0217 16:28:47.190457 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"efb51498-72fd-4e39-8bdd-dda0b1abe44a","Type":"ContainerStarted","Data":"dc24a7186281417bc3e7878186ae852155495c062c7b6221e59aad3397f8bd15"} Feb 17 16:28:48 crc kubenswrapper[4874]: I0217 16:28:48.202058 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-rht8r" 
event={"ID":"dbcbc20c-0642-45c3-a518-94127296de34","Type":"ContainerStarted","Data":"0ce222cd544c031d56e02a14b9fed7da86839940eb4e89f0d84a9845ce0ba7f9"} Feb 17 16:28:48 crc kubenswrapper[4874]: I0217 16:28:48.225474 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b75489c6f-rht8r" podStartSLOduration=3.225450222 podStartE2EDuration="3.225450222s" podCreationTimestamp="2026-02-17 16:28:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:28:48.216342847 +0000 UTC m=+1538.510731428" watchObservedRunningTime="2026-02-17 16:28:48.225450222 +0000 UTC m=+1538.519838783" Feb 17 16:28:49 crc kubenswrapper[4874]: I0217 16:28:49.212318 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b75489c6f-rht8r" Feb 17 16:28:50 crc kubenswrapper[4874]: I0217 16:28:50.140778 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tsb7r" podUID="5bce9f58-6b2c-4af7-8c50-374e27e96f5c" containerName="registry-server" probeResult="failure" output=< Feb 17 16:28:50 crc kubenswrapper[4874]: timeout: failed to connect service ":50051" within 1s Feb 17 16:28:50 crc kubenswrapper[4874]: > Feb 17 16:28:54 crc kubenswrapper[4874]: E0217 16:28:54.463338 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:28:55 crc kubenswrapper[4874]: I0217 16:28:55.396062 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l9628"] Feb 17 16:28:55 crc kubenswrapper[4874]: I0217 16:28:55.399127 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l9628" Feb 17 16:28:55 crc kubenswrapper[4874]: I0217 16:28:55.413787 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l9628"] Feb 17 16:28:55 crc kubenswrapper[4874]: I0217 16:28:55.486098 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 17 16:28:55 crc kubenswrapper[4874]: E0217 16:28:55.590432 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:28:55 crc kubenswrapper[4874]: E0217 16:28:55.590497 4874 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:28:55 crc kubenswrapper[4874]: E0217 16:28:55.590633 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n699h646hf7h59dhdch5d8h679h9dhdch5c7hd5h5bch655h5bfh674h596h5d6h64dh65bh694h67fh66dh5bdhd7h568h697h58bh5b4h59fh694h584h656q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6zkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(cc29c300-b515-47d8-9326-1839ed7772b4): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:28:55 crc kubenswrapper[4874]: E0217 16:28:55.591971 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:28:55 crc kubenswrapper[4874]: I0217 16:28:55.597016 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/951dfd00-4b6d-4405-bef4-eac337033cb1-catalog-content\") pod \"redhat-marketplace-l9628\" (UID: \"951dfd00-4b6d-4405-bef4-eac337033cb1\") " pod="openshift-marketplace/redhat-marketplace-l9628" Feb 17 16:28:55 crc kubenswrapper[4874]: I0217 16:28:55.597197 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/951dfd00-4b6d-4405-bef4-eac337033cb1-utilities\") pod \"redhat-marketplace-l9628\" (UID: \"951dfd00-4b6d-4405-bef4-eac337033cb1\") " pod="openshift-marketplace/redhat-marketplace-l9628" Feb 17 16:28:55 crc kubenswrapper[4874]: I0217 16:28:55.597573 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln8hf\" (UniqueName: \"kubernetes.io/projected/951dfd00-4b6d-4405-bef4-eac337033cb1-kube-api-access-ln8hf\") pod \"redhat-marketplace-l9628\" (UID: \"951dfd00-4b6d-4405-bef4-eac337033cb1\") " pod="openshift-marketplace/redhat-marketplace-l9628" Feb 17 16:28:55 crc kubenswrapper[4874]: I0217 16:28:55.646011 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b75489c6f-rht8r" Feb 17 16:28:55 crc kubenswrapper[4874]: I0217 16:28:55.701562 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/951dfd00-4b6d-4405-bef4-eac337033cb1-catalog-content\") pod \"redhat-marketplace-l9628\" (UID: \"951dfd00-4b6d-4405-bef4-eac337033cb1\") " pod="openshift-marketplace/redhat-marketplace-l9628" Feb 17 16:28:55 crc kubenswrapper[4874]: I0217 16:28:55.701775 4874 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/951dfd00-4b6d-4405-bef4-eac337033cb1-utilities\") pod \"redhat-marketplace-l9628\" (UID: \"951dfd00-4b6d-4405-bef4-eac337033cb1\") " pod="openshift-marketplace/redhat-marketplace-l9628" Feb 17 16:28:55 crc kubenswrapper[4874]: I0217 16:28:55.702039 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/951dfd00-4b6d-4405-bef4-eac337033cb1-catalog-content\") pod \"redhat-marketplace-l9628\" (UID: \"951dfd00-4b6d-4405-bef4-eac337033cb1\") " pod="openshift-marketplace/redhat-marketplace-l9628" Feb 17 16:28:55 crc kubenswrapper[4874]: I0217 16:28:55.702374 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/951dfd00-4b6d-4405-bef4-eac337033cb1-utilities\") pod \"redhat-marketplace-l9628\" (UID: \"951dfd00-4b6d-4405-bef4-eac337033cb1\") " pod="openshift-marketplace/redhat-marketplace-l9628" Feb 17 16:28:55 crc kubenswrapper[4874]: I0217 16:28:55.702850 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln8hf\" (UniqueName: \"kubernetes.io/projected/951dfd00-4b6d-4405-bef4-eac337033cb1-kube-api-access-ln8hf\") pod \"redhat-marketplace-l9628\" (UID: \"951dfd00-4b6d-4405-bef4-eac337033cb1\") " pod="openshift-marketplace/redhat-marketplace-l9628" Feb 17 16:28:55 crc kubenswrapper[4874]: I0217 16:28:55.734412 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-srfbf"] Feb 17 16:28:55 crc kubenswrapper[4874]: I0217 16:28:55.734847 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f84f9ccf-srfbf" podUID="440002d4-28a6-4e11-b188-1921f660e282" containerName="dnsmasq-dns" containerID="cri-o://c99e5edddda76210735031ccc2041266f5be1c827131f843824f39b0a51791ad" 
gracePeriod=10 Feb 17 16:28:55 crc kubenswrapper[4874]: I0217 16:28:55.748845 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln8hf\" (UniqueName: \"kubernetes.io/projected/951dfd00-4b6d-4405-bef4-eac337033cb1-kube-api-access-ln8hf\") pod \"redhat-marketplace-l9628\" (UID: \"951dfd00-4b6d-4405-bef4-eac337033cb1\") " pod="openshift-marketplace/redhat-marketplace-l9628" Feb 17 16:28:55 crc kubenswrapper[4874]: I0217 16:28:55.989255 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-4tg48"] Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:55.991365 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d75f767dc-4tg48" Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.016162 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-4tg48"] Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.029622 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1e1acdf-f464-4e6a-bfac-4109880de91a-config\") pod \"dnsmasq-dns-5d75f767dc-4tg48\" (UID: \"e1e1acdf-f464-4e6a-bfac-4109880de91a\") " pod="openstack/dnsmasq-dns-5d75f767dc-4tg48" Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.029736 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e1e1acdf-f464-4e6a-bfac-4109880de91a-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-4tg48\" (UID: \"e1e1acdf-f464-4e6a-bfac-4109880de91a\") " pod="openstack/dnsmasq-dns-5d75f767dc-4tg48" Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.029822 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr4wl\" (UniqueName: 
\"kubernetes.io/projected/e1e1acdf-f464-4e6a-bfac-4109880de91a-kube-api-access-wr4wl\") pod \"dnsmasq-dns-5d75f767dc-4tg48\" (UID: \"e1e1acdf-f464-4e6a-bfac-4109880de91a\") " pod="openstack/dnsmasq-dns-5d75f767dc-4tg48" Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.029844 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e1e1acdf-f464-4e6a-bfac-4109880de91a-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-4tg48\" (UID: \"e1e1acdf-f464-4e6a-bfac-4109880de91a\") " pod="openstack/dnsmasq-dns-5d75f767dc-4tg48" Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.029874 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1e1acdf-f464-4e6a-bfac-4109880de91a-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-4tg48\" (UID: \"e1e1acdf-f464-4e6a-bfac-4109880de91a\") " pod="openstack/dnsmasq-dns-5d75f767dc-4tg48" Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.030060 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1e1acdf-f464-4e6a-bfac-4109880de91a-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-4tg48\" (UID: \"e1e1acdf-f464-4e6a-bfac-4109880de91a\") " pod="openstack/dnsmasq-dns-5d75f767dc-4tg48" Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.030241 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1e1acdf-f464-4e6a-bfac-4109880de91a-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-4tg48\" (UID: \"e1e1acdf-f464-4e6a-bfac-4109880de91a\") " pod="openstack/dnsmasq-dns-5d75f767dc-4tg48" Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.042766 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l9628" Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.132226 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1e1acdf-f464-4e6a-bfac-4109880de91a-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-4tg48\" (UID: \"e1e1acdf-f464-4e6a-bfac-4109880de91a\") " pod="openstack/dnsmasq-dns-5d75f767dc-4tg48" Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.132299 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1e1acdf-f464-4e6a-bfac-4109880de91a-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-4tg48\" (UID: \"e1e1acdf-f464-4e6a-bfac-4109880de91a\") " pod="openstack/dnsmasq-dns-5d75f767dc-4tg48" Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.132323 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1e1acdf-f464-4e6a-bfac-4109880de91a-config\") pod \"dnsmasq-dns-5d75f767dc-4tg48\" (UID: \"e1e1acdf-f464-4e6a-bfac-4109880de91a\") " pod="openstack/dnsmasq-dns-5d75f767dc-4tg48" Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.132369 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e1e1acdf-f464-4e6a-bfac-4109880de91a-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-4tg48\" (UID: \"e1e1acdf-f464-4e6a-bfac-4109880de91a\") " pod="openstack/dnsmasq-dns-5d75f767dc-4tg48" Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.132413 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr4wl\" (UniqueName: \"kubernetes.io/projected/e1e1acdf-f464-4e6a-bfac-4109880de91a-kube-api-access-wr4wl\") pod \"dnsmasq-dns-5d75f767dc-4tg48\" (UID: \"e1e1acdf-f464-4e6a-bfac-4109880de91a\") " 
pod="openstack/dnsmasq-dns-5d75f767dc-4tg48" Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.132431 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e1e1acdf-f464-4e6a-bfac-4109880de91a-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-4tg48\" (UID: \"e1e1acdf-f464-4e6a-bfac-4109880de91a\") " pod="openstack/dnsmasq-dns-5d75f767dc-4tg48" Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.132451 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1e1acdf-f464-4e6a-bfac-4109880de91a-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-4tg48\" (UID: \"e1e1acdf-f464-4e6a-bfac-4109880de91a\") " pod="openstack/dnsmasq-dns-5d75f767dc-4tg48" Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.134587 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1e1acdf-f464-4e6a-bfac-4109880de91a-config\") pod \"dnsmasq-dns-5d75f767dc-4tg48\" (UID: \"e1e1acdf-f464-4e6a-bfac-4109880de91a\") " pod="openstack/dnsmasq-dns-5d75f767dc-4tg48" Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.136447 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1e1acdf-f464-4e6a-bfac-4109880de91a-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-4tg48\" (UID: \"e1e1acdf-f464-4e6a-bfac-4109880de91a\") " pod="openstack/dnsmasq-dns-5d75f767dc-4tg48" Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.137223 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1e1acdf-f464-4e6a-bfac-4109880de91a-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-4tg48\" (UID: \"e1e1acdf-f464-4e6a-bfac-4109880de91a\") " pod="openstack/dnsmasq-dns-5d75f767dc-4tg48" Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 
16:28:56.137581 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1e1acdf-f464-4e6a-bfac-4109880de91a-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-4tg48\" (UID: \"e1e1acdf-f464-4e6a-bfac-4109880de91a\") " pod="openstack/dnsmasq-dns-5d75f767dc-4tg48" Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.138301 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e1e1acdf-f464-4e6a-bfac-4109880de91a-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-4tg48\" (UID: \"e1e1acdf-f464-4e6a-bfac-4109880de91a\") " pod="openstack/dnsmasq-dns-5d75f767dc-4tg48" Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.141238 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e1e1acdf-f464-4e6a-bfac-4109880de91a-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-4tg48\" (UID: \"e1e1acdf-f464-4e6a-bfac-4109880de91a\") " pod="openstack/dnsmasq-dns-5d75f767dc-4tg48" Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.192210 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr4wl\" (UniqueName: \"kubernetes.io/projected/e1e1acdf-f464-4e6a-bfac-4109880de91a-kube-api-access-wr4wl\") pod \"dnsmasq-dns-5d75f767dc-4tg48\" (UID: \"e1e1acdf-f464-4e6a-bfac-4109880de91a\") " pod="openstack/dnsmasq-dns-5d75f767dc-4tg48" Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.329104 4874 generic.go:334] "Generic (PLEG): container finished" podID="440002d4-28a6-4e11-b188-1921f660e282" containerID="c99e5edddda76210735031ccc2041266f5be1c827131f843824f39b0a51791ad" exitCode=0 Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.342712 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-srfbf" 
event={"ID":"440002d4-28a6-4e11-b188-1921f660e282","Type":"ContainerDied","Data":"c99e5edddda76210735031ccc2041266f5be1c827131f843824f39b0a51791ad"} Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.343138 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d75f767dc-4tg48" Feb 17 16:28:56 crc kubenswrapper[4874]: E0217 16:28:56.346649 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.665748 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-srfbf" Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.767968 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/440002d4-28a6-4e11-b188-1921f660e282-ovsdbserver-nb\") pod \"440002d4-28a6-4e11-b188-1921f660e282\" (UID: \"440002d4-28a6-4e11-b188-1921f660e282\") " Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.768006 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckdst\" (UniqueName: \"kubernetes.io/projected/440002d4-28a6-4e11-b188-1921f660e282-kube-api-access-ckdst\") pod \"440002d4-28a6-4e11-b188-1921f660e282\" (UID: \"440002d4-28a6-4e11-b188-1921f660e282\") " Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.768158 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/440002d4-28a6-4e11-b188-1921f660e282-dns-svc\") pod \"440002d4-28a6-4e11-b188-1921f660e282\" (UID: \"440002d4-28a6-4e11-b188-1921f660e282\") " Feb 17 
16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.768214 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/440002d4-28a6-4e11-b188-1921f660e282-ovsdbserver-sb\") pod \"440002d4-28a6-4e11-b188-1921f660e282\" (UID: \"440002d4-28a6-4e11-b188-1921f660e282\") " Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.768261 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/440002d4-28a6-4e11-b188-1921f660e282-dns-swift-storage-0\") pod \"440002d4-28a6-4e11-b188-1921f660e282\" (UID: \"440002d4-28a6-4e11-b188-1921f660e282\") " Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.768357 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/440002d4-28a6-4e11-b188-1921f660e282-config\") pod \"440002d4-28a6-4e11-b188-1921f660e282\" (UID: \"440002d4-28a6-4e11-b188-1921f660e282\") " Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.775717 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/440002d4-28a6-4e11-b188-1921f660e282-kube-api-access-ckdst" (OuterVolumeSpecName: "kube-api-access-ckdst") pod "440002d4-28a6-4e11-b188-1921f660e282" (UID: "440002d4-28a6-4e11-b188-1921f660e282"). InnerVolumeSpecName "kube-api-access-ckdst". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.855014 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/440002d4-28a6-4e11-b188-1921f660e282-config" (OuterVolumeSpecName: "config") pod "440002d4-28a6-4e11-b188-1921f660e282" (UID: "440002d4-28a6-4e11-b188-1921f660e282"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.856512 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/440002d4-28a6-4e11-b188-1921f660e282-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "440002d4-28a6-4e11-b188-1921f660e282" (UID: "440002d4-28a6-4e11-b188-1921f660e282"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.870734 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/440002d4-28a6-4e11-b188-1921f660e282-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.870775 4874 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/440002d4-28a6-4e11-b188-1921f660e282-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.870787 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckdst\" (UniqueName: \"kubernetes.io/projected/440002d4-28a6-4e11-b188-1921f660e282-kube-api-access-ckdst\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.882926 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l9628"] Feb 17 16:28:56 crc kubenswrapper[4874]: W0217 16:28:56.901786 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod951dfd00_4b6d_4405_bef4_eac337033cb1.slice/crio-6d6ed0f37fa90c6e9926d06f2c233ad4a82341eacd9f246ac8a28f6fd5d06698 WatchSource:0}: Error finding container 6d6ed0f37fa90c6e9926d06f2c233ad4a82341eacd9f246ac8a28f6fd5d06698: Status 404 returned error can't find the container with id 
6d6ed0f37fa90c6e9926d06f2c233ad4a82341eacd9f246ac8a28f6fd5d06698 Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.914494 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/440002d4-28a6-4e11-b188-1921f660e282-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "440002d4-28a6-4e11-b188-1921f660e282" (UID: "440002d4-28a6-4e11-b188-1921f660e282"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.918600 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/440002d4-28a6-4e11-b188-1921f660e282-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "440002d4-28a6-4e11-b188-1921f660e282" (UID: "440002d4-28a6-4e11-b188-1921f660e282"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.951945 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/440002d4-28a6-4e11-b188-1921f660e282-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "440002d4-28a6-4e11-b188-1921f660e282" (UID: "440002d4-28a6-4e11-b188-1921f660e282"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.991767 4874 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/440002d4-28a6-4e11-b188-1921f660e282-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.991795 4874 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/440002d4-28a6-4e11-b188-1921f660e282-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.991808 4874 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/440002d4-28a6-4e11-b188-1921f660e282-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:56 crc kubenswrapper[4874]: I0217 16:28:56.995788 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-4tg48"] Feb 17 16:28:57 crc kubenswrapper[4874]: I0217 16:28:57.340597 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-srfbf" Feb 17 16:28:57 crc kubenswrapper[4874]: I0217 16:28:57.340624 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-srfbf" event={"ID":"440002d4-28a6-4e11-b188-1921f660e282","Type":"ContainerDied","Data":"b2edc27d55280e97f3f2ccce93e4e83990d358fd63bf5a8ab12f85aedc36a92f"} Feb 17 16:28:57 crc kubenswrapper[4874]: I0217 16:28:57.340983 4874 scope.go:117] "RemoveContainer" containerID="c99e5edddda76210735031ccc2041266f5be1c827131f843824f39b0a51791ad" Feb 17 16:28:57 crc kubenswrapper[4874]: I0217 16:28:57.344643 4874 generic.go:334] "Generic (PLEG): container finished" podID="951dfd00-4b6d-4405-bef4-eac337033cb1" containerID="25fe24d7c640351c4aeb2c5adc67a3a616581c1cc9dd1eaf6d5f4e01bd44d5e8" exitCode=0 Feb 17 16:28:57 crc kubenswrapper[4874]: I0217 16:28:57.344709 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l9628" event={"ID":"951dfd00-4b6d-4405-bef4-eac337033cb1","Type":"ContainerDied","Data":"25fe24d7c640351c4aeb2c5adc67a3a616581c1cc9dd1eaf6d5f4e01bd44d5e8"} Feb 17 16:28:57 crc kubenswrapper[4874]: I0217 16:28:57.344732 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l9628" event={"ID":"951dfd00-4b6d-4405-bef4-eac337033cb1","Type":"ContainerStarted","Data":"6d6ed0f37fa90c6e9926d06f2c233ad4a82341eacd9f246ac8a28f6fd5d06698"} Feb 17 16:28:57 crc kubenswrapper[4874]: I0217 16:28:57.350609 4874 generic.go:334] "Generic (PLEG): container finished" podID="e1e1acdf-f464-4e6a-bfac-4109880de91a" containerID="6c9cbe3ea5aa29de78f8afc7e25cf35409391e9634c60b6f5caafdf3366c2e02" exitCode=0 Feb 17 16:28:57 crc kubenswrapper[4874]: I0217 16:28:57.350647 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-4tg48" 
event={"ID":"e1e1acdf-f464-4e6a-bfac-4109880de91a","Type":"ContainerDied","Data":"6c9cbe3ea5aa29de78f8afc7e25cf35409391e9634c60b6f5caafdf3366c2e02"} Feb 17 16:28:57 crc kubenswrapper[4874]: I0217 16:28:57.350672 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-4tg48" event={"ID":"e1e1acdf-f464-4e6a-bfac-4109880de91a","Type":"ContainerStarted","Data":"57ada178a83a548d305324e79216b1e9b26d722621003f63cc6adfee532b8beb"} Feb 17 16:28:57 crc kubenswrapper[4874]: I0217 16:28:57.381028 4874 scope.go:117] "RemoveContainer" containerID="03e91dd6c266c94fcd08974d37801ad2931dd121d721ad0f8a3ff60bc09cc5f8" Feb 17 16:28:57 crc kubenswrapper[4874]: I0217 16:28:57.423390 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-srfbf"] Feb 17 16:28:57 crc kubenswrapper[4874]: I0217 16:28:57.435242 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-srfbf"] Feb 17 16:28:58 crc kubenswrapper[4874]: I0217 16:28:58.364017 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l9628" event={"ID":"951dfd00-4b6d-4405-bef4-eac337033cb1","Type":"ContainerStarted","Data":"2057ddc01ed7e5e8868d6398661863ae13e9a9f6abecfad034ad86031b2161b8"} Feb 17 16:28:58 crc kubenswrapper[4874]: I0217 16:28:58.366720 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-4tg48" event={"ID":"e1e1acdf-f464-4e6a-bfac-4109880de91a","Type":"ContainerStarted","Data":"7be0d2d9227cd52ce702284f0be4961aaa1950227df10d43921c5bd0d110791f"} Feb 17 16:28:58 crc kubenswrapper[4874]: I0217 16:28:58.366906 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d75f767dc-4tg48" Feb 17 16:28:58 crc kubenswrapper[4874]: I0217 16:28:58.414413 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d75f767dc-4tg48" podStartSLOduration=3.414387908 
podStartE2EDuration="3.414387908s" podCreationTimestamp="2026-02-17 16:28:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:28:58.405903098 +0000 UTC m=+1548.700291679" watchObservedRunningTime="2026-02-17 16:28:58.414387908 +0000 UTC m=+1548.708776499" Feb 17 16:28:58 crc kubenswrapper[4874]: I0217 16:28:58.477921 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="440002d4-28a6-4e11-b188-1921f660e282" path="/var/lib/kubelet/pods/440002d4-28a6-4e11-b188-1921f660e282/volumes" Feb 17 16:28:59 crc kubenswrapper[4874]: I0217 16:28:59.383114 4874 generic.go:334] "Generic (PLEG): container finished" podID="951dfd00-4b6d-4405-bef4-eac337033cb1" containerID="2057ddc01ed7e5e8868d6398661863ae13e9a9f6abecfad034ad86031b2161b8" exitCode=0 Feb 17 16:28:59 crc kubenswrapper[4874]: I0217 16:28:59.383202 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l9628" event={"ID":"951dfd00-4b6d-4405-bef4-eac337033cb1","Type":"ContainerDied","Data":"2057ddc01ed7e5e8868d6398661863ae13e9a9f6abecfad034ad86031b2161b8"} Feb 17 16:29:00 crc kubenswrapper[4874]: I0217 16:29:00.130148 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tsb7r" podUID="5bce9f58-6b2c-4af7-8c50-374e27e96f5c" containerName="registry-server" probeResult="failure" output=< Feb 17 16:29:00 crc kubenswrapper[4874]: timeout: failed to connect service ":50051" within 1s Feb 17 16:29:00 crc kubenswrapper[4874]: > Feb 17 16:29:00 crc kubenswrapper[4874]: I0217 16:29:00.394432 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l9628" event={"ID":"951dfd00-4b6d-4405-bef4-eac337033cb1","Type":"ContainerStarted","Data":"e16b69389bcfff0dc0e581f22f5bbda99cdf998f8c79c089fb09a2f7c47c24f1"} Feb 17 16:29:00 crc kubenswrapper[4874]: I0217 16:29:00.419658 4874 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l9628" podStartSLOduration=2.9851067799999997 podStartE2EDuration="5.419640373s" podCreationTimestamp="2026-02-17 16:28:55 +0000 UTC" firstStartedPulling="2026-02-17 16:28:57.34779728 +0000 UTC m=+1547.642185861" lastFinishedPulling="2026-02-17 16:28:59.782330893 +0000 UTC m=+1550.076719454" observedRunningTime="2026-02-17 16:29:00.409802891 +0000 UTC m=+1550.704191452" watchObservedRunningTime="2026-02-17 16:29:00.419640373 +0000 UTC m=+1550.714028934" Feb 17 16:29:06 crc kubenswrapper[4874]: I0217 16:29:06.043481 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l9628" Feb 17 16:29:06 crc kubenswrapper[4874]: I0217 16:29:06.044147 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l9628" Feb 17 16:29:06 crc kubenswrapper[4874]: I0217 16:29:06.101457 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l9628" Feb 17 16:29:06 crc kubenswrapper[4874]: I0217 16:29:06.345382 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d75f767dc-4tg48" Feb 17 16:29:06 crc kubenswrapper[4874]: I0217 16:29:06.420065 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-rht8r"] Feb 17 16:29:06 crc kubenswrapper[4874]: I0217 16:29:06.423257 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b75489c6f-rht8r" podUID="dbcbc20c-0642-45c3-a518-94127296de34" containerName="dnsmasq-dns" containerID="cri-o://0ce222cd544c031d56e02a14b9fed7da86839940eb4e89f0d84a9845ce0ba7f9" gracePeriod=10 Feb 17 16:29:06 crc kubenswrapper[4874]: I0217 16:29:06.554995 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-l9628" Feb 17 16:29:06 crc kubenswrapper[4874]: E0217 16:29:06.610899 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:29:06 crc kubenswrapper[4874]: E0217 16:29:06.610956 4874 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:29:06 crc kubenswrapper[4874]: E0217 16:29:06.611115 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtgnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-ddhb8_openstack(122736d5-78f5-42dc-b6ab-343724bac19d): ErrImagePull: initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:29:06 crc kubenswrapper[4874]: E0217 16:29:06.612334 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:29:06 crc kubenswrapper[4874]: I0217 16:29:06.621470 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l9628"] Feb 17 16:29:07 crc kubenswrapper[4874]: I0217 16:29:07.111849 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-rht8r" Feb 17 16:29:07 crc kubenswrapper[4874]: I0217 16:29:07.205894 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbcbc20c-0642-45c3-a518-94127296de34-config\") pod \"dbcbc20c-0642-45c3-a518-94127296de34\" (UID: \"dbcbc20c-0642-45c3-a518-94127296de34\") " Feb 17 16:29:07 crc kubenswrapper[4874]: I0217 16:29:07.206039 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnjsn\" (UniqueName: \"kubernetes.io/projected/dbcbc20c-0642-45c3-a518-94127296de34-kube-api-access-bnjsn\") pod \"dbcbc20c-0642-45c3-a518-94127296de34\" (UID: \"dbcbc20c-0642-45c3-a518-94127296de34\") " Feb 17 16:29:07 crc kubenswrapper[4874]: I0217 16:29:07.206587 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dbcbc20c-0642-45c3-a518-94127296de34-ovsdbserver-sb\") pod \"dbcbc20c-0642-45c3-a518-94127296de34\" (UID: \"dbcbc20c-0642-45c3-a518-94127296de34\") " Feb 17 16:29:07 crc kubenswrapper[4874]: I0217 16:29:07.206653 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dbcbc20c-0642-45c3-a518-94127296de34-ovsdbserver-nb\") pod \"dbcbc20c-0642-45c3-a518-94127296de34\" (UID: \"dbcbc20c-0642-45c3-a518-94127296de34\") " Feb 17 16:29:07 crc kubenswrapper[4874]: I0217 16:29:07.206676 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbcbc20c-0642-45c3-a518-94127296de34-dns-svc\") pod \"dbcbc20c-0642-45c3-a518-94127296de34\" (UID: \"dbcbc20c-0642-45c3-a518-94127296de34\") " Feb 17 16:29:07 crc kubenswrapper[4874]: I0217 16:29:07.206861 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/configmap/dbcbc20c-0642-45c3-a518-94127296de34-openstack-edpm-ipam\") pod \"dbcbc20c-0642-45c3-a518-94127296de34\" (UID: \"dbcbc20c-0642-45c3-a518-94127296de34\") " Feb 17 16:29:07 crc kubenswrapper[4874]: I0217 16:29:07.206911 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dbcbc20c-0642-45c3-a518-94127296de34-dns-swift-storage-0\") pod \"dbcbc20c-0642-45c3-a518-94127296de34\" (UID: \"dbcbc20c-0642-45c3-a518-94127296de34\") " Feb 17 16:29:07 crc kubenswrapper[4874]: I0217 16:29:07.214901 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbcbc20c-0642-45c3-a518-94127296de34-kube-api-access-bnjsn" (OuterVolumeSpecName: "kube-api-access-bnjsn") pod "dbcbc20c-0642-45c3-a518-94127296de34" (UID: "dbcbc20c-0642-45c3-a518-94127296de34"). InnerVolumeSpecName "kube-api-access-bnjsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:29:07 crc kubenswrapper[4874]: I0217 16:29:07.297393 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbcbc20c-0642-45c3-a518-94127296de34-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dbcbc20c-0642-45c3-a518-94127296de34" (UID: "dbcbc20c-0642-45c3-a518-94127296de34"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:29:07 crc kubenswrapper[4874]: I0217 16:29:07.300390 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbcbc20c-0642-45c3-a518-94127296de34-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dbcbc20c-0642-45c3-a518-94127296de34" (UID: "dbcbc20c-0642-45c3-a518-94127296de34"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:29:07 crc kubenswrapper[4874]: I0217 16:29:07.301221 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbcbc20c-0642-45c3-a518-94127296de34-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "dbcbc20c-0642-45c3-a518-94127296de34" (UID: "dbcbc20c-0642-45c3-a518-94127296de34"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:29:07 crc kubenswrapper[4874]: I0217 16:29:07.310427 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnjsn\" (UniqueName: \"kubernetes.io/projected/dbcbc20c-0642-45c3-a518-94127296de34-kube-api-access-bnjsn\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:07 crc kubenswrapper[4874]: I0217 16:29:07.310456 4874 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dbcbc20c-0642-45c3-a518-94127296de34-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:07 crc kubenswrapper[4874]: I0217 16:29:07.310466 4874 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dbcbc20c-0642-45c3-a518-94127296de34-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:07 crc kubenswrapper[4874]: I0217 16:29:07.310473 4874 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/dbcbc20c-0642-45c3-a518-94127296de34-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:07 crc kubenswrapper[4874]: I0217 16:29:07.311663 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbcbc20c-0642-45c3-a518-94127296de34-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dbcbc20c-0642-45c3-a518-94127296de34" (UID: "dbcbc20c-0642-45c3-a518-94127296de34"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:29:07 crc kubenswrapper[4874]: I0217 16:29:07.311849 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbcbc20c-0642-45c3-a518-94127296de34-config" (OuterVolumeSpecName: "config") pod "dbcbc20c-0642-45c3-a518-94127296de34" (UID: "dbcbc20c-0642-45c3-a518-94127296de34"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:29:07 crc kubenswrapper[4874]: I0217 16:29:07.336085 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbcbc20c-0642-45c3-a518-94127296de34-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "dbcbc20c-0642-45c3-a518-94127296de34" (UID: "dbcbc20c-0642-45c3-a518-94127296de34"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:29:07 crc kubenswrapper[4874]: I0217 16:29:07.412665 4874 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbcbc20c-0642-45c3-a518-94127296de34-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:07 crc kubenswrapper[4874]: I0217 16:29:07.412701 4874 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dbcbc20c-0642-45c3-a518-94127296de34-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:07 crc kubenswrapper[4874]: I0217 16:29:07.412734 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbcbc20c-0642-45c3-a518-94127296de34-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:07 crc kubenswrapper[4874]: I0217 16:29:07.493391 4874 generic.go:334] "Generic (PLEG): container finished" podID="dbcbc20c-0642-45c3-a518-94127296de34" containerID="0ce222cd544c031d56e02a14b9fed7da86839940eb4e89f0d84a9845ce0ba7f9" exitCode=0 Feb 17 16:29:07 crc 
kubenswrapper[4874]: I0217 16:29:07.493434 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-rht8r" Feb 17 16:29:07 crc kubenswrapper[4874]: I0217 16:29:07.493435 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-rht8r" event={"ID":"dbcbc20c-0642-45c3-a518-94127296de34","Type":"ContainerDied","Data":"0ce222cd544c031d56e02a14b9fed7da86839940eb4e89f0d84a9845ce0ba7f9"} Feb 17 16:29:07 crc kubenswrapper[4874]: I0217 16:29:07.493496 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-rht8r" event={"ID":"dbcbc20c-0642-45c3-a518-94127296de34","Type":"ContainerDied","Data":"f801c5bb45a4fe442a82d6a3802e819243fdbb6dc5a6ab7aabe58e5e92dcc14b"} Feb 17 16:29:07 crc kubenswrapper[4874]: I0217 16:29:07.493518 4874 scope.go:117] "RemoveContainer" containerID="0ce222cd544c031d56e02a14b9fed7da86839940eb4e89f0d84a9845ce0ba7f9" Feb 17 16:29:07 crc kubenswrapper[4874]: I0217 16:29:07.521851 4874 scope.go:117] "RemoveContainer" containerID="bf272659b26d6d7a01930a7a4fc3022b39020c81a9626441eed0a21056412ad2" Feb 17 16:29:07 crc kubenswrapper[4874]: I0217 16:29:07.534525 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-rht8r"] Feb 17 16:29:07 crc kubenswrapper[4874]: I0217 16:29:07.549415 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-rht8r"] Feb 17 16:29:07 crc kubenswrapper[4874]: I0217 16:29:07.564368 4874 scope.go:117] "RemoveContainer" containerID="0ce222cd544c031d56e02a14b9fed7da86839940eb4e89f0d84a9845ce0ba7f9" Feb 17 16:29:07 crc kubenswrapper[4874]: E0217 16:29:07.564685 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ce222cd544c031d56e02a14b9fed7da86839940eb4e89f0d84a9845ce0ba7f9\": container with ID starting with 
0ce222cd544c031d56e02a14b9fed7da86839940eb4e89f0d84a9845ce0ba7f9 not found: ID does not exist" containerID="0ce222cd544c031d56e02a14b9fed7da86839940eb4e89f0d84a9845ce0ba7f9" Feb 17 16:29:07 crc kubenswrapper[4874]: I0217 16:29:07.564719 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ce222cd544c031d56e02a14b9fed7da86839940eb4e89f0d84a9845ce0ba7f9"} err="failed to get container status \"0ce222cd544c031d56e02a14b9fed7da86839940eb4e89f0d84a9845ce0ba7f9\": rpc error: code = NotFound desc = could not find container \"0ce222cd544c031d56e02a14b9fed7da86839940eb4e89f0d84a9845ce0ba7f9\": container with ID starting with 0ce222cd544c031d56e02a14b9fed7da86839940eb4e89f0d84a9845ce0ba7f9 not found: ID does not exist" Feb 17 16:29:07 crc kubenswrapper[4874]: I0217 16:29:07.564740 4874 scope.go:117] "RemoveContainer" containerID="bf272659b26d6d7a01930a7a4fc3022b39020c81a9626441eed0a21056412ad2" Feb 17 16:29:07 crc kubenswrapper[4874]: E0217 16:29:07.565015 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf272659b26d6d7a01930a7a4fc3022b39020c81a9626441eed0a21056412ad2\": container with ID starting with bf272659b26d6d7a01930a7a4fc3022b39020c81a9626441eed0a21056412ad2 not found: ID does not exist" containerID="bf272659b26d6d7a01930a7a4fc3022b39020c81a9626441eed0a21056412ad2" Feb 17 16:29:07 crc kubenswrapper[4874]: I0217 16:29:07.565246 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf272659b26d6d7a01930a7a4fc3022b39020c81a9626441eed0a21056412ad2"} err="failed to get container status \"bf272659b26d6d7a01930a7a4fc3022b39020c81a9626441eed0a21056412ad2\": rpc error: code = NotFound desc = could not find container \"bf272659b26d6d7a01930a7a4fc3022b39020c81a9626441eed0a21056412ad2\": container with ID starting with bf272659b26d6d7a01930a7a4fc3022b39020c81a9626441eed0a21056412ad2 not found: ID does not 
exist" Feb 17 16:29:08 crc kubenswrapper[4874]: I0217 16:29:08.478336 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbcbc20c-0642-45c3-a518-94127296de34" path="/var/lib/kubelet/pods/dbcbc20c-0642-45c3-a518-94127296de34/volumes" Feb 17 16:29:08 crc kubenswrapper[4874]: I0217 16:29:08.509876 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l9628" podUID="951dfd00-4b6d-4405-bef4-eac337033cb1" containerName="registry-server" containerID="cri-o://e16b69389bcfff0dc0e581f22f5bbda99cdf998f8c79c089fb09a2f7c47c24f1" gracePeriod=2 Feb 17 16:29:09 crc kubenswrapper[4874]: I0217 16:29:09.128697 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tsb7r" Feb 17 16:29:09 crc kubenswrapper[4874]: I0217 16:29:09.141297 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l9628" Feb 17 16:29:09 crc kubenswrapper[4874]: I0217 16:29:09.197473 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tsb7r" Feb 17 16:29:09 crc kubenswrapper[4874]: I0217 16:29:09.260526 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/951dfd00-4b6d-4405-bef4-eac337033cb1-catalog-content\") pod \"951dfd00-4b6d-4405-bef4-eac337033cb1\" (UID: \"951dfd00-4b6d-4405-bef4-eac337033cb1\") " Feb 17 16:29:09 crc kubenswrapper[4874]: I0217 16:29:09.260897 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln8hf\" (UniqueName: \"kubernetes.io/projected/951dfd00-4b6d-4405-bef4-eac337033cb1-kube-api-access-ln8hf\") pod \"951dfd00-4b6d-4405-bef4-eac337033cb1\" (UID: \"951dfd00-4b6d-4405-bef4-eac337033cb1\") " Feb 17 16:29:09 crc kubenswrapper[4874]: I0217 16:29:09.260922 4874 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/951dfd00-4b6d-4405-bef4-eac337033cb1-utilities\") pod \"951dfd00-4b6d-4405-bef4-eac337033cb1\" (UID: \"951dfd00-4b6d-4405-bef4-eac337033cb1\") " Feb 17 16:29:09 crc kubenswrapper[4874]: I0217 16:29:09.262063 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/951dfd00-4b6d-4405-bef4-eac337033cb1-utilities" (OuterVolumeSpecName: "utilities") pod "951dfd00-4b6d-4405-bef4-eac337033cb1" (UID: "951dfd00-4b6d-4405-bef4-eac337033cb1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:29:09 crc kubenswrapper[4874]: I0217 16:29:09.269190 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/951dfd00-4b6d-4405-bef4-eac337033cb1-kube-api-access-ln8hf" (OuterVolumeSpecName: "kube-api-access-ln8hf") pod "951dfd00-4b6d-4405-bef4-eac337033cb1" (UID: "951dfd00-4b6d-4405-bef4-eac337033cb1"). InnerVolumeSpecName "kube-api-access-ln8hf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:29:09 crc kubenswrapper[4874]: I0217 16:29:09.291787 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/951dfd00-4b6d-4405-bef4-eac337033cb1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "951dfd00-4b6d-4405-bef4-eac337033cb1" (UID: "951dfd00-4b6d-4405-bef4-eac337033cb1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:29:09 crc kubenswrapper[4874]: I0217 16:29:09.364543 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln8hf\" (UniqueName: \"kubernetes.io/projected/951dfd00-4b6d-4405-bef4-eac337033cb1-kube-api-access-ln8hf\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:09 crc kubenswrapper[4874]: I0217 16:29:09.364831 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/951dfd00-4b6d-4405-bef4-eac337033cb1-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:09 crc kubenswrapper[4874]: I0217 16:29:09.364937 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/951dfd00-4b6d-4405-bef4-eac337033cb1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:09 crc kubenswrapper[4874]: I0217 16:29:09.528642 4874 generic.go:334] "Generic (PLEG): container finished" podID="951dfd00-4b6d-4405-bef4-eac337033cb1" containerID="e16b69389bcfff0dc0e581f22f5bbda99cdf998f8c79c089fb09a2f7c47c24f1" exitCode=0 Feb 17 16:29:09 crc kubenswrapper[4874]: I0217 16:29:09.528727 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l9628" event={"ID":"951dfd00-4b6d-4405-bef4-eac337033cb1","Type":"ContainerDied","Data":"e16b69389bcfff0dc0e581f22f5bbda99cdf998f8c79c089fb09a2f7c47c24f1"} Feb 17 16:29:09 crc kubenswrapper[4874]: I0217 16:29:09.528810 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l9628" event={"ID":"951dfd00-4b6d-4405-bef4-eac337033cb1","Type":"ContainerDied","Data":"6d6ed0f37fa90c6e9926d06f2c233ad4a82341eacd9f246ac8a28f6fd5d06698"} Feb 17 16:29:09 crc kubenswrapper[4874]: I0217 16:29:09.528844 4874 scope.go:117] "RemoveContainer" containerID="e16b69389bcfff0dc0e581f22f5bbda99cdf998f8c79c089fb09a2f7c47c24f1" Feb 17 16:29:09 crc kubenswrapper[4874]: I0217 
16:29:09.530377 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l9628" Feb 17 16:29:09 crc kubenswrapper[4874]: I0217 16:29:09.572293 4874 scope.go:117] "RemoveContainer" containerID="2057ddc01ed7e5e8868d6398661863ae13e9a9f6abecfad034ad86031b2161b8" Feb 17 16:29:09 crc kubenswrapper[4874]: I0217 16:29:09.610565 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l9628"] Feb 17 16:29:09 crc kubenswrapper[4874]: I0217 16:29:09.625673 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l9628"] Feb 17 16:29:09 crc kubenswrapper[4874]: I0217 16:29:09.635500 4874 scope.go:117] "RemoveContainer" containerID="25fe24d7c640351c4aeb2c5adc67a3a616581c1cc9dd1eaf6d5f4e01bd44d5e8" Feb 17 16:29:09 crc kubenswrapper[4874]: I0217 16:29:09.705548 4874 scope.go:117] "RemoveContainer" containerID="e16b69389bcfff0dc0e581f22f5bbda99cdf998f8c79c089fb09a2f7c47c24f1" Feb 17 16:29:09 crc kubenswrapper[4874]: E0217 16:29:09.706500 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e16b69389bcfff0dc0e581f22f5bbda99cdf998f8c79c089fb09a2f7c47c24f1\": container with ID starting with e16b69389bcfff0dc0e581f22f5bbda99cdf998f8c79c089fb09a2f7c47c24f1 not found: ID does not exist" containerID="e16b69389bcfff0dc0e581f22f5bbda99cdf998f8c79c089fb09a2f7c47c24f1" Feb 17 16:29:09 crc kubenswrapper[4874]: I0217 16:29:09.706563 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e16b69389bcfff0dc0e581f22f5bbda99cdf998f8c79c089fb09a2f7c47c24f1"} err="failed to get container status \"e16b69389bcfff0dc0e581f22f5bbda99cdf998f8c79c089fb09a2f7c47c24f1\": rpc error: code = NotFound desc = could not find container \"e16b69389bcfff0dc0e581f22f5bbda99cdf998f8c79c089fb09a2f7c47c24f1\": container with ID starting with 
e16b69389bcfff0dc0e581f22f5bbda99cdf998f8c79c089fb09a2f7c47c24f1 not found: ID does not exist" Feb 17 16:29:09 crc kubenswrapper[4874]: I0217 16:29:09.706601 4874 scope.go:117] "RemoveContainer" containerID="2057ddc01ed7e5e8868d6398661863ae13e9a9f6abecfad034ad86031b2161b8" Feb 17 16:29:09 crc kubenswrapper[4874]: E0217 16:29:09.707148 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2057ddc01ed7e5e8868d6398661863ae13e9a9f6abecfad034ad86031b2161b8\": container with ID starting with 2057ddc01ed7e5e8868d6398661863ae13e9a9f6abecfad034ad86031b2161b8 not found: ID does not exist" containerID="2057ddc01ed7e5e8868d6398661863ae13e9a9f6abecfad034ad86031b2161b8" Feb 17 16:29:09 crc kubenswrapper[4874]: I0217 16:29:09.707198 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2057ddc01ed7e5e8868d6398661863ae13e9a9f6abecfad034ad86031b2161b8"} err="failed to get container status \"2057ddc01ed7e5e8868d6398661863ae13e9a9f6abecfad034ad86031b2161b8\": rpc error: code = NotFound desc = could not find container \"2057ddc01ed7e5e8868d6398661863ae13e9a9f6abecfad034ad86031b2161b8\": container with ID starting with 2057ddc01ed7e5e8868d6398661863ae13e9a9f6abecfad034ad86031b2161b8 not found: ID does not exist" Feb 17 16:29:09 crc kubenswrapper[4874]: I0217 16:29:09.707224 4874 scope.go:117] "RemoveContainer" containerID="25fe24d7c640351c4aeb2c5adc67a3a616581c1cc9dd1eaf6d5f4e01bd44d5e8" Feb 17 16:29:09 crc kubenswrapper[4874]: E0217 16:29:09.707604 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25fe24d7c640351c4aeb2c5adc67a3a616581c1cc9dd1eaf6d5f4e01bd44d5e8\": container with ID starting with 25fe24d7c640351c4aeb2c5adc67a3a616581c1cc9dd1eaf6d5f4e01bd44d5e8 not found: ID does not exist" containerID="25fe24d7c640351c4aeb2c5adc67a3a616581c1cc9dd1eaf6d5f4e01bd44d5e8" Feb 17 16:29:09 crc 
kubenswrapper[4874]: I0217 16:29:09.707666 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25fe24d7c640351c4aeb2c5adc67a3a616581c1cc9dd1eaf6d5f4e01bd44d5e8"} err="failed to get container status \"25fe24d7c640351c4aeb2c5adc67a3a616581c1cc9dd1eaf6d5f4e01bd44d5e8\": rpc error: code = NotFound desc = could not find container \"25fe24d7c640351c4aeb2c5adc67a3a616581c1cc9dd1eaf6d5f4e01bd44d5e8\": container with ID starting with 25fe24d7c640351c4aeb2c5adc67a3a616581c1cc9dd1eaf6d5f4e01bd44d5e8 not found: ID does not exist" Feb 17 16:29:10 crc kubenswrapper[4874]: I0217 16:29:10.148032 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tsb7r"] Feb 17 16:29:10 crc kubenswrapper[4874]: E0217 16:29:10.494149 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:29:10 crc kubenswrapper[4874]: I0217 16:29:10.500586 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="951dfd00-4b6d-4405-bef4-eac337033cb1" path="/var/lib/kubelet/pods/951dfd00-4b6d-4405-bef4-eac337033cb1/volumes" Feb 17 16:29:10 crc kubenswrapper[4874]: I0217 16:29:10.546951 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tsb7r" podUID="5bce9f58-6b2c-4af7-8c50-374e27e96f5c" containerName="registry-server" containerID="cri-o://2a5c720f969dd60f144cbc074db653f223bef6616d01b0ec62ddf7abdd3a573d" gracePeriod=2 Feb 17 16:29:11 crc kubenswrapper[4874]: I0217 16:29:11.228847 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tsb7r" Feb 17 16:29:11 crc kubenswrapper[4874]: I0217 16:29:11.326587 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bce9f58-6b2c-4af7-8c50-374e27e96f5c-catalog-content\") pod \"5bce9f58-6b2c-4af7-8c50-374e27e96f5c\" (UID: \"5bce9f58-6b2c-4af7-8c50-374e27e96f5c\") " Feb 17 16:29:11 crc kubenswrapper[4874]: I0217 16:29:11.326745 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5lsn\" (UniqueName: \"kubernetes.io/projected/5bce9f58-6b2c-4af7-8c50-374e27e96f5c-kube-api-access-n5lsn\") pod \"5bce9f58-6b2c-4af7-8c50-374e27e96f5c\" (UID: \"5bce9f58-6b2c-4af7-8c50-374e27e96f5c\") " Feb 17 16:29:11 crc kubenswrapper[4874]: I0217 16:29:11.326911 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bce9f58-6b2c-4af7-8c50-374e27e96f5c-utilities\") pod \"5bce9f58-6b2c-4af7-8c50-374e27e96f5c\" (UID: \"5bce9f58-6b2c-4af7-8c50-374e27e96f5c\") " Feb 17 16:29:11 crc kubenswrapper[4874]: I0217 16:29:11.327683 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bce9f58-6b2c-4af7-8c50-374e27e96f5c-utilities" (OuterVolumeSpecName: "utilities") pod "5bce9f58-6b2c-4af7-8c50-374e27e96f5c" (UID: "5bce9f58-6b2c-4af7-8c50-374e27e96f5c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:29:11 crc kubenswrapper[4874]: I0217 16:29:11.332601 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bce9f58-6b2c-4af7-8c50-374e27e96f5c-kube-api-access-n5lsn" (OuterVolumeSpecName: "kube-api-access-n5lsn") pod "5bce9f58-6b2c-4af7-8c50-374e27e96f5c" (UID: "5bce9f58-6b2c-4af7-8c50-374e27e96f5c"). InnerVolumeSpecName "kube-api-access-n5lsn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:29:11 crc kubenswrapper[4874]: I0217 16:29:11.432996 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5lsn\" (UniqueName: \"kubernetes.io/projected/5bce9f58-6b2c-4af7-8c50-374e27e96f5c-kube-api-access-n5lsn\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:11 crc kubenswrapper[4874]: I0217 16:29:11.433035 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bce9f58-6b2c-4af7-8c50-374e27e96f5c-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:11 crc kubenswrapper[4874]: I0217 16:29:11.484170 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bce9f58-6b2c-4af7-8c50-374e27e96f5c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5bce9f58-6b2c-4af7-8c50-374e27e96f5c" (UID: "5bce9f58-6b2c-4af7-8c50-374e27e96f5c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:29:11 crc kubenswrapper[4874]: I0217 16:29:11.535806 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bce9f58-6b2c-4af7-8c50-374e27e96f5c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:11 crc kubenswrapper[4874]: I0217 16:29:11.559381 4874 generic.go:334] "Generic (PLEG): container finished" podID="5bce9f58-6b2c-4af7-8c50-374e27e96f5c" containerID="2a5c720f969dd60f144cbc074db653f223bef6616d01b0ec62ddf7abdd3a573d" exitCode=0 Feb 17 16:29:11 crc kubenswrapper[4874]: I0217 16:29:11.559659 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tsb7r" Feb 17 16:29:11 crc kubenswrapper[4874]: I0217 16:29:11.559710 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsb7r" event={"ID":"5bce9f58-6b2c-4af7-8c50-374e27e96f5c","Type":"ContainerDied","Data":"2a5c720f969dd60f144cbc074db653f223bef6616d01b0ec62ddf7abdd3a573d"} Feb 17 16:29:11 crc kubenswrapper[4874]: I0217 16:29:11.560664 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsb7r" event={"ID":"5bce9f58-6b2c-4af7-8c50-374e27e96f5c","Type":"ContainerDied","Data":"699522bd936221ea1bb0cadd2efea75b7046be518f9969e4ffaa0f94e4a1a76b"} Feb 17 16:29:11 crc kubenswrapper[4874]: I0217 16:29:11.560691 4874 scope.go:117] "RemoveContainer" containerID="2a5c720f969dd60f144cbc074db653f223bef6616d01b0ec62ddf7abdd3a573d" Feb 17 16:29:11 crc kubenswrapper[4874]: I0217 16:29:11.584375 4874 scope.go:117] "RemoveContainer" containerID="4c423ab0e60efc46f010f499e4bf3af18c6ad402909a86538d8148ed979f3cd7" Feb 17 16:29:11 crc kubenswrapper[4874]: I0217 16:29:11.619569 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tsb7r"] Feb 17 16:29:11 crc kubenswrapper[4874]: I0217 16:29:11.622927 4874 scope.go:117] "RemoveContainer" containerID="b4e34b0a39c83f03536a7104be72d6e199a2078be5493aa994342fa4743feebb" Feb 17 16:29:11 crc kubenswrapper[4874]: I0217 16:29:11.638641 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tsb7r"] Feb 17 16:29:11 crc kubenswrapper[4874]: I0217 16:29:11.675738 4874 scope.go:117] "RemoveContainer" containerID="2a5c720f969dd60f144cbc074db653f223bef6616d01b0ec62ddf7abdd3a573d" Feb 17 16:29:11 crc kubenswrapper[4874]: E0217 16:29:11.676216 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2a5c720f969dd60f144cbc074db653f223bef6616d01b0ec62ddf7abdd3a573d\": container with ID starting with 2a5c720f969dd60f144cbc074db653f223bef6616d01b0ec62ddf7abdd3a573d not found: ID does not exist" containerID="2a5c720f969dd60f144cbc074db653f223bef6616d01b0ec62ddf7abdd3a573d" Feb 17 16:29:11 crc kubenswrapper[4874]: I0217 16:29:11.676266 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a5c720f969dd60f144cbc074db653f223bef6616d01b0ec62ddf7abdd3a573d"} err="failed to get container status \"2a5c720f969dd60f144cbc074db653f223bef6616d01b0ec62ddf7abdd3a573d\": rpc error: code = NotFound desc = could not find container \"2a5c720f969dd60f144cbc074db653f223bef6616d01b0ec62ddf7abdd3a573d\": container with ID starting with 2a5c720f969dd60f144cbc074db653f223bef6616d01b0ec62ddf7abdd3a573d not found: ID does not exist" Feb 17 16:29:11 crc kubenswrapper[4874]: I0217 16:29:11.676300 4874 scope.go:117] "RemoveContainer" containerID="4c423ab0e60efc46f010f499e4bf3af18c6ad402909a86538d8148ed979f3cd7" Feb 17 16:29:11 crc kubenswrapper[4874]: E0217 16:29:11.676660 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c423ab0e60efc46f010f499e4bf3af18c6ad402909a86538d8148ed979f3cd7\": container with ID starting with 4c423ab0e60efc46f010f499e4bf3af18c6ad402909a86538d8148ed979f3cd7 not found: ID does not exist" containerID="4c423ab0e60efc46f010f499e4bf3af18c6ad402909a86538d8148ed979f3cd7" Feb 17 16:29:11 crc kubenswrapper[4874]: I0217 16:29:11.676727 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c423ab0e60efc46f010f499e4bf3af18c6ad402909a86538d8148ed979f3cd7"} err="failed to get container status \"4c423ab0e60efc46f010f499e4bf3af18c6ad402909a86538d8148ed979f3cd7\": rpc error: code = NotFound desc = could not find container \"4c423ab0e60efc46f010f499e4bf3af18c6ad402909a86538d8148ed979f3cd7\": container with ID 
starting with 4c423ab0e60efc46f010f499e4bf3af18c6ad402909a86538d8148ed979f3cd7 not found: ID does not exist" Feb 17 16:29:11 crc kubenswrapper[4874]: I0217 16:29:11.676764 4874 scope.go:117] "RemoveContainer" containerID="b4e34b0a39c83f03536a7104be72d6e199a2078be5493aa994342fa4743feebb" Feb 17 16:29:11 crc kubenswrapper[4874]: E0217 16:29:11.677043 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4e34b0a39c83f03536a7104be72d6e199a2078be5493aa994342fa4743feebb\": container with ID starting with b4e34b0a39c83f03536a7104be72d6e199a2078be5493aa994342fa4743feebb not found: ID does not exist" containerID="b4e34b0a39c83f03536a7104be72d6e199a2078be5493aa994342fa4743feebb" Feb 17 16:29:11 crc kubenswrapper[4874]: I0217 16:29:11.677112 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4e34b0a39c83f03536a7104be72d6e199a2078be5493aa994342fa4743feebb"} err="failed to get container status \"b4e34b0a39c83f03536a7104be72d6e199a2078be5493aa994342fa4743feebb\": rpc error: code = NotFound desc = could not find container \"b4e34b0a39c83f03536a7104be72d6e199a2078be5493aa994342fa4743feebb\": container with ID starting with b4e34b0a39c83f03536a7104be72d6e199a2078be5493aa994342fa4743feebb not found: ID does not exist" Feb 17 16:29:12 crc kubenswrapper[4874]: I0217 16:29:12.486773 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bce9f58-6b2c-4af7-8c50-374e27e96f5c" path="/var/lib/kubelet/pods/5bce9f58-6b2c-4af7-8c50-374e27e96f5c/volumes" Feb 17 16:29:18 crc kubenswrapper[4874]: E0217 16:29:18.460740 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 
16:29:18 crc kubenswrapper[4874]: I0217 16:29:18.663325 4874 generic.go:334] "Generic (PLEG): container finished" podID="850560f1-d14c-45d2-9526-e7aa266d3427" containerID="ea7e9d03619473f3e9d9dabb419f2ee4b6969876ac0d9729d3c66f759d848789" exitCode=0 Feb 17 16:29:18 crc kubenswrapper[4874]: I0217 16:29:18.663371 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"850560f1-d14c-45d2-9526-e7aa266d3427","Type":"ContainerDied","Data":"ea7e9d03619473f3e9d9dabb419f2ee4b6969876ac0d9729d3c66f759d848789"} Feb 17 16:29:18 crc kubenswrapper[4874]: I0217 16:29:18.665536 4874 generic.go:334] "Generic (PLEG): container finished" podID="efb51498-72fd-4e39-8bdd-dda0b1abe44a" containerID="dc24a7186281417bc3e7878186ae852155495c062c7b6221e59aad3397f8bd15" exitCode=0 Feb 17 16:29:18 crc kubenswrapper[4874]: I0217 16:29:18.665581 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"efb51498-72fd-4e39-8bdd-dda0b1abe44a","Type":"ContainerDied","Data":"dc24a7186281417bc3e7878186ae852155495c062c7b6221e59aad3397f8bd15"} Feb 17 16:29:19 crc kubenswrapper[4874]: I0217 16:29:19.681598 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"850560f1-d14c-45d2-9526-e7aa266d3427","Type":"ContainerStarted","Data":"d64d06a56e1222b6784c91da2775d79844f25e81800006653942a9f5a0c2e92d"} Feb 17 16:29:19 crc kubenswrapper[4874]: I0217 16:29:19.682292 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 17 16:29:19 crc kubenswrapper[4874]: I0217 16:29:19.685167 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"efb51498-72fd-4e39-8bdd-dda0b1abe44a","Type":"ContainerStarted","Data":"d535962bf36538eb362064e60f46025aea6bcabc19492e84584ddba48d771002"} Feb 17 16:29:19 crc kubenswrapper[4874]: I0217 16:29:19.685552 4874 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:29:19 crc kubenswrapper[4874]: I0217 16:29:19.720208 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=37.720188172 podStartE2EDuration="37.720188172s" podCreationTimestamp="2026-02-17 16:28:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:29:19.709314823 +0000 UTC m=+1570.003703395" watchObservedRunningTime="2026-02-17 16:29:19.720188172 +0000 UTC m=+1570.014576733" Feb 17 16:29:19 crc kubenswrapper[4874]: I0217 16:29:19.753483 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.753463573 podStartE2EDuration="36.753463573s" podCreationTimestamp="2026-02-17 16:28:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:29:19.749830513 +0000 UTC m=+1570.044219074" watchObservedRunningTime="2026-02-17 16:29:19.753463573 +0000 UTC m=+1570.047852154" Feb 17 16:29:20 crc kubenswrapper[4874]: I0217 16:29:20.727378 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr7xb"] Feb 17 16:29:20 crc kubenswrapper[4874]: E0217 16:29:20.728279 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbcbc20c-0642-45c3-a518-94127296de34" containerName="dnsmasq-dns" Feb 17 16:29:20 crc kubenswrapper[4874]: I0217 16:29:20.728304 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbcbc20c-0642-45c3-a518-94127296de34" containerName="dnsmasq-dns" Feb 17 16:29:20 crc kubenswrapper[4874]: E0217 16:29:20.728322 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="951dfd00-4b6d-4405-bef4-eac337033cb1" containerName="registry-server" Feb 17 16:29:20 crc 
kubenswrapper[4874]: I0217 16:29:20.728331 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="951dfd00-4b6d-4405-bef4-eac337033cb1" containerName="registry-server" Feb 17 16:29:20 crc kubenswrapper[4874]: E0217 16:29:20.728349 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbcbc20c-0642-45c3-a518-94127296de34" containerName="init" Feb 17 16:29:20 crc kubenswrapper[4874]: I0217 16:29:20.728357 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbcbc20c-0642-45c3-a518-94127296de34" containerName="init" Feb 17 16:29:20 crc kubenswrapper[4874]: E0217 16:29:20.728368 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="951dfd00-4b6d-4405-bef4-eac337033cb1" containerName="extract-utilities" Feb 17 16:29:20 crc kubenswrapper[4874]: I0217 16:29:20.728376 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="951dfd00-4b6d-4405-bef4-eac337033cb1" containerName="extract-utilities" Feb 17 16:29:20 crc kubenswrapper[4874]: E0217 16:29:20.728396 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="951dfd00-4b6d-4405-bef4-eac337033cb1" containerName="extract-content" Feb 17 16:29:20 crc kubenswrapper[4874]: I0217 16:29:20.728404 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="951dfd00-4b6d-4405-bef4-eac337033cb1" containerName="extract-content" Feb 17 16:29:20 crc kubenswrapper[4874]: E0217 16:29:20.728420 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bce9f58-6b2c-4af7-8c50-374e27e96f5c" containerName="registry-server" Feb 17 16:29:20 crc kubenswrapper[4874]: I0217 16:29:20.728427 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bce9f58-6b2c-4af7-8c50-374e27e96f5c" containerName="registry-server" Feb 17 16:29:20 crc kubenswrapper[4874]: E0217 16:29:20.728443 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bce9f58-6b2c-4af7-8c50-374e27e96f5c" containerName="extract-content" Feb 17 16:29:20 crc kubenswrapper[4874]: I0217 
16:29:20.728451 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bce9f58-6b2c-4af7-8c50-374e27e96f5c" containerName="extract-content" Feb 17 16:29:20 crc kubenswrapper[4874]: E0217 16:29:20.728471 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bce9f58-6b2c-4af7-8c50-374e27e96f5c" containerName="extract-utilities" Feb 17 16:29:20 crc kubenswrapper[4874]: I0217 16:29:20.728478 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bce9f58-6b2c-4af7-8c50-374e27e96f5c" containerName="extract-utilities" Feb 17 16:29:20 crc kubenswrapper[4874]: E0217 16:29:20.728492 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="440002d4-28a6-4e11-b188-1921f660e282" containerName="dnsmasq-dns" Feb 17 16:29:20 crc kubenswrapper[4874]: I0217 16:29:20.728500 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="440002d4-28a6-4e11-b188-1921f660e282" containerName="dnsmasq-dns" Feb 17 16:29:20 crc kubenswrapper[4874]: E0217 16:29:20.728525 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="440002d4-28a6-4e11-b188-1921f660e282" containerName="init" Feb 17 16:29:20 crc kubenswrapper[4874]: I0217 16:29:20.728531 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="440002d4-28a6-4e11-b188-1921f660e282" containerName="init" Feb 17 16:29:20 crc kubenswrapper[4874]: I0217 16:29:20.728798 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bce9f58-6b2c-4af7-8c50-374e27e96f5c" containerName="registry-server" Feb 17 16:29:20 crc kubenswrapper[4874]: I0217 16:29:20.728832 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbcbc20c-0642-45c3-a518-94127296de34" containerName="dnsmasq-dns" Feb 17 16:29:20 crc kubenswrapper[4874]: I0217 16:29:20.728856 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="951dfd00-4b6d-4405-bef4-eac337033cb1" containerName="registry-server" Feb 17 16:29:20 crc kubenswrapper[4874]: I0217 16:29:20.728873 4874 
memory_manager.go:354] "RemoveStaleState removing state" podUID="440002d4-28a6-4e11-b188-1921f660e282" containerName="dnsmasq-dns" Feb 17 16:29:20 crc kubenswrapper[4874]: I0217 16:29:20.729935 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr7xb" Feb 17 16:29:20 crc kubenswrapper[4874]: I0217 16:29:20.739837 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr7xb"] Feb 17 16:29:20 crc kubenswrapper[4874]: I0217 16:29:20.765657 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 16:29:20 crc kubenswrapper[4874]: I0217 16:29:20.765688 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dsqj4" Feb 17 16:29:20 crc kubenswrapper[4874]: I0217 16:29:20.765801 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 16:29:20 crc kubenswrapper[4874]: I0217 16:29:20.766091 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 16:29:20 crc kubenswrapper[4874]: I0217 16:29:20.886461 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebd0edb1-118f-426b-96ef-72db8d6c2b90-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lr7xb\" (UID: \"ebd0edb1-118f-426b-96ef-72db8d6c2b90\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr7xb" Feb 17 16:29:20 crc kubenswrapper[4874]: I0217 16:29:20.887187 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x7q7\" (UniqueName: 
\"kubernetes.io/projected/ebd0edb1-118f-426b-96ef-72db8d6c2b90-kube-api-access-7x7q7\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lr7xb\" (UID: \"ebd0edb1-118f-426b-96ef-72db8d6c2b90\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr7xb" Feb 17 16:29:20 crc kubenswrapper[4874]: I0217 16:29:20.887276 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ebd0edb1-118f-426b-96ef-72db8d6c2b90-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lr7xb\" (UID: \"ebd0edb1-118f-426b-96ef-72db8d6c2b90\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr7xb" Feb 17 16:29:20 crc kubenswrapper[4874]: I0217 16:29:20.887441 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebd0edb1-118f-426b-96ef-72db8d6c2b90-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lr7xb\" (UID: \"ebd0edb1-118f-426b-96ef-72db8d6c2b90\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr7xb" Feb 17 16:29:20 crc kubenswrapper[4874]: I0217 16:29:20.988594 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x7q7\" (UniqueName: \"kubernetes.io/projected/ebd0edb1-118f-426b-96ef-72db8d6c2b90-kube-api-access-7x7q7\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lr7xb\" (UID: \"ebd0edb1-118f-426b-96ef-72db8d6c2b90\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr7xb" Feb 17 16:29:20 crc kubenswrapper[4874]: I0217 16:29:20.988646 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ebd0edb1-118f-426b-96ef-72db8d6c2b90-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lr7xb\" (UID: 
\"ebd0edb1-118f-426b-96ef-72db8d6c2b90\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr7xb" Feb 17 16:29:20 crc kubenswrapper[4874]: I0217 16:29:20.988690 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebd0edb1-118f-426b-96ef-72db8d6c2b90-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lr7xb\" (UID: \"ebd0edb1-118f-426b-96ef-72db8d6c2b90\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr7xb" Feb 17 16:29:20 crc kubenswrapper[4874]: I0217 16:29:20.988723 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebd0edb1-118f-426b-96ef-72db8d6c2b90-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lr7xb\" (UID: \"ebd0edb1-118f-426b-96ef-72db8d6c2b90\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr7xb" Feb 17 16:29:20 crc kubenswrapper[4874]: I0217 16:29:20.994202 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ebd0edb1-118f-426b-96ef-72db8d6c2b90-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lr7xb\" (UID: \"ebd0edb1-118f-426b-96ef-72db8d6c2b90\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr7xb" Feb 17 16:29:20 crc kubenswrapper[4874]: I0217 16:29:20.994845 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebd0edb1-118f-426b-96ef-72db8d6c2b90-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lr7xb\" (UID: \"ebd0edb1-118f-426b-96ef-72db8d6c2b90\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr7xb" Feb 17 16:29:20 crc kubenswrapper[4874]: I0217 16:29:20.995761 4874 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebd0edb1-118f-426b-96ef-72db8d6c2b90-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lr7xb\" (UID: \"ebd0edb1-118f-426b-96ef-72db8d6c2b90\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr7xb" Feb 17 16:29:21 crc kubenswrapper[4874]: I0217 16:29:21.011438 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x7q7\" (UniqueName: \"kubernetes.io/projected/ebd0edb1-118f-426b-96ef-72db8d6c2b90-kube-api-access-7x7q7\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lr7xb\" (UID: \"ebd0edb1-118f-426b-96ef-72db8d6c2b90\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr7xb" Feb 17 16:29:21 crc kubenswrapper[4874]: I0217 16:29:21.087282 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr7xb" Feb 17 16:29:21 crc kubenswrapper[4874]: I0217 16:29:21.965277 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr7xb"] Feb 17 16:29:22 crc kubenswrapper[4874]: I0217 16:29:22.720788 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr7xb" event={"ID":"ebd0edb1-118f-426b-96ef-72db8d6c2b90","Type":"ContainerStarted","Data":"26d5966b2f1684c912ed8ff1554e93378a32c35d52b22e1e84270a1cc363078d"} Feb 17 16:29:25 crc kubenswrapper[4874]: E0217 16:29:25.582801 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:29:25 crc kubenswrapper[4874]: E0217 16:29:25.583356 4874 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:29:25 crc kubenswrapper[4874]: E0217 16:29:25.583510 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n699h646hf7h59dhdch5d8h679h9dhdch5c7hd5h5bch655h5bfh674h596h5d6h64dh65bh694h67fh66dh5bdhd7h568h697h58bh5b4h59fh694h584h656q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubP
ath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6zkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(cc29c300-b515-47d8-9326-1839ed7772b4): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 17 16:29:25 crc kubenswrapper[4874]: E0217 16:29:25.584705 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:29:29 crc kubenswrapper[4874]: E0217 16:29:29.460623 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:29:33 crc kubenswrapper[4874]: I0217 16:29:33.117260 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Feb 17 16:29:33 crc kubenswrapper[4874]: I0217 16:29:33.178676 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 17 16:29:33 crc kubenswrapper[4874]: I0217 16:29:33.889786 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr7xb" event={"ID":"ebd0edb1-118f-426b-96ef-72db8d6c2b90","Type":"ContainerStarted","Data":"a762e730965b6aefbbe9bf5f1a6c66f416cfa5d884bfceb086ee312a6b12d43b"} Feb 17 16:29:33 crc kubenswrapper[4874]: I0217 16:29:33.903943 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:29:33 crc kubenswrapper[4874]: I0217 16:29:33.913725 4874 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr7xb" podStartSLOduration=3.164723263 podStartE2EDuration="13.913702952s" podCreationTimestamp="2026-02-17 16:29:20 +0000 UTC" firstStartedPulling="2026-02-17 16:29:21.969491731 +0000 UTC m=+1572.263880292" lastFinishedPulling="2026-02-17 16:29:32.7184714 +0000 UTC m=+1583.012859981" observedRunningTime="2026-02-17 16:29:33.907175971 +0000 UTC m=+1584.201564552" watchObservedRunningTime="2026-02-17 16:29:33.913702952 +0000 UTC m=+1584.208091523" Feb 17 16:29:37 crc kubenswrapper[4874]: I0217 16:29:37.500844 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" podUID="37707c24-e133-484d-955f-57a20ec147b1" containerName="rabbitmq" containerID="cri-o://5b477107987341f58781f2906066cdd417998032a437d5fd1b340bbada577f70" gracePeriod=604796 Feb 17 16:29:40 crc kubenswrapper[4874]: E0217 16:29:40.469537 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:29:42 crc kubenswrapper[4874]: E0217 16:29:42.462045 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:29:43 crc kubenswrapper[4874]: E0217 16:29:43.841762 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37707c24_e133_484d_955f_57a20ec147b1.slice/crio-conmon-5b477107987341f58781f2906066cdd417998032a437d5fd1b340bbada577f70.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:29:44 crc kubenswrapper[4874]: I0217 16:29:44.014485 4874 generic.go:334] "Generic (PLEG): container finished" podID="37707c24-e133-484d-955f-57a20ec147b1" containerID="5b477107987341f58781f2906066cdd417998032a437d5fd1b340bbada577f70" exitCode=0 Feb 17 16:29:44 crc kubenswrapper[4874]: I0217 16:29:44.014532 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"37707c24-e133-484d-955f-57a20ec147b1","Type":"ContainerDied","Data":"5b477107987341f58781f2906066cdd417998032a437d5fd1b340bbada577f70"} Feb 17 16:29:44 crc kubenswrapper[4874]: I0217 16:29:44.221587 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 17 16:29:44 crc kubenswrapper[4874]: I0217 16:29:44.379131 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/37707c24-e133-484d-955f-57a20ec147b1-rabbitmq-confd\") pod \"37707c24-e133-484d-955f-57a20ec147b1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " Feb 17 16:29:44 crc kubenswrapper[4874]: I0217 16:29:44.379905 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/37707c24-e133-484d-955f-57a20ec147b1-plugins-conf\") pod \"37707c24-e133-484d-955f-57a20ec147b1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " Feb 17 16:29:44 crc kubenswrapper[4874]: I0217 16:29:44.380006 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/37707c24-e133-484d-955f-57a20ec147b1-server-conf\") pod \"37707c24-e133-484d-955f-57a20ec147b1\" (UID: 
\"37707c24-e133-484d-955f-57a20ec147b1\") " Feb 17 16:29:44 crc kubenswrapper[4874]: I0217 16:29:44.380193 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/37707c24-e133-484d-955f-57a20ec147b1-erlang-cookie-secret\") pod \"37707c24-e133-484d-955f-57a20ec147b1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " Feb 17 16:29:44 crc kubenswrapper[4874]: I0217 16:29:44.380275 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/37707c24-e133-484d-955f-57a20ec147b1-rabbitmq-plugins\") pod \"37707c24-e133-484d-955f-57a20ec147b1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " Feb 17 16:29:44 crc kubenswrapper[4874]: I0217 16:29:44.380354 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/37707c24-e133-484d-955f-57a20ec147b1-pod-info\") pod \"37707c24-e133-484d-955f-57a20ec147b1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " Feb 17 16:29:44 crc kubenswrapper[4874]: I0217 16:29:44.380457 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/37707c24-e133-484d-955f-57a20ec147b1-config-data\") pod \"37707c24-e133-484d-955f-57a20ec147b1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " Feb 17 16:29:44 crc kubenswrapper[4874]: I0217 16:29:44.381863 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-baddbb75-df00-4064-b46c-518cd100e31d\") pod \"37707c24-e133-484d-955f-57a20ec147b1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " Feb 17 16:29:44 crc kubenswrapper[4874]: I0217 16:29:44.381992 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" 
(UniqueName: \"kubernetes.io/projected/37707c24-e133-484d-955f-57a20ec147b1-rabbitmq-tls\") pod \"37707c24-e133-484d-955f-57a20ec147b1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " Feb 17 16:29:44 crc kubenswrapper[4874]: I0217 16:29:44.382040 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nvqt\" (UniqueName: \"kubernetes.io/projected/37707c24-e133-484d-955f-57a20ec147b1-kube-api-access-8nvqt\") pod \"37707c24-e133-484d-955f-57a20ec147b1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " Feb 17 16:29:44 crc kubenswrapper[4874]: I0217 16:29:44.382123 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/37707c24-e133-484d-955f-57a20ec147b1-rabbitmq-erlang-cookie\") pod \"37707c24-e133-484d-955f-57a20ec147b1\" (UID: \"37707c24-e133-484d-955f-57a20ec147b1\") " Feb 17 16:29:44 crc kubenswrapper[4874]: I0217 16:29:44.382292 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37707c24-e133-484d-955f-57a20ec147b1-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "37707c24-e133-484d-955f-57a20ec147b1" (UID: "37707c24-e133-484d-955f-57a20ec147b1"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:29:44 crc kubenswrapper[4874]: I0217 16:29:44.383452 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37707c24-e133-484d-955f-57a20ec147b1-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "37707c24-e133-484d-955f-57a20ec147b1" (UID: "37707c24-e133-484d-955f-57a20ec147b1"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:29:44 crc kubenswrapper[4874]: I0217 16:29:44.383749 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37707c24-e133-484d-955f-57a20ec147b1-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "37707c24-e133-484d-955f-57a20ec147b1" (UID: "37707c24-e133-484d-955f-57a20ec147b1"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:29:44 crc kubenswrapper[4874]: I0217 16:29:44.386294 4874 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/37707c24-e133-484d-955f-57a20ec147b1-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:44 crc kubenswrapper[4874]: I0217 16:29:44.396495 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37707c24-e133-484d-955f-57a20ec147b1-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "37707c24-e133-484d-955f-57a20ec147b1" (UID: "37707c24-e133-484d-955f-57a20ec147b1"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:29:44 crc kubenswrapper[4874]: I0217 16:29:44.396580 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/37707c24-e133-484d-955f-57a20ec147b1-pod-info" (OuterVolumeSpecName: "pod-info") pod "37707c24-e133-484d-955f-57a20ec147b1" (UID: "37707c24-e133-484d-955f-57a20ec147b1"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 17 16:29:44 crc kubenswrapper[4874]: I0217 16:29:44.400777 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37707c24-e133-484d-955f-57a20ec147b1-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "37707c24-e133-484d-955f-57a20ec147b1" (UID: "37707c24-e133-484d-955f-57a20ec147b1"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:29:44 crc kubenswrapper[4874]: I0217 16:29:44.420694 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37707c24-e133-484d-955f-57a20ec147b1-kube-api-access-8nvqt" (OuterVolumeSpecName: "kube-api-access-8nvqt") pod "37707c24-e133-484d-955f-57a20ec147b1" (UID: "37707c24-e133-484d-955f-57a20ec147b1"). InnerVolumeSpecName "kube-api-access-8nvqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:29:44 crc kubenswrapper[4874]: I0217 16:29:44.430450 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37707c24-e133-484d-955f-57a20ec147b1-config-data" (OuterVolumeSpecName: "config-data") pod "37707c24-e133-484d-955f-57a20ec147b1" (UID: "37707c24-e133-484d-955f-57a20ec147b1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:29:44 crc kubenswrapper[4874]: I0217 16:29:44.449288 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-baddbb75-df00-4064-b46c-518cd100e31d" (OuterVolumeSpecName: "persistence") pod "37707c24-e133-484d-955f-57a20ec147b1" (UID: "37707c24-e133-484d-955f-57a20ec147b1"). InnerVolumeSpecName "pvc-baddbb75-df00-4064-b46c-518cd100e31d". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:29:44 crc kubenswrapper[4874]: I0217 16:29:44.489022 4874 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/37707c24-e133-484d-955f-57a20ec147b1-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:44 crc kubenswrapper[4874]: I0217 16:29:44.489091 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nvqt\" (UniqueName: \"kubernetes.io/projected/37707c24-e133-484d-955f-57a20ec147b1-kube-api-access-8nvqt\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:44 crc kubenswrapper[4874]: I0217 16:29:44.489109 4874 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/37707c24-e133-484d-955f-57a20ec147b1-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:44 crc kubenswrapper[4874]: I0217 16:29:44.489122 4874 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/37707c24-e133-484d-955f-57a20ec147b1-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:44 crc kubenswrapper[4874]: I0217 16:29:44.489132 4874 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/37707c24-e133-484d-955f-57a20ec147b1-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:44 crc kubenswrapper[4874]: I0217 16:29:44.489142 4874 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/37707c24-e133-484d-955f-57a20ec147b1-pod-info\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:44 crc kubenswrapper[4874]: I0217 16:29:44.489150 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/37707c24-e133-484d-955f-57a20ec147b1-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:44 crc kubenswrapper[4874]: I0217 
16:29:44.489186 4874 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-baddbb75-df00-4064-b46c-518cd100e31d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-baddbb75-df00-4064-b46c-518cd100e31d\") on node \"crc\" " Feb 17 16:29:44 crc kubenswrapper[4874]: I0217 16:29:44.491501 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37707c24-e133-484d-955f-57a20ec147b1-server-conf" (OuterVolumeSpecName: "server-conf") pod "37707c24-e133-484d-955f-57a20ec147b1" (UID: "37707c24-e133-484d-955f-57a20ec147b1"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:29:44 crc kubenswrapper[4874]: I0217 16:29:44.521758 4874 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 17 16:29:44 crc kubenswrapper[4874]: I0217 16:29:44.522000 4874 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-baddbb75-df00-4064-b46c-518cd100e31d" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-baddbb75-df00-4064-b46c-518cd100e31d") on node "crc" Feb 17 16:29:44 crc kubenswrapper[4874]: I0217 16:29:44.550464 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37707c24-e133-484d-955f-57a20ec147b1-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "37707c24-e133-484d-955f-57a20ec147b1" (UID: "37707c24-e133-484d-955f-57a20ec147b1"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:29:44 crc kubenswrapper[4874]: I0217 16:29:44.591193 4874 reconciler_common.go:293] "Volume detached for volume \"pvc-baddbb75-df00-4064-b46c-518cd100e31d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-baddbb75-df00-4064-b46c-518cd100e31d\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:44 crc kubenswrapper[4874]: I0217 16:29:44.591233 4874 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/37707c24-e133-484d-955f-57a20ec147b1-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:44 crc kubenswrapper[4874]: I0217 16:29:44.591247 4874 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/37707c24-e133-484d-955f-57a20ec147b1-server-conf\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.028577 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"37707c24-e133-484d-955f-57a20ec147b1","Type":"ContainerDied","Data":"8b6eac39626336380a637ffb52c23f53a17735099cbb674fe479447d2f1c66c0"} Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.028978 4874 scope.go:117] "RemoveContainer" containerID="5b477107987341f58781f2906066cdd417998032a437d5fd1b340bbada577f70" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.028723 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.074691 4874 scope.go:117] "RemoveContainer" containerID="1eb6dabf17b342d2327164ae121cc80c313bb12e86bc551602ad09c3ceea3b65" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.080197 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.098101 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.119612 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Feb 17 16:29:45 crc kubenswrapper[4874]: E0217 16:29:45.120257 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37707c24-e133-484d-955f-57a20ec147b1" containerName="rabbitmq" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.120281 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="37707c24-e133-484d-955f-57a20ec147b1" containerName="rabbitmq" Feb 17 16:29:45 crc kubenswrapper[4874]: E0217 16:29:45.120325 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37707c24-e133-484d-955f-57a20ec147b1" containerName="setup-container" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.120334 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="37707c24-e133-484d-955f-57a20ec147b1" containerName="setup-container" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.120635 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="37707c24-e133-484d-955f-57a20ec147b1" containerName="rabbitmq" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.122243 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.137615 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.206703 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/aafddb04-57ad-45b6-8a34-30898a8bafff-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"aafddb04-57ad-45b6-8a34-30898a8bafff\") " pod="openstack/rabbitmq-server-1" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.206743 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/aafddb04-57ad-45b6-8a34-30898a8bafff-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"aafddb04-57ad-45b6-8a34-30898a8bafff\") " pod="openstack/rabbitmq-server-1" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.206772 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/aafddb04-57ad-45b6-8a34-30898a8bafff-pod-info\") pod \"rabbitmq-server-1\" (UID: \"aafddb04-57ad-45b6-8a34-30898a8bafff\") " pod="openstack/rabbitmq-server-1" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.206792 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aafddb04-57ad-45b6-8a34-30898a8bafff-config-data\") pod \"rabbitmq-server-1\" (UID: \"aafddb04-57ad-45b6-8a34-30898a8bafff\") " pod="openstack/rabbitmq-server-1" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.206806 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/aafddb04-57ad-45b6-8a34-30898a8bafff-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"aafddb04-57ad-45b6-8a34-30898a8bafff\") " pod="openstack/rabbitmq-server-1" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.206827 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q42kz\" (UniqueName: \"kubernetes.io/projected/aafddb04-57ad-45b6-8a34-30898a8bafff-kube-api-access-q42kz\") pod \"rabbitmq-server-1\" (UID: \"aafddb04-57ad-45b6-8a34-30898a8bafff\") " pod="openstack/rabbitmq-server-1" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.206845 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/aafddb04-57ad-45b6-8a34-30898a8bafff-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"aafddb04-57ad-45b6-8a34-30898a8bafff\") " pod="openstack/rabbitmq-server-1" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.206887 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/aafddb04-57ad-45b6-8a34-30898a8bafff-server-conf\") pod \"rabbitmq-server-1\" (UID: \"aafddb04-57ad-45b6-8a34-30898a8bafff\") " pod="openstack/rabbitmq-server-1" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.206944 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/aafddb04-57ad-45b6-8a34-30898a8bafff-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"aafddb04-57ad-45b6-8a34-30898a8bafff\") " pod="openstack/rabbitmq-server-1" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.206971 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-baddbb75-df00-4064-b46c-518cd100e31d\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-baddbb75-df00-4064-b46c-518cd100e31d\") pod \"rabbitmq-server-1\" (UID: \"aafddb04-57ad-45b6-8a34-30898a8bafff\") " pod="openstack/rabbitmq-server-1" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.207009 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/aafddb04-57ad-45b6-8a34-30898a8bafff-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"aafddb04-57ad-45b6-8a34-30898a8bafff\") " pod="openstack/rabbitmq-server-1" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.309194 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/aafddb04-57ad-45b6-8a34-30898a8bafff-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"aafddb04-57ad-45b6-8a34-30898a8bafff\") " pod="openstack/rabbitmq-server-1" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.309249 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-baddbb75-df00-4064-b46c-518cd100e31d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-baddbb75-df00-4064-b46c-518cd100e31d\") pod \"rabbitmq-server-1\" (UID: \"aafddb04-57ad-45b6-8a34-30898a8bafff\") " pod="openstack/rabbitmq-server-1" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.309289 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/aafddb04-57ad-45b6-8a34-30898a8bafff-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"aafddb04-57ad-45b6-8a34-30898a8bafff\") " pod="openstack/rabbitmq-server-1" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.309393 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/aafddb04-57ad-45b6-8a34-30898a8bafff-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"aafddb04-57ad-45b6-8a34-30898a8bafff\") " pod="openstack/rabbitmq-server-1" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.309410 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/aafddb04-57ad-45b6-8a34-30898a8bafff-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"aafddb04-57ad-45b6-8a34-30898a8bafff\") " pod="openstack/rabbitmq-server-1" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.309437 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/aafddb04-57ad-45b6-8a34-30898a8bafff-pod-info\") pod \"rabbitmq-server-1\" (UID: \"aafddb04-57ad-45b6-8a34-30898a8bafff\") " pod="openstack/rabbitmq-server-1" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.309458 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/aafddb04-57ad-45b6-8a34-30898a8bafff-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"aafddb04-57ad-45b6-8a34-30898a8bafff\") " pod="openstack/rabbitmq-server-1" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.309473 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aafddb04-57ad-45b6-8a34-30898a8bafff-config-data\") pod \"rabbitmq-server-1\" (UID: \"aafddb04-57ad-45b6-8a34-30898a8bafff\") " pod="openstack/rabbitmq-server-1" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.309492 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q42kz\" (UniqueName: \"kubernetes.io/projected/aafddb04-57ad-45b6-8a34-30898a8bafff-kube-api-access-q42kz\") pod \"rabbitmq-server-1\" (UID: 
\"aafddb04-57ad-45b6-8a34-30898a8bafff\") " pod="openstack/rabbitmq-server-1" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.309511 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/aafddb04-57ad-45b6-8a34-30898a8bafff-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"aafddb04-57ad-45b6-8a34-30898a8bafff\") " pod="openstack/rabbitmq-server-1" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.309558 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/aafddb04-57ad-45b6-8a34-30898a8bafff-server-conf\") pod \"rabbitmq-server-1\" (UID: \"aafddb04-57ad-45b6-8a34-30898a8bafff\") " pod="openstack/rabbitmq-server-1" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.310237 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/aafddb04-57ad-45b6-8a34-30898a8bafff-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"aafddb04-57ad-45b6-8a34-30898a8bafff\") " pod="openstack/rabbitmq-server-1" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.310797 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/aafddb04-57ad-45b6-8a34-30898a8bafff-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"aafddb04-57ad-45b6-8a34-30898a8bafff\") " pod="openstack/rabbitmq-server-1" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.311085 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/aafddb04-57ad-45b6-8a34-30898a8bafff-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"aafddb04-57ad-45b6-8a34-30898a8bafff\") " pod="openstack/rabbitmq-server-1" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.311284 4874 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aafddb04-57ad-45b6-8a34-30898a8bafff-config-data\") pod \"rabbitmq-server-1\" (UID: \"aafddb04-57ad-45b6-8a34-30898a8bafff\") " pod="openstack/rabbitmq-server-1" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.311548 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/aafddb04-57ad-45b6-8a34-30898a8bafff-server-conf\") pod \"rabbitmq-server-1\" (UID: \"aafddb04-57ad-45b6-8a34-30898a8bafff\") " pod="openstack/rabbitmq-server-1" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.313230 4874 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.313262 4874 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-baddbb75-df00-4064-b46c-518cd100e31d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-baddbb75-df00-4064-b46c-518cd100e31d\") pod \"rabbitmq-server-1\" (UID: \"aafddb04-57ad-45b6-8a34-30898a8bafff\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/987695d7a0e83bf6f0861a06e26b4ab95287a2edd1b9a9790bcdf5ca773dbb27/globalmount\"" pod="openstack/rabbitmq-server-1" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.313584 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/aafddb04-57ad-45b6-8a34-30898a8bafff-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"aafddb04-57ad-45b6-8a34-30898a8bafff\") " pod="openstack/rabbitmq-server-1" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.313936 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/aafddb04-57ad-45b6-8a34-30898a8bafff-pod-info\") pod 
\"rabbitmq-server-1\" (UID: \"aafddb04-57ad-45b6-8a34-30898a8bafff\") " pod="openstack/rabbitmq-server-1" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.315194 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/aafddb04-57ad-45b6-8a34-30898a8bafff-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"aafddb04-57ad-45b6-8a34-30898a8bafff\") " pod="openstack/rabbitmq-server-1" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.316688 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/aafddb04-57ad-45b6-8a34-30898a8bafff-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"aafddb04-57ad-45b6-8a34-30898a8bafff\") " pod="openstack/rabbitmq-server-1" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.330194 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q42kz\" (UniqueName: \"kubernetes.io/projected/aafddb04-57ad-45b6-8a34-30898a8bafff-kube-api-access-q42kz\") pod \"rabbitmq-server-1\" (UID: \"aafddb04-57ad-45b6-8a34-30898a8bafff\") " pod="openstack/rabbitmq-server-1" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.382311 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-baddbb75-df00-4064-b46c-518cd100e31d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-baddbb75-df00-4064-b46c-518cd100e31d\") pod \"rabbitmq-server-1\" (UID: \"aafddb04-57ad-45b6-8a34-30898a8bafff\") " pod="openstack/rabbitmq-server-1" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.468337 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.592419 4874 scope.go:117] "RemoveContainer" containerID="72aac92bfbd26bb9c5db0d9d70ad1f79ac31b3e8ef267357ee900a7e75f478c8" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.640374 4874 scope.go:117] "RemoveContainer" containerID="774c4325997a9849188504081d763f3e4caee3b24245b2ffa8f4bd92b197c5ff" Feb 17 16:29:45 crc kubenswrapper[4874]: I0217 16:29:45.987509 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 17 16:29:45 crc kubenswrapper[4874]: W0217 16:29:45.990173 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaafddb04_57ad_45b6_8a34_30898a8bafff.slice/crio-9142271aa2f0cfcf4cb96cb6684a91328346c2aa2b05de93b50d77afca9fe45f WatchSource:0}: Error finding container 9142271aa2f0cfcf4cb96cb6684a91328346c2aa2b05de93b50d77afca9fe45f: Status 404 returned error can't find the container with id 9142271aa2f0cfcf4cb96cb6684a91328346c2aa2b05de93b50d77afca9fe45f Feb 17 16:29:46 crc kubenswrapper[4874]: I0217 16:29:46.040525 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"aafddb04-57ad-45b6-8a34-30898a8bafff","Type":"ContainerStarted","Data":"9142271aa2f0cfcf4cb96cb6684a91328346c2aa2b05de93b50d77afca9fe45f"} Feb 17 16:29:46 crc kubenswrapper[4874]: I0217 16:29:46.475422 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37707c24-e133-484d-955f-57a20ec147b1" path="/var/lib/kubelet/pods/37707c24-e133-484d-955f-57a20ec147b1/volumes" Feb 17 16:29:47 crc kubenswrapper[4874]: I0217 16:29:47.053439 4874 generic.go:334] "Generic (PLEG): container finished" podID="ebd0edb1-118f-426b-96ef-72db8d6c2b90" containerID="a762e730965b6aefbbe9bf5f1a6c66f416cfa5d884bfceb086ee312a6b12d43b" exitCode=0 Feb 17 16:29:47 crc kubenswrapper[4874]: I0217 16:29:47.053484 4874 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr7xb" event={"ID":"ebd0edb1-118f-426b-96ef-72db8d6c2b90","Type":"ContainerDied","Data":"a762e730965b6aefbbe9bf5f1a6c66f416cfa5d884bfceb086ee312a6b12d43b"} Feb 17 16:29:48 crc kubenswrapper[4874]: I0217 16:29:48.660902 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr7xb" Feb 17 16:29:48 crc kubenswrapper[4874]: I0217 16:29:48.698276 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebd0edb1-118f-426b-96ef-72db8d6c2b90-repo-setup-combined-ca-bundle\") pod \"ebd0edb1-118f-426b-96ef-72db8d6c2b90\" (UID: \"ebd0edb1-118f-426b-96ef-72db8d6c2b90\") " Feb 17 16:29:48 crc kubenswrapper[4874]: I0217 16:29:48.698424 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ebd0edb1-118f-426b-96ef-72db8d6c2b90-ssh-key-openstack-edpm-ipam\") pod \"ebd0edb1-118f-426b-96ef-72db8d6c2b90\" (UID: \"ebd0edb1-118f-426b-96ef-72db8d6c2b90\") " Feb 17 16:29:48 crc kubenswrapper[4874]: I0217 16:29:48.698521 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebd0edb1-118f-426b-96ef-72db8d6c2b90-inventory\") pod \"ebd0edb1-118f-426b-96ef-72db8d6c2b90\" (UID: \"ebd0edb1-118f-426b-96ef-72db8d6c2b90\") " Feb 17 16:29:48 crc kubenswrapper[4874]: I0217 16:29:48.698624 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7x7q7\" (UniqueName: \"kubernetes.io/projected/ebd0edb1-118f-426b-96ef-72db8d6c2b90-kube-api-access-7x7q7\") pod \"ebd0edb1-118f-426b-96ef-72db8d6c2b90\" (UID: \"ebd0edb1-118f-426b-96ef-72db8d6c2b90\") " Feb 17 16:29:48 crc kubenswrapper[4874]: I0217 
16:29:48.711401 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebd0edb1-118f-426b-96ef-72db8d6c2b90-kube-api-access-7x7q7" (OuterVolumeSpecName: "kube-api-access-7x7q7") pod "ebd0edb1-118f-426b-96ef-72db8d6c2b90" (UID: "ebd0edb1-118f-426b-96ef-72db8d6c2b90"). InnerVolumeSpecName "kube-api-access-7x7q7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:29:48 crc kubenswrapper[4874]: I0217 16:29:48.732929 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7x7q7\" (UniqueName: \"kubernetes.io/projected/ebd0edb1-118f-426b-96ef-72db8d6c2b90-kube-api-access-7x7q7\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:48 crc kubenswrapper[4874]: I0217 16:29:48.745034 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebd0edb1-118f-426b-96ef-72db8d6c2b90-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "ebd0edb1-118f-426b-96ef-72db8d6c2b90" (UID: "ebd0edb1-118f-426b-96ef-72db8d6c2b90"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:29:48 crc kubenswrapper[4874]: I0217 16:29:48.749731 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebd0edb1-118f-426b-96ef-72db8d6c2b90-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ebd0edb1-118f-426b-96ef-72db8d6c2b90" (UID: "ebd0edb1-118f-426b-96ef-72db8d6c2b90"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:29:48 crc kubenswrapper[4874]: I0217 16:29:48.759971 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebd0edb1-118f-426b-96ef-72db8d6c2b90-inventory" (OuterVolumeSpecName: "inventory") pod "ebd0edb1-118f-426b-96ef-72db8d6c2b90" (UID: "ebd0edb1-118f-426b-96ef-72db8d6c2b90"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:29:48 crc kubenswrapper[4874]: I0217 16:29:48.835667 4874 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebd0edb1-118f-426b-96ef-72db8d6c2b90-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:48 crc kubenswrapper[4874]: I0217 16:29:48.835740 4874 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ebd0edb1-118f-426b-96ef-72db8d6c2b90-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:48 crc kubenswrapper[4874]: I0217 16:29:48.835774 4874 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebd0edb1-118f-426b-96ef-72db8d6c2b90-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:48 crc kubenswrapper[4874]: I0217 16:29:48.891240 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="37707c24-e133-484d-955f-57a20ec147b1" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: i/o timeout" Feb 17 16:29:49 crc kubenswrapper[4874]: I0217 16:29:49.078541 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr7xb" Feb 17 16:29:49 crc kubenswrapper[4874]: I0217 16:29:49.078543 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lr7xb" event={"ID":"ebd0edb1-118f-426b-96ef-72db8d6c2b90","Type":"ContainerDied","Data":"26d5966b2f1684c912ed8ff1554e93378a32c35d52b22e1e84270a1cc363078d"} Feb 17 16:29:49 crc kubenswrapper[4874]: I0217 16:29:49.078677 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26d5966b2f1684c912ed8ff1554e93378a32c35d52b22e1e84270a1cc363078d" Feb 17 16:29:49 crc kubenswrapper[4874]: I0217 16:29:49.080939 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"aafddb04-57ad-45b6-8a34-30898a8bafff","Type":"ContainerStarted","Data":"b2cb79780409d806b4457f08325f6e3c4718c37c9f2e4fb1ef1a3ab769e599c0"} Feb 17 16:29:49 crc kubenswrapper[4874]: I0217 16:29:49.205756 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-8hf76"] Feb 17 16:29:49 crc kubenswrapper[4874]: E0217 16:29:49.206487 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebd0edb1-118f-426b-96ef-72db8d6c2b90" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 17 16:29:49 crc kubenswrapper[4874]: I0217 16:29:49.206507 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebd0edb1-118f-426b-96ef-72db8d6c2b90" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 17 16:29:49 crc kubenswrapper[4874]: I0217 16:29:49.206750 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebd0edb1-118f-426b-96ef-72db8d6c2b90" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 17 16:29:49 crc kubenswrapper[4874]: I0217 16:29:49.207621 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8hf76" Feb 17 16:29:49 crc kubenswrapper[4874]: I0217 16:29:49.209361 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 16:29:49 crc kubenswrapper[4874]: I0217 16:29:49.209655 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 16:29:49 crc kubenswrapper[4874]: I0217 16:29:49.209780 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dsqj4" Feb 17 16:29:49 crc kubenswrapper[4874]: I0217 16:29:49.210943 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 16:29:49 crc kubenswrapper[4874]: I0217 16:29:49.218991 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-8hf76"] Feb 17 16:29:49 crc kubenswrapper[4874]: I0217 16:29:49.348276 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eee3af83-dd4f-4fa9-b1d9-f3e197174816-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8hf76\" (UID: \"eee3af83-dd4f-4fa9-b1d9-f3e197174816\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8hf76" Feb 17 16:29:49 crc kubenswrapper[4874]: I0217 16:29:49.348391 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq9hd\" (UniqueName: \"kubernetes.io/projected/eee3af83-dd4f-4fa9-b1d9-f3e197174816-kube-api-access-nq9hd\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8hf76\" (UID: \"eee3af83-dd4f-4fa9-b1d9-f3e197174816\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8hf76" Feb 17 16:29:49 crc kubenswrapper[4874]: I0217 16:29:49.348420 4874 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eee3af83-dd4f-4fa9-b1d9-f3e197174816-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8hf76\" (UID: \"eee3af83-dd4f-4fa9-b1d9-f3e197174816\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8hf76" Feb 17 16:29:49 crc kubenswrapper[4874]: I0217 16:29:49.451293 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eee3af83-dd4f-4fa9-b1d9-f3e197174816-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8hf76\" (UID: \"eee3af83-dd4f-4fa9-b1d9-f3e197174816\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8hf76" Feb 17 16:29:49 crc kubenswrapper[4874]: I0217 16:29:49.451429 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq9hd\" (UniqueName: \"kubernetes.io/projected/eee3af83-dd4f-4fa9-b1d9-f3e197174816-kube-api-access-nq9hd\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8hf76\" (UID: \"eee3af83-dd4f-4fa9-b1d9-f3e197174816\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8hf76" Feb 17 16:29:49 crc kubenswrapper[4874]: I0217 16:29:49.451469 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eee3af83-dd4f-4fa9-b1d9-f3e197174816-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8hf76\" (UID: \"eee3af83-dd4f-4fa9-b1d9-f3e197174816\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8hf76" Feb 17 16:29:49 crc kubenswrapper[4874]: I0217 16:29:49.456676 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eee3af83-dd4f-4fa9-b1d9-f3e197174816-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-8hf76\" (UID: \"eee3af83-dd4f-4fa9-b1d9-f3e197174816\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8hf76" Feb 17 16:29:49 crc kubenswrapper[4874]: I0217 16:29:49.459540 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eee3af83-dd4f-4fa9-b1d9-f3e197174816-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8hf76\" (UID: \"eee3af83-dd4f-4fa9-b1d9-f3e197174816\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8hf76" Feb 17 16:29:49 crc kubenswrapper[4874]: I0217 16:29:49.474987 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq9hd\" (UniqueName: \"kubernetes.io/projected/eee3af83-dd4f-4fa9-b1d9-f3e197174816-kube-api-access-nq9hd\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8hf76\" (UID: \"eee3af83-dd4f-4fa9-b1d9-f3e197174816\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8hf76" Feb 17 16:29:49 crc kubenswrapper[4874]: I0217 16:29:49.529465 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8hf76" Feb 17 16:29:50 crc kubenswrapper[4874]: I0217 16:29:50.130466 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-8hf76"] Feb 17 16:29:51 crc kubenswrapper[4874]: I0217 16:29:51.119371 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8hf76" event={"ID":"eee3af83-dd4f-4fa9-b1d9-f3e197174816","Type":"ContainerStarted","Data":"608f88f464b24bfb655bdc094d9ca2592f5ab8504e98ca35415a66d4f383848f"} Feb 17 16:29:51 crc kubenswrapper[4874]: I0217 16:29:51.119690 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8hf76" event={"ID":"eee3af83-dd4f-4fa9-b1d9-f3e197174816","Type":"ContainerStarted","Data":"167699a23e32c1dcd5efdc2a5eaa23f4d8274806acd9948d8326a04e7edfeedf"} Feb 17 16:29:51 crc kubenswrapper[4874]: I0217 16:29:51.144207 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8hf76" podStartSLOduration=1.766301347 podStartE2EDuration="2.144183284s" podCreationTimestamp="2026-02-17 16:29:49 +0000 UTC" firstStartedPulling="2026-02-17 16:29:50.143524935 +0000 UTC m=+1600.437913496" lastFinishedPulling="2026-02-17 16:29:50.521406872 +0000 UTC m=+1600.815795433" observedRunningTime="2026-02-17 16:29:51.140676227 +0000 UTC m=+1601.435064818" watchObservedRunningTime="2026-02-17 16:29:51.144183284 +0000 UTC m=+1601.438571875" Feb 17 16:29:54 crc kubenswrapper[4874]: I0217 16:29:54.150214 4874 generic.go:334] "Generic (PLEG): container finished" podID="eee3af83-dd4f-4fa9-b1d9-f3e197174816" containerID="608f88f464b24bfb655bdc094d9ca2592f5ab8504e98ca35415a66d4f383848f" exitCode=0 Feb 17 16:29:54 crc kubenswrapper[4874]: I0217 16:29:54.150274 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8hf76" event={"ID":"eee3af83-dd4f-4fa9-b1d9-f3e197174816","Type":"ContainerDied","Data":"608f88f464b24bfb655bdc094d9ca2592f5ab8504e98ca35415a66d4f383848f"} Feb 17 16:29:55 crc kubenswrapper[4874]: E0217 16:29:55.460197 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:29:55 crc kubenswrapper[4874]: E0217 16:29:55.585372 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:29:55 crc kubenswrapper[4874]: E0217 16:29:55.585428 4874 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:29:55 crc kubenswrapper[4874]: E0217 16:29:55.585540 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtgnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-ddhb8_openstack(122736d5-78f5-42dc-b6ab-343724bac19d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:29:55 crc kubenswrapper[4874]: E0217 16:29:55.586848 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:29:55 crc kubenswrapper[4874]: I0217 16:29:55.724709 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8hf76" Feb 17 16:29:55 crc kubenswrapper[4874]: I0217 16:29:55.812323 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eee3af83-dd4f-4fa9-b1d9-f3e197174816-inventory\") pod \"eee3af83-dd4f-4fa9-b1d9-f3e197174816\" (UID: \"eee3af83-dd4f-4fa9-b1d9-f3e197174816\") " Feb 17 16:29:55 crc kubenswrapper[4874]: I0217 16:29:55.812472 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eee3af83-dd4f-4fa9-b1d9-f3e197174816-ssh-key-openstack-edpm-ipam\") pod \"eee3af83-dd4f-4fa9-b1d9-f3e197174816\" (UID: \"eee3af83-dd4f-4fa9-b1d9-f3e197174816\") " Feb 17 16:29:55 crc kubenswrapper[4874]: I0217 16:29:55.812494 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq9hd\" (UniqueName: \"kubernetes.io/projected/eee3af83-dd4f-4fa9-b1d9-f3e197174816-kube-api-access-nq9hd\") pod \"eee3af83-dd4f-4fa9-b1d9-f3e197174816\" (UID: \"eee3af83-dd4f-4fa9-b1d9-f3e197174816\") " Feb 17 16:29:55 crc kubenswrapper[4874]: I0217 16:29:55.819272 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eee3af83-dd4f-4fa9-b1d9-f3e197174816-kube-api-access-nq9hd" (OuterVolumeSpecName: "kube-api-access-nq9hd") pod "eee3af83-dd4f-4fa9-b1d9-f3e197174816" (UID: "eee3af83-dd4f-4fa9-b1d9-f3e197174816"). InnerVolumeSpecName "kube-api-access-nq9hd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:29:55 crc kubenswrapper[4874]: I0217 16:29:55.844975 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eee3af83-dd4f-4fa9-b1d9-f3e197174816-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "eee3af83-dd4f-4fa9-b1d9-f3e197174816" (UID: "eee3af83-dd4f-4fa9-b1d9-f3e197174816"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:29:55 crc kubenswrapper[4874]: I0217 16:29:55.846727 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eee3af83-dd4f-4fa9-b1d9-f3e197174816-inventory" (OuterVolumeSpecName: "inventory") pod "eee3af83-dd4f-4fa9-b1d9-f3e197174816" (UID: "eee3af83-dd4f-4fa9-b1d9-f3e197174816"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:29:55 crc kubenswrapper[4874]: I0217 16:29:55.916792 4874 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eee3af83-dd4f-4fa9-b1d9-f3e197174816-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:55 crc kubenswrapper[4874]: I0217 16:29:55.917390 4874 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eee3af83-dd4f-4fa9-b1d9-f3e197174816-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:55 crc kubenswrapper[4874]: I0217 16:29:55.917637 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nq9hd\" (UniqueName: \"kubernetes.io/projected/eee3af83-dd4f-4fa9-b1d9-f3e197174816-kube-api-access-nq9hd\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:56 crc kubenswrapper[4874]: I0217 16:29:56.202328 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8hf76" 
event={"ID":"eee3af83-dd4f-4fa9-b1d9-f3e197174816","Type":"ContainerDied","Data":"167699a23e32c1dcd5efdc2a5eaa23f4d8274806acd9948d8326a04e7edfeedf"} Feb 17 16:29:56 crc kubenswrapper[4874]: I0217 16:29:56.202390 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="167699a23e32c1dcd5efdc2a5eaa23f4d8274806acd9948d8326a04e7edfeedf" Feb 17 16:29:56 crc kubenswrapper[4874]: I0217 16:29:56.202494 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8hf76" Feb 17 16:29:56 crc kubenswrapper[4874]: I0217 16:29:56.261271 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tgzkz"] Feb 17 16:29:56 crc kubenswrapper[4874]: E0217 16:29:56.261841 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eee3af83-dd4f-4fa9-b1d9-f3e197174816" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 17 16:29:56 crc kubenswrapper[4874]: I0217 16:29:56.261860 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="eee3af83-dd4f-4fa9-b1d9-f3e197174816" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 17 16:29:56 crc kubenswrapper[4874]: I0217 16:29:56.262125 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="eee3af83-dd4f-4fa9-b1d9-f3e197174816" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 17 16:29:56 crc kubenswrapper[4874]: I0217 16:29:56.262923 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tgzkz" Feb 17 16:29:56 crc kubenswrapper[4874]: I0217 16:29:56.265637 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 16:29:56 crc kubenswrapper[4874]: I0217 16:29:56.265825 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 16:29:56 crc kubenswrapper[4874]: I0217 16:29:56.266451 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 16:29:56 crc kubenswrapper[4874]: I0217 16:29:56.266580 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dsqj4" Feb 17 16:29:56 crc kubenswrapper[4874]: I0217 16:29:56.278557 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tgzkz"] Feb 17 16:29:56 crc kubenswrapper[4874]: I0217 16:29:56.330057 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e27c106f-e640-4b2b-aab8-785a2bcb1624-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tgzkz\" (UID: \"e27c106f-e640-4b2b-aab8-785a2bcb1624\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tgzkz" Feb 17 16:29:56 crc kubenswrapper[4874]: I0217 16:29:56.330215 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e27c106f-e640-4b2b-aab8-785a2bcb1624-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tgzkz\" (UID: \"e27c106f-e640-4b2b-aab8-785a2bcb1624\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tgzkz" Feb 17 16:29:56 crc kubenswrapper[4874]: I0217 16:29:56.330257 4874 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e27c106f-e640-4b2b-aab8-785a2bcb1624-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tgzkz\" (UID: \"e27c106f-e640-4b2b-aab8-785a2bcb1624\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tgzkz" Feb 17 16:29:56 crc kubenswrapper[4874]: I0217 16:29:56.330387 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng5zf\" (UniqueName: \"kubernetes.io/projected/e27c106f-e640-4b2b-aab8-785a2bcb1624-kube-api-access-ng5zf\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tgzkz\" (UID: \"e27c106f-e640-4b2b-aab8-785a2bcb1624\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tgzkz" Feb 17 16:29:56 crc kubenswrapper[4874]: I0217 16:29:56.432254 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e27c106f-e640-4b2b-aab8-785a2bcb1624-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tgzkz\" (UID: \"e27c106f-e640-4b2b-aab8-785a2bcb1624\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tgzkz" Feb 17 16:29:56 crc kubenswrapper[4874]: I0217 16:29:56.432745 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng5zf\" (UniqueName: \"kubernetes.io/projected/e27c106f-e640-4b2b-aab8-785a2bcb1624-kube-api-access-ng5zf\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tgzkz\" (UID: \"e27c106f-e640-4b2b-aab8-785a2bcb1624\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tgzkz" Feb 17 16:29:56 crc kubenswrapper[4874]: I0217 16:29:56.432807 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/e27c106f-e640-4b2b-aab8-785a2bcb1624-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tgzkz\" (UID: \"e27c106f-e640-4b2b-aab8-785a2bcb1624\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tgzkz" Feb 17 16:29:56 crc kubenswrapper[4874]: I0217 16:29:56.432954 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e27c106f-e640-4b2b-aab8-785a2bcb1624-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tgzkz\" (UID: \"e27c106f-e640-4b2b-aab8-785a2bcb1624\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tgzkz" Feb 17 16:29:56 crc kubenswrapper[4874]: I0217 16:29:56.437681 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e27c106f-e640-4b2b-aab8-785a2bcb1624-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tgzkz\" (UID: \"e27c106f-e640-4b2b-aab8-785a2bcb1624\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tgzkz" Feb 17 16:29:56 crc kubenswrapper[4874]: I0217 16:29:56.437824 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e27c106f-e640-4b2b-aab8-785a2bcb1624-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tgzkz\" (UID: \"e27c106f-e640-4b2b-aab8-785a2bcb1624\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tgzkz" Feb 17 16:29:56 crc kubenswrapper[4874]: I0217 16:29:56.438931 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e27c106f-e640-4b2b-aab8-785a2bcb1624-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tgzkz\" (UID: \"e27c106f-e640-4b2b-aab8-785a2bcb1624\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tgzkz" Feb 17 16:29:56 crc kubenswrapper[4874]: I0217 16:29:56.456776 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng5zf\" (UniqueName: \"kubernetes.io/projected/e27c106f-e640-4b2b-aab8-785a2bcb1624-kube-api-access-ng5zf\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tgzkz\" (UID: \"e27c106f-e640-4b2b-aab8-785a2bcb1624\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tgzkz" Feb 17 16:29:56 crc kubenswrapper[4874]: I0217 16:29:56.581039 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tgzkz" Feb 17 16:29:57 crc kubenswrapper[4874]: I0217 16:29:57.149100 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tgzkz"] Feb 17 16:29:57 crc kubenswrapper[4874]: I0217 16:29:57.214658 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tgzkz" event={"ID":"e27c106f-e640-4b2b-aab8-785a2bcb1624","Type":"ContainerStarted","Data":"0aa36d608f63798486aeb9916518f1bf89a92ad92d395940f65b53550c02b685"} Feb 17 16:29:57 crc kubenswrapper[4874]: I0217 16:29:57.725809 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:29:57 crc kubenswrapper[4874]: I0217 16:29:57.726178 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 
16:29:58 crc kubenswrapper[4874]: I0217 16:29:58.226717 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tgzkz" event={"ID":"e27c106f-e640-4b2b-aab8-785a2bcb1624","Type":"ContainerStarted","Data":"83593ddabac5b9290a10c09071dda694cf057f63163244e86871ff4eaf5d4cee"} Feb 17 16:29:58 crc kubenswrapper[4874]: I0217 16:29:58.244847 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tgzkz" podStartSLOduration=1.83800697 podStartE2EDuration="2.24482412s" podCreationTimestamp="2026-02-17 16:29:56 +0000 UTC" firstStartedPulling="2026-02-17 16:29:57.1403931 +0000 UTC m=+1607.434781671" lastFinishedPulling="2026-02-17 16:29:57.54721026 +0000 UTC m=+1607.841598821" observedRunningTime="2026-02-17 16:29:58.24318809 +0000 UTC m=+1608.537576661" watchObservedRunningTime="2026-02-17 16:29:58.24482412 +0000 UTC m=+1608.539212681" Feb 17 16:30:00 crc kubenswrapper[4874]: I0217 16:30:00.166590 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522430-pfscz"] Feb 17 16:30:00 crc kubenswrapper[4874]: I0217 16:30:00.169415 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-pfscz" Feb 17 16:30:00 crc kubenswrapper[4874]: I0217 16:30:00.171444 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 16:30:00 crc kubenswrapper[4874]: I0217 16:30:00.171960 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 16:30:00 crc kubenswrapper[4874]: I0217 16:30:00.189570 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522430-pfscz"] Feb 17 16:30:00 crc kubenswrapper[4874]: I0217 16:30:00.234686 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd-config-volume\") pod \"collect-profiles-29522430-pfscz\" (UID: \"7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-pfscz" Feb 17 16:30:00 crc kubenswrapper[4874]: I0217 16:30:00.234933 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd-secret-volume\") pod \"collect-profiles-29522430-pfscz\" (UID: \"7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-pfscz" Feb 17 16:30:00 crc kubenswrapper[4874]: I0217 16:30:00.235284 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zgj9\" (UniqueName: \"kubernetes.io/projected/7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd-kube-api-access-7zgj9\") pod \"collect-profiles-29522430-pfscz\" (UID: \"7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-pfscz" Feb 17 16:30:00 crc kubenswrapper[4874]: I0217 16:30:00.337796 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd-config-volume\") pod \"collect-profiles-29522430-pfscz\" (UID: \"7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-pfscz" Feb 17 16:30:00 crc kubenswrapper[4874]: I0217 16:30:00.338130 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd-secret-volume\") pod \"collect-profiles-29522430-pfscz\" (UID: \"7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-pfscz" Feb 17 16:30:00 crc kubenswrapper[4874]: I0217 16:30:00.338209 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zgj9\" (UniqueName: \"kubernetes.io/projected/7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd-kube-api-access-7zgj9\") pod \"collect-profiles-29522430-pfscz\" (UID: \"7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-pfscz" Feb 17 16:30:00 crc kubenswrapper[4874]: I0217 16:30:00.339375 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd-config-volume\") pod \"collect-profiles-29522430-pfscz\" (UID: \"7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-pfscz" Feb 17 16:30:00 crc kubenswrapper[4874]: I0217 16:30:00.344126 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd-secret-volume\") pod \"collect-profiles-29522430-pfscz\" (UID: \"7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-pfscz" Feb 17 16:30:00 crc kubenswrapper[4874]: I0217 16:30:00.354612 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zgj9\" (UniqueName: \"kubernetes.io/projected/7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd-kube-api-access-7zgj9\") pod \"collect-profiles-29522430-pfscz\" (UID: \"7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-pfscz" Feb 17 16:30:00 crc kubenswrapper[4874]: I0217 16:30:00.502647 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-pfscz" Feb 17 16:30:00 crc kubenswrapper[4874]: I0217 16:30:00.987022 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522430-pfscz"] Feb 17 16:30:01 crc kubenswrapper[4874]: I0217 16:30:01.272543 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-pfscz" event={"ID":"7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd","Type":"ContainerStarted","Data":"225f638a14b6136b1e764d41255b2dd4b91adefc0abbc2e1418dfeab2b04a460"} Feb 17 16:30:01 crc kubenswrapper[4874]: I0217 16:30:01.272924 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-pfscz" event={"ID":"7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd","Type":"ContainerStarted","Data":"cc97721b00ed09cc82fb9848206bc32987a839947e30191edaf85712d091956b"} Feb 17 16:30:01 crc kubenswrapper[4874]: I0217 16:30:01.296712 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-pfscz" 
podStartSLOduration=1.296696099 podStartE2EDuration="1.296696099s" podCreationTimestamp="2026-02-17 16:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:30:01.290359943 +0000 UTC m=+1611.584748504" watchObservedRunningTime="2026-02-17 16:30:01.296696099 +0000 UTC m=+1611.591084660" Feb 17 16:30:02 crc kubenswrapper[4874]: I0217 16:30:02.289500 4874 generic.go:334] "Generic (PLEG): container finished" podID="7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd" containerID="225f638a14b6136b1e764d41255b2dd4b91adefc0abbc2e1418dfeab2b04a460" exitCode=0 Feb 17 16:30:02 crc kubenswrapper[4874]: I0217 16:30:02.289560 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-pfscz" event={"ID":"7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd","Type":"ContainerDied","Data":"225f638a14b6136b1e764d41255b2dd4b91adefc0abbc2e1418dfeab2b04a460"} Feb 17 16:30:03 crc kubenswrapper[4874]: I0217 16:30:03.712847 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-pfscz" Feb 17 16:30:03 crc kubenswrapper[4874]: I0217 16:30:03.914646 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd-config-volume\") pod \"7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd\" (UID: \"7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd\") " Feb 17 16:30:03 crc kubenswrapper[4874]: I0217 16:30:03.914715 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd-secret-volume\") pod \"7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd\" (UID: \"7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd\") " Feb 17 16:30:03 crc kubenswrapper[4874]: I0217 16:30:03.914892 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zgj9\" (UniqueName: \"kubernetes.io/projected/7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd-kube-api-access-7zgj9\") pod \"7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd\" (UID: \"7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd\") " Feb 17 16:30:03 crc kubenswrapper[4874]: I0217 16:30:03.916701 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd-config-volume" (OuterVolumeSpecName: "config-volume") pod "7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd" (UID: "7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:30:03 crc kubenswrapper[4874]: I0217 16:30:03.917016 4874 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:03 crc kubenswrapper[4874]: I0217 16:30:03.922060 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd" (UID: "7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:30:03 crc kubenswrapper[4874]: I0217 16:30:03.923274 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd-kube-api-access-7zgj9" (OuterVolumeSpecName: "kube-api-access-7zgj9") pod "7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd" (UID: "7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd"). InnerVolumeSpecName "kube-api-access-7zgj9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:30:04 crc kubenswrapper[4874]: I0217 16:30:04.018575 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zgj9\" (UniqueName: \"kubernetes.io/projected/7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd-kube-api-access-7zgj9\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:04 crc kubenswrapper[4874]: I0217 16:30:04.018626 4874 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:04 crc kubenswrapper[4874]: I0217 16:30:04.315933 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-pfscz" event={"ID":"7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd","Type":"ContainerDied","Data":"cc97721b00ed09cc82fb9848206bc32987a839947e30191edaf85712d091956b"} Feb 17 16:30:04 crc kubenswrapper[4874]: I0217 16:30:04.316322 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc97721b00ed09cc82fb9848206bc32987a839947e30191edaf85712d091956b" Feb 17 16:30:04 crc kubenswrapper[4874]: I0217 16:30:04.316015 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-pfscz" Feb 17 16:30:07 crc kubenswrapper[4874]: E0217 16:30:07.461054 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:30:07 crc kubenswrapper[4874]: E0217 16:30:07.831143 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:30:07 crc kubenswrapper[4874]: E0217 16:30:07.831207 4874 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:30:07 crc kubenswrapper[4874]: E0217 16:30:07.831361 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n699h646hf7h59dhdch5d8h679h9dhdch5c7hd5h5bch655h5bfh674h596h5d6h64dh65bh694h67fh66dh5bdhd7h568h697h58bh5b4h59fh694h584h656q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6zkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(cc29c300-b515-47d8-9326-1839ed7772b4): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:30:07 crc kubenswrapper[4874]: E0217 16:30:07.833115 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:30:19 crc kubenswrapper[4874]: E0217 16:30:19.460691 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:30:21 crc kubenswrapper[4874]: I0217 16:30:21.535355 4874 generic.go:334] "Generic (PLEG): container finished" podID="aafddb04-57ad-45b6-8a34-30898a8bafff" containerID="b2cb79780409d806b4457f08325f6e3c4718c37c9f2e4fb1ef1a3ab769e599c0" exitCode=0 Feb 17 16:30:21 crc kubenswrapper[4874]: I0217 16:30:21.535430 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"aafddb04-57ad-45b6-8a34-30898a8bafff","Type":"ContainerDied","Data":"b2cb79780409d806b4457f08325f6e3c4718c37c9f2e4fb1ef1a3ab769e599c0"} Feb 17 16:30:22 crc kubenswrapper[4874]: E0217 16:30:22.492905 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:30:22 crc kubenswrapper[4874]: I0217 16:30:22.547846 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"aafddb04-57ad-45b6-8a34-30898a8bafff","Type":"ContainerStarted","Data":"2e6385230ec56f66d237ab3800b4fdb538c9defa91366927a7c51fc1cef8e469"} Feb 17 16:30:22 crc kubenswrapper[4874]: I0217 16:30:22.548106 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Feb 17 16:30:22 crc kubenswrapper[4874]: I0217 
16:30:22.576416 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=37.576367279 podStartE2EDuration="37.576367279s" podCreationTimestamp="2026-02-17 16:29:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:30:22.570033532 +0000 UTC m=+1632.864422103" watchObservedRunningTime="2026-02-17 16:30:22.576367279 +0000 UTC m=+1632.870755850" Feb 17 16:30:25 crc kubenswrapper[4874]: I0217 16:30:25.034255 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-jgrrk" podUID="9fdb9bed-5948-4441-a15b-34df4351b88c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 16:30:27 crc kubenswrapper[4874]: I0217 16:30:27.725006 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:30:27 crc kubenswrapper[4874]: I0217 16:30:27.725579 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:30:31 crc kubenswrapper[4874]: E0217 16:30:31.459613 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:30:33 crc kubenswrapper[4874]: E0217 16:30:33.461345 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:30:35 crc kubenswrapper[4874]: I0217 16:30:35.472431 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Feb 17 16:30:35 crc kubenswrapper[4874]: I0217 16:30:35.540536 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:30:39 crc kubenswrapper[4874]: I0217 16:30:39.629945 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="7eb994d5-6ecb-4a2d-bafc-86c9f107802c" containerName="rabbitmq" containerID="cri-o://4b23f8baba9f1aa2ef43c2262a378fd2738a7a42f7e7dfa96e62d4362102dde4" gracePeriod=604796 Feb 17 16:30:43 crc kubenswrapper[4874]: I0217 16:30:43.778465 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="7eb994d5-6ecb-4a2d-bafc-86c9f107802c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Feb 17 16:30:44 crc kubenswrapper[4874]: E0217 16:30:44.459880 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:30:45 crc kubenswrapper[4874]: I0217 16:30:45.868650 4874 scope.go:117] "RemoveContainer" 
containerID="18ec7ab0bd9add1ef4f51bfcd2a4d3060c430cfb4130a2dab2d3a469e25fbb17" Feb 17 16:30:45 crc kubenswrapper[4874]: I0217 16:30:45.942124 4874 scope.go:117] "RemoveContainer" containerID="15990092c92445fbcc169c1a91cef0c94d89178fd15d1a451c63e9fab92a6145" Feb 17 16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.387753 4874 generic.go:334] "Generic (PLEG): container finished" podID="7eb994d5-6ecb-4a2d-bafc-86c9f107802c" containerID="4b23f8baba9f1aa2ef43c2262a378fd2738a7a42f7e7dfa96e62d4362102dde4" exitCode=0 Feb 17 16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.387834 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7eb994d5-6ecb-4a2d-bafc-86c9f107802c","Type":"ContainerDied","Data":"4b23f8baba9f1aa2ef43c2262a378fd2738a7a42f7e7dfa96e62d4362102dde4"} Feb 17 16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.388123 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7eb994d5-6ecb-4a2d-bafc-86c9f107802c","Type":"ContainerDied","Data":"97f0e93e129651a61d084af71536450c5ecce88efe1a23e5f011bf9f6280dbc1"} Feb 17 16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.388138 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97f0e93e129651a61d084af71536450c5ecce88efe1a23e5f011bf9f6280dbc1" Feb 17 16:30:46 crc kubenswrapper[4874]: E0217 16:30:46.459786 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.482352 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.594737 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-rabbitmq-erlang-cookie\") pod \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " Feb 17 16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.594800 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-erlang-cookie-secret\") pod \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " Feb 17 16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.594872 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-config-data\") pod \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " Feb 17 16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.594902 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-server-conf\") pod \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " Feb 17 16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.594984 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-rabbitmq-confd\") pod \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " Feb 17 16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.595096 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-pod-info\") pod \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " Feb 17 16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.595750 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-83b0cf46-3df6-4f4c-aaa1-dad5e35fce43\") pod \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " Feb 17 16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.595851 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wgvh\" (UniqueName: \"kubernetes.io/projected/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-kube-api-access-6wgvh\") pod \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " Feb 17 16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.595884 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-rabbitmq-tls\") pod \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " Feb 17 16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.595921 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-rabbitmq-plugins\") pod \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " Feb 17 16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.596035 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-plugins-conf\") pod \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\" (UID: \"7eb994d5-6ecb-4a2d-bafc-86c9f107802c\") " Feb 17 
16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.596649 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "7eb994d5-6ecb-4a2d-bafc-86c9f107802c" (UID: "7eb994d5-6ecb-4a2d-bafc-86c9f107802c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.597262 4874 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.612624 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-pod-info" (OuterVolumeSpecName: "pod-info") pod "7eb994d5-6ecb-4a2d-bafc-86c9f107802c" (UID: "7eb994d5-6ecb-4a2d-bafc-86c9f107802c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 17 16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.613312 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "7eb994d5-6ecb-4a2d-bafc-86c9f107802c" (UID: "7eb994d5-6ecb-4a2d-bafc-86c9f107802c"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.618110 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "7eb994d5-6ecb-4a2d-bafc-86c9f107802c" (UID: "7eb994d5-6ecb-4a2d-bafc-86c9f107802c"). 
InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.621647 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-kube-api-access-6wgvh" (OuterVolumeSpecName: "kube-api-access-6wgvh") pod "7eb994d5-6ecb-4a2d-bafc-86c9f107802c" (UID: "7eb994d5-6ecb-4a2d-bafc-86c9f107802c"). InnerVolumeSpecName "kube-api-access-6wgvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.625316 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "7eb994d5-6ecb-4a2d-bafc-86c9f107802c" (UID: "7eb994d5-6ecb-4a2d-bafc-86c9f107802c"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.628650 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "7eb994d5-6ecb-4a2d-bafc-86c9f107802c" (UID: "7eb994d5-6ecb-4a2d-bafc-86c9f107802c"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.645702 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-config-data" (OuterVolumeSpecName: "config-data") pod "7eb994d5-6ecb-4a2d-bafc-86c9f107802c" (UID: "7eb994d5-6ecb-4a2d-bafc-86c9f107802c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.655944 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-83b0cf46-3df6-4f4c-aaa1-dad5e35fce43" (OuterVolumeSpecName: "persistence") pod "7eb994d5-6ecb-4a2d-bafc-86c9f107802c" (UID: "7eb994d5-6ecb-4a2d-bafc-86c9f107802c"). InnerVolumeSpecName "pvc-83b0cf46-3df6-4f4c-aaa1-dad5e35fce43". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.679936 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-server-conf" (OuterVolumeSpecName: "server-conf") pod "7eb994d5-6ecb-4a2d-bafc-86c9f107802c" (UID: "7eb994d5-6ecb-4a2d-bafc-86c9f107802c"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.699510 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.699546 4874 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-server-conf\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.699556 4874 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-pod-info\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.699591 4874 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-83b0cf46-3df6-4f4c-aaa1-dad5e35fce43\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-83b0cf46-3df6-4f4c-aaa1-dad5e35fce43\") on node \"crc\" " Feb 17 16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.699603 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wgvh\" (UniqueName: \"kubernetes.io/projected/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-kube-api-access-6wgvh\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.699612 4874 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.699620 4874 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.699627 4874 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.699647 4874 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.747571 4874 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 17 16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.747740 4874 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-83b0cf46-3df6-4f4c-aaa1-dad5e35fce43" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-83b0cf46-3df6-4f4c-aaa1-dad5e35fce43") on node "crc" Feb 17 16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.783651 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "7eb994d5-6ecb-4a2d-bafc-86c9f107802c" (UID: "7eb994d5-6ecb-4a2d-bafc-86c9f107802c"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.802140 4874 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7eb994d5-6ecb-4a2d-bafc-86c9f107802c-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:46 crc kubenswrapper[4874]: I0217 16:30:46.802171 4874 reconciler_common.go:293] "Volume detached for volume \"pvc-83b0cf46-3df6-4f4c-aaa1-dad5e35fce43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-83b0cf46-3df6-4f4c-aaa1-dad5e35fce43\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.396945 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.438705 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.456107 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.467320 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:30:47 crc kubenswrapper[4874]: E0217 16:30:47.467870 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eb994d5-6ecb-4a2d-bafc-86c9f107802c" containerName="rabbitmq" Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.467900 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eb994d5-6ecb-4a2d-bafc-86c9f107802c" containerName="rabbitmq" Feb 17 16:30:47 crc kubenswrapper[4874]: E0217 16:30:47.467939 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eb994d5-6ecb-4a2d-bafc-86c9f107802c" containerName="setup-container" Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.467949 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eb994d5-6ecb-4a2d-bafc-86c9f107802c" containerName="setup-container" Feb 17 16:30:47 crc kubenswrapper[4874]: E0217 16:30:47.467971 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd" containerName="collect-profiles" Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.467979 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd" containerName="collect-profiles" Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.468302 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eb994d5-6ecb-4a2d-bafc-86c9f107802c" containerName="rabbitmq" Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.468336 4874 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd" containerName="collect-profiles" Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.470450 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.508581 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.620768 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7d60895a-5f07-4e03-8f98-dc92137c65d4-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7d60895a-5f07-4e03-8f98-dc92137c65d4\") " pod="openstack/rabbitmq-server-0" Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.621054 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-83b0cf46-3df6-4f4c-aaa1-dad5e35fce43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-83b0cf46-3df6-4f4c-aaa1-dad5e35fce43\") pod \"rabbitmq-server-0\" (UID: \"7d60895a-5f07-4e03-8f98-dc92137c65d4\") " pod="openstack/rabbitmq-server-0" Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.621090 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljn2r\" (UniqueName: \"kubernetes.io/projected/7d60895a-5f07-4e03-8f98-dc92137c65d4-kube-api-access-ljn2r\") pod \"rabbitmq-server-0\" (UID: \"7d60895a-5f07-4e03-8f98-dc92137c65d4\") " pod="openstack/rabbitmq-server-0" Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.621130 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7d60895a-5f07-4e03-8f98-dc92137c65d4-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7d60895a-5f07-4e03-8f98-dc92137c65d4\") " 
pod="openstack/rabbitmq-server-0" Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.621215 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7d60895a-5f07-4e03-8f98-dc92137c65d4-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7d60895a-5f07-4e03-8f98-dc92137c65d4\") " pod="openstack/rabbitmq-server-0" Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.621248 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7d60895a-5f07-4e03-8f98-dc92137c65d4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7d60895a-5f07-4e03-8f98-dc92137c65d4\") " pod="openstack/rabbitmq-server-0" Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.621274 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7d60895a-5f07-4e03-8f98-dc92137c65d4-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7d60895a-5f07-4e03-8f98-dc92137c65d4\") " pod="openstack/rabbitmq-server-0" Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.621316 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7d60895a-5f07-4e03-8f98-dc92137c65d4-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7d60895a-5f07-4e03-8f98-dc92137c65d4\") " pod="openstack/rabbitmq-server-0" Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.621347 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7d60895a-5f07-4e03-8f98-dc92137c65d4-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7d60895a-5f07-4e03-8f98-dc92137c65d4\") " pod="openstack/rabbitmq-server-0" Feb 17 16:30:47 crc 
kubenswrapper[4874]: I0217 16:30:47.621372 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7d60895a-5f07-4e03-8f98-dc92137c65d4-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7d60895a-5f07-4e03-8f98-dc92137c65d4\") " pod="openstack/rabbitmq-server-0" Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.621401 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d60895a-5f07-4e03-8f98-dc92137c65d4-config-data\") pod \"rabbitmq-server-0\" (UID: \"7d60895a-5f07-4e03-8f98-dc92137c65d4\") " pod="openstack/rabbitmq-server-0" Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.724134 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7d60895a-5f07-4e03-8f98-dc92137c65d4-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7d60895a-5f07-4e03-8f98-dc92137c65d4\") " pod="openstack/rabbitmq-server-0" Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.724501 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d60895a-5f07-4e03-8f98-dc92137c65d4-config-data\") pod \"rabbitmq-server-0\" (UID: \"7d60895a-5f07-4e03-8f98-dc92137c65d4\") " pod="openstack/rabbitmq-server-0" Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.724538 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7d60895a-5f07-4e03-8f98-dc92137c65d4-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7d60895a-5f07-4e03-8f98-dc92137c65d4\") " pod="openstack/rabbitmq-server-0" Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.724590 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-83b0cf46-3df6-4f4c-aaa1-dad5e35fce43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-83b0cf46-3df6-4f4c-aaa1-dad5e35fce43\") pod \"rabbitmq-server-0\" (UID: \"7d60895a-5f07-4e03-8f98-dc92137c65d4\") " pod="openstack/rabbitmq-server-0" Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.724614 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljn2r\" (UniqueName: \"kubernetes.io/projected/7d60895a-5f07-4e03-8f98-dc92137c65d4-kube-api-access-ljn2r\") pod \"rabbitmq-server-0\" (UID: \"7d60895a-5f07-4e03-8f98-dc92137c65d4\") " pod="openstack/rabbitmq-server-0" Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.724650 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7d60895a-5f07-4e03-8f98-dc92137c65d4-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7d60895a-5f07-4e03-8f98-dc92137c65d4\") " pod="openstack/rabbitmq-server-0" Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.724776 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7d60895a-5f07-4e03-8f98-dc92137c65d4-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7d60895a-5f07-4e03-8f98-dc92137c65d4\") " pod="openstack/rabbitmq-server-0" Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.724812 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7d60895a-5f07-4e03-8f98-dc92137c65d4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7d60895a-5f07-4e03-8f98-dc92137c65d4\") " pod="openstack/rabbitmq-server-0" Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.724841 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/7d60895a-5f07-4e03-8f98-dc92137c65d4-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7d60895a-5f07-4e03-8f98-dc92137c65d4\") " pod="openstack/rabbitmq-server-0" Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.724883 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7d60895a-5f07-4e03-8f98-dc92137c65d4-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7d60895a-5f07-4e03-8f98-dc92137c65d4\") " pod="openstack/rabbitmq-server-0" Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.724915 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7d60895a-5f07-4e03-8f98-dc92137c65d4-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7d60895a-5f07-4e03-8f98-dc92137c65d4\") " pod="openstack/rabbitmq-server-0" Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.725054 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7d60895a-5f07-4e03-8f98-dc92137c65d4-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7d60895a-5f07-4e03-8f98-dc92137c65d4\") " pod="openstack/rabbitmq-server-0" Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.725401 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d60895a-5f07-4e03-8f98-dc92137c65d4-config-data\") pod \"rabbitmq-server-0\" (UID: \"7d60895a-5f07-4e03-8f98-dc92137c65d4\") " pod="openstack/rabbitmq-server-0" Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.725750 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7d60895a-5f07-4e03-8f98-dc92137c65d4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7d60895a-5f07-4e03-8f98-dc92137c65d4\") " pod="openstack/rabbitmq-server-0" Feb 
17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.729982 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7d60895a-5f07-4e03-8f98-dc92137c65d4-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7d60895a-5f07-4e03-8f98-dc92137c65d4\") " pod="openstack/rabbitmq-server-0" Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.730480 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7d60895a-5f07-4e03-8f98-dc92137c65d4-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7d60895a-5f07-4e03-8f98-dc92137c65d4\") " pod="openstack/rabbitmq-server-0" Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.730979 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7d60895a-5f07-4e03-8f98-dc92137c65d4-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7d60895a-5f07-4e03-8f98-dc92137c65d4\") " pod="openstack/rabbitmq-server-0" Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.731621 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7d60895a-5f07-4e03-8f98-dc92137c65d4-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7d60895a-5f07-4e03-8f98-dc92137c65d4\") " pod="openstack/rabbitmq-server-0" Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.733438 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7d60895a-5f07-4e03-8f98-dc92137c65d4-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7d60895a-5f07-4e03-8f98-dc92137c65d4\") " pod="openstack/rabbitmq-server-0" Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.753538 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljn2r\" (UniqueName: 
\"kubernetes.io/projected/7d60895a-5f07-4e03-8f98-dc92137c65d4-kube-api-access-ljn2r\") pod \"rabbitmq-server-0\" (UID: \"7d60895a-5f07-4e03-8f98-dc92137c65d4\") " pod="openstack/rabbitmq-server-0" Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.757207 4874 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.757290 4874 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-83b0cf46-3df6-4f4c-aaa1-dad5e35fce43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-83b0cf46-3df6-4f4c-aaa1-dad5e35fce43\") pod \"rabbitmq-server-0\" (UID: \"7d60895a-5f07-4e03-8f98-dc92137c65d4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e362c98f195cf3c54688be96913e676fce2e6ab946b229430e7647a6c41b42f7/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.769759 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7d60895a-5f07-4e03-8f98-dc92137c65d4-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7d60895a-5f07-4e03-8f98-dc92137c65d4\") " pod="openstack/rabbitmq-server-0" Feb 17 16:30:47 crc kubenswrapper[4874]: I0217 16:30:47.840570 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-83b0cf46-3df6-4f4c-aaa1-dad5e35fce43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-83b0cf46-3df6-4f4c-aaa1-dad5e35fce43\") pod \"rabbitmq-server-0\" (UID: \"7d60895a-5f07-4e03-8f98-dc92137c65d4\") " pod="openstack/rabbitmq-server-0" Feb 17 16:30:48 crc kubenswrapper[4874]: I0217 16:30:48.097675 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 16:30:48 crc kubenswrapper[4874]: I0217 16:30:48.474508 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7eb994d5-6ecb-4a2d-bafc-86c9f107802c" path="/var/lib/kubelet/pods/7eb994d5-6ecb-4a2d-bafc-86c9f107802c/volumes" Feb 17 16:30:48 crc kubenswrapper[4874]: I0217 16:30:48.800378 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:30:49 crc kubenswrapper[4874]: I0217 16:30:49.422847 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7d60895a-5f07-4e03-8f98-dc92137c65d4","Type":"ContainerStarted","Data":"9f2e1dd7fdd44abd4a7f370c5d9b2ae9cff46c044f78e29ddcba4fffa830883e"} Feb 17 16:30:51 crc kubenswrapper[4874]: I0217 16:30:51.446808 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7d60895a-5f07-4e03-8f98-dc92137c65d4","Type":"ContainerStarted","Data":"cccc2b97ebcf5483c1c37afea8255ebe085e563c1cea44fdd1da09e5539cfac9"} Feb 17 16:30:57 crc kubenswrapper[4874]: I0217 16:30:57.725892 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:30:57 crc kubenswrapper[4874]: I0217 16:30:57.726288 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:30:57 crc kubenswrapper[4874]: I0217 16:30:57.727170 4874 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-cccdg" Feb 17 16:30:57 crc kubenswrapper[4874]: I0217 16:30:57.728155 4874 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ae76b5b70958fa4d01ecf513ee19ba4d16d886491645f525975afe4bbcc5438e"} pod="openshift-machine-config-operator/machine-config-daemon-cccdg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:30:57 crc kubenswrapper[4874]: I0217 16:30:57.728257 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" containerID="cri-o://ae76b5b70958fa4d01ecf513ee19ba4d16d886491645f525975afe4bbcc5438e" gracePeriod=600 Feb 17 16:30:58 crc kubenswrapper[4874]: E0217 16:30:58.414936 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:30:58 crc kubenswrapper[4874]: I0217 16:30:58.561292 4874 generic.go:334] "Generic (PLEG): container finished" podID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerID="ae76b5b70958fa4d01ecf513ee19ba4d16d886491645f525975afe4bbcc5438e" exitCode=0 Feb 17 16:30:58 crc kubenswrapper[4874]: I0217 16:30:58.561348 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerDied","Data":"ae76b5b70958fa4d01ecf513ee19ba4d16d886491645f525975afe4bbcc5438e"} Feb 17 16:30:58 crc 
kubenswrapper[4874]: I0217 16:30:58.561387 4874 scope.go:117] "RemoveContainer" containerID="1b677238bae66091e799ac761dff49995ef4eec3d7982bfb5c634aa828596a1e"
Feb 17 16:30:58 crc kubenswrapper[4874]: I0217 16:30:58.562496 4874 scope.go:117] "RemoveContainer" containerID="ae76b5b70958fa4d01ecf513ee19ba4d16d886491645f525975afe4bbcc5438e"
Feb 17 16:30:58 crc kubenswrapper[4874]: E0217 16:30:58.562856 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39"
Feb 17 16:30:59 crc kubenswrapper[4874]: E0217 16:30:59.459771 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4"
Feb 17 16:30:59 crc kubenswrapper[4874]: E0217 16:30:59.459774 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d"
Feb 17 16:31:11 crc kubenswrapper[4874]: I0217 16:31:11.456981 4874 scope.go:117] "RemoveContainer" containerID="ae76b5b70958fa4d01ecf513ee19ba4d16d886491645f525975afe4bbcc5438e"
Feb 17 16:31:11 crc kubenswrapper[4874]: E0217 16:31:11.457778 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39"
Feb 17 16:31:13 crc kubenswrapper[4874]: E0217 16:31:13.459862 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d"
Feb 17 16:31:14 crc kubenswrapper[4874]: E0217 16:31:14.459698 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4"
Feb 17 16:31:23 crc kubenswrapper[4874]: I0217 16:31:23.899252 4874 generic.go:334] "Generic (PLEG): container finished" podID="7d60895a-5f07-4e03-8f98-dc92137c65d4" containerID="cccc2b97ebcf5483c1c37afea8255ebe085e563c1cea44fdd1da09e5539cfac9" exitCode=0
Feb 17 16:31:23 crc kubenswrapper[4874]: I0217 16:31:23.899373 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7d60895a-5f07-4e03-8f98-dc92137c65d4","Type":"ContainerDied","Data":"cccc2b97ebcf5483c1c37afea8255ebe085e563c1cea44fdd1da09e5539cfac9"}
Feb 17 16:31:24 crc kubenswrapper[4874]: I0217 16:31:24.918856 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7d60895a-5f07-4e03-8f98-dc92137c65d4","Type":"ContainerStarted","Data":"bd6cbdac9d7c4b7e68fbdb1c4b14ad4dcf98c0029730f8464c581cdaf62b5718"}
Feb 17 16:31:24 crc kubenswrapper[4874]: I0217 16:31:24.919500 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Feb 17 16:31:24 crc kubenswrapper[4874]: I0217 16:31:24.965644 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.965623562 podStartE2EDuration="37.965623562s" podCreationTimestamp="2026-02-17 16:30:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:31:24.95661047 +0000 UTC m=+1695.250999051" watchObservedRunningTime="2026-02-17 16:31:24.965623562 +0000 UTC m=+1695.260012133"
Feb 17 16:31:26 crc kubenswrapper[4874]: I0217 16:31:26.458063 4874 scope.go:117] "RemoveContainer" containerID="ae76b5b70958fa4d01ecf513ee19ba4d16d886491645f525975afe4bbcc5438e"
Feb 17 16:31:26 crc kubenswrapper[4874]: E0217 16:31:26.458653 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39"
Feb 17 16:31:27 crc kubenswrapper[4874]: E0217 16:31:27.598378 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 17 16:31:27 crc kubenswrapper[4874]: E0217 16:31:27.598483 4874 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 17 16:31:27 crc kubenswrapper[4874]: E0217 16:31:27.598688 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtgnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-ddhb8_openstack(122736d5-78f5-42dc-b6ab-343724bac19d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 17 16:31:27 crc kubenswrapper[4874]: E0217 16:31:27.599969 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d"
Feb 17 16:31:28 crc kubenswrapper[4874]: E0217 16:31:28.585822 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 17 16:31:28 crc kubenswrapper[4874]: E0217 16:31:28.586261 4874 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 17 16:31:28 crc kubenswrapper[4874]: E0217 16:31:28.586445 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n699h646hf7h59dhdch5d8h679h9dhdch5c7hd5h5bch655h5bfh674h596h5d6h64dh65bh694h67fh66dh5bdhd7h568h697h58bh5b4h59fh694h584h656q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6zkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(cc29c300-b515-47d8-9326-1839ed7772b4): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 17 16:31:28 crc kubenswrapper[4874]: E0217 16:31:28.587539 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4"
Feb 17 16:31:38 crc kubenswrapper[4874]: I0217 16:31:38.101309 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Feb 17 16:31:38 crc kubenswrapper[4874]: I0217 16:31:38.457399 4874 scope.go:117] "RemoveContainer" containerID="ae76b5b70958fa4d01ecf513ee19ba4d16d886491645f525975afe4bbcc5438e"
Feb 17 16:31:38 crc kubenswrapper[4874]: E0217 16:31:38.457888 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39"
Feb 17 16:31:39 crc kubenswrapper[4874]: E0217 16:31:39.460025 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d"
Feb 17 16:31:41 crc kubenswrapper[4874]: E0217 16:31:41.460398 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4"
Feb 17 16:31:46 crc kubenswrapper[4874]: I0217 16:31:46.085960 4874 scope.go:117] "RemoveContainer" containerID="881abe776c2cd1d2fab8953a0b4b3a0b79ac042390adfc97d6a55128a6da4f1f"
Feb 17 16:31:46 crc kubenswrapper[4874]: I0217 16:31:46.115638 4874 scope.go:117] "RemoveContainer" containerID="f2e33688b30c443d773430732d2e5c8308fe165bbe45e37c812a398e3815bcbc"
Feb 17 16:31:46 crc kubenswrapper[4874]: I0217 16:31:46.142693 4874 scope.go:117] "RemoveContainer" containerID="a034db3c1ea552620fa0691a9a874a0d6c8f47608b6b427f485aa0e509c86b20"
Feb 17 16:31:46 crc kubenswrapper[4874]: I0217 16:31:46.175911 4874 scope.go:117] "RemoveContainer" containerID="77ef09ba26fdd2e92436f06fe8cd8993b60b4e40e13de49726732fd41ac660e4"
Feb 17 16:31:46 crc kubenswrapper[4874]: I0217 16:31:46.245945 4874 scope.go:117] "RemoveContainer" containerID="4b23f8baba9f1aa2ef43c2262a378fd2738a7a42f7e7dfa96e62d4362102dde4"
Feb 17 16:31:51 crc kubenswrapper[4874]: I0217 16:31:51.457853 4874 scope.go:117] "RemoveContainer" containerID="ae76b5b70958fa4d01ecf513ee19ba4d16d886491645f525975afe4bbcc5438e"
Feb 17 16:31:51 crc kubenswrapper[4874]: E0217 16:31:51.458590 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39"
Feb 17 16:31:54 crc kubenswrapper[4874]: E0217 16:31:54.462069 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d"
Feb 17 16:31:56 crc kubenswrapper[4874]: E0217 16:31:56.458773 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4"
Feb 17 16:32:05 crc kubenswrapper[4874]: I0217 16:32:05.457559 4874 scope.go:117] "RemoveContainer" containerID="ae76b5b70958fa4d01ecf513ee19ba4d16d886491645f525975afe4bbcc5438e"
Feb 17 16:32:05 crc kubenswrapper[4874]: E0217 16:32:05.458620 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39"
Feb 17 16:32:06 crc kubenswrapper[4874]: E0217 16:32:06.460665 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d"
Feb 17 16:32:09 crc kubenswrapper[4874]: E0217 16:32:09.468845 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4"
Feb 17 16:32:16 crc kubenswrapper[4874]: I0217 16:32:16.459111 4874 scope.go:117] "RemoveContainer" containerID="ae76b5b70958fa4d01ecf513ee19ba4d16d886491645f525975afe4bbcc5438e"
Feb 17 16:32:16 crc kubenswrapper[4874]: E0217 16:32:16.460850 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39"
Feb 17 16:32:17 crc kubenswrapper[4874]: E0217 16:32:17.461972 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d"
Feb 17 16:32:24 crc kubenswrapper[4874]: E0217 16:32:24.462069 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4"
Feb 17 16:32:28 crc kubenswrapper[4874]: E0217 16:32:28.459920 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d"
Feb 17 16:32:31 crc kubenswrapper[4874]: I0217 16:32:31.457273 4874 scope.go:117] "RemoveContainer" containerID="ae76b5b70958fa4d01ecf513ee19ba4d16d886491645f525975afe4bbcc5438e"
Feb 17 16:32:31 crc kubenswrapper[4874]: E0217 16:32:31.457961 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39"
Feb 17 16:32:38 crc kubenswrapper[4874]: E0217 16:32:38.460098 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4"
Feb 17 16:32:41 crc kubenswrapper[4874]: E0217 16:32:41.460291 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d"
Feb 17 16:32:46 crc kubenswrapper[4874]: I0217 16:32:46.458500 4874 scope.go:117] "RemoveContainer" containerID="ae76b5b70958fa4d01ecf513ee19ba4d16d886491645f525975afe4bbcc5438e"
Feb 17 16:32:46 crc kubenswrapper[4874]: E0217 16:32:46.459511 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39"
Feb 17 16:32:49 crc kubenswrapper[4874]: E0217 16:32:49.458852 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4"
Feb 17 16:32:56 crc kubenswrapper[4874]: E0217 16:32:56.459619 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d"
Feb 17 16:33:01 crc kubenswrapper[4874]: I0217 16:33:01.457341 4874 scope.go:117] "RemoveContainer" containerID="ae76b5b70958fa4d01ecf513ee19ba4d16d886491645f525975afe4bbcc5438e"
Feb 17 16:33:01 crc kubenswrapper[4874]: E0217 16:33:01.458098 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39"
Feb 17 16:33:01 crc kubenswrapper[4874]: E0217 16:33:01.461107 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4"
Feb 17 16:33:02 crc kubenswrapper[4874]: I0217 16:33:02.208679 4874 generic.go:334] "Generic (PLEG): container finished" podID="e27c106f-e640-4b2b-aab8-785a2bcb1624" containerID="83593ddabac5b9290a10c09071dda694cf057f63163244e86871ff4eaf5d4cee" exitCode=0
Feb 17 16:33:02 crc kubenswrapper[4874]: I0217 16:33:02.208723 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tgzkz" event={"ID":"e27c106f-e640-4b2b-aab8-785a2bcb1624","Type":"ContainerDied","Data":"83593ddabac5b9290a10c09071dda694cf057f63163244e86871ff4eaf5d4cee"}
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.129323 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tgzkz"
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.224739 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ng5zf\" (UniqueName: \"kubernetes.io/projected/e27c106f-e640-4b2b-aab8-785a2bcb1624-kube-api-access-ng5zf\") pod \"e27c106f-e640-4b2b-aab8-785a2bcb1624\" (UID: \"e27c106f-e640-4b2b-aab8-785a2bcb1624\") "
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.224798 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e27c106f-e640-4b2b-aab8-785a2bcb1624-inventory\") pod \"e27c106f-e640-4b2b-aab8-785a2bcb1624\" (UID: \"e27c106f-e640-4b2b-aab8-785a2bcb1624\") "
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.224898 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e27c106f-e640-4b2b-aab8-785a2bcb1624-bootstrap-combined-ca-bundle\") pod \"e27c106f-e640-4b2b-aab8-785a2bcb1624\" (UID: \"e27c106f-e640-4b2b-aab8-785a2bcb1624\") "
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.224917 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e27c106f-e640-4b2b-aab8-785a2bcb1624-ssh-key-openstack-edpm-ipam\") pod \"e27c106f-e640-4b2b-aab8-785a2bcb1624\" (UID: \"e27c106f-e640-4b2b-aab8-785a2bcb1624\") "
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.231526 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tgzkz" event={"ID":"e27c106f-e640-4b2b-aab8-785a2bcb1624","Type":"ContainerDied","Data":"0aa36d608f63798486aeb9916518f1bf89a92ad92d395940f65b53550c02b685"}
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.231562 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0aa36d608f63798486aeb9916518f1bf89a92ad92d395940f65b53550c02b685"
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.231572 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tgzkz"
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.231977 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e27c106f-e640-4b2b-aab8-785a2bcb1624-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "e27c106f-e640-4b2b-aab8-785a2bcb1624" (UID: "e27c106f-e640-4b2b-aab8-785a2bcb1624"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.233893 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e27c106f-e640-4b2b-aab8-785a2bcb1624-kube-api-access-ng5zf" (OuterVolumeSpecName: "kube-api-access-ng5zf") pod "e27c106f-e640-4b2b-aab8-785a2bcb1624" (UID: "e27c106f-e640-4b2b-aab8-785a2bcb1624"). InnerVolumeSpecName "kube-api-access-ng5zf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.294108 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e27c106f-e640-4b2b-aab8-785a2bcb1624-inventory" (OuterVolumeSpecName: "inventory") pod "e27c106f-e640-4b2b-aab8-785a2bcb1624" (UID: "e27c106f-e640-4b2b-aab8-785a2bcb1624"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.314353 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e27c106f-e640-4b2b-aab8-785a2bcb1624-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e27c106f-e640-4b2b-aab8-785a2bcb1624" (UID: "e27c106f-e640-4b2b-aab8-785a2bcb1624"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.327872 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ng5zf\" (UniqueName: \"kubernetes.io/projected/e27c106f-e640-4b2b-aab8-785a2bcb1624-kube-api-access-ng5zf\") on node \"crc\" DevicePath \"\""
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.327910 4874 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e27c106f-e640-4b2b-aab8-785a2bcb1624-inventory\") on node \"crc\" DevicePath \"\""
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.327923 4874 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e27c106f-e640-4b2b-aab8-785a2bcb1624-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.327936 4874 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e27c106f-e640-4b2b-aab8-785a2bcb1624-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.337195 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfn67"]
Feb 17 16:33:04 crc kubenswrapper[4874]: E0217 16:33:04.337702 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e27c106f-e640-4b2b-aab8-785a2bcb1624" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.337720 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="e27c106f-e640-4b2b-aab8-785a2bcb1624" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.337975 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="e27c106f-e640-4b2b-aab8-785a2bcb1624" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.338808 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfn67"
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.402994 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfn67"]
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.429774 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwhfr\" (UniqueName: \"kubernetes.io/projected/2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c-kube-api-access-fwhfr\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pfn67\" (UID: \"2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfn67"
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.429833 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pfn67\" (UID: \"2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfn67"
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.430316 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pfn67\" (UID: \"2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfn67"
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.499380 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9jlmf"]
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.502525 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9jlmf"
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.515452 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9jlmf"]
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.532024 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pfn67\" (UID: \"2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfn67"
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.532145 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwhfr\" (UniqueName: \"kubernetes.io/projected/2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c-kube-api-access-fwhfr\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pfn67\" (UID: \"2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfn67"
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.532188 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pfn67\" (UID: \"2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfn67"
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.537176 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pfn67\" (UID: \"2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfn67"
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.539802 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pfn67\" (UID: \"2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfn67"
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.555877 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwhfr\" (UniqueName: \"kubernetes.io/projected/2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c-kube-api-access-fwhfr\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pfn67\" (UID: \"2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfn67"
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.636640 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecf6036a-e287-4aca-a64b-6fc968b5c915-utilities\") pod \"certified-operators-9jlmf\" (UID: \"ecf6036a-e287-4aca-a64b-6fc968b5c915\") " pod="openshift-marketplace/certified-operators-9jlmf"
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.637318 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecf6036a-e287-4aca-a64b-6fc968b5c915-catalog-content\") pod \"certified-operators-9jlmf\" (UID: \"ecf6036a-e287-4aca-a64b-6fc968b5c915\") " pod="openshift-marketplace/certified-operators-9jlmf"
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.637364 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzv24\" (UniqueName: \"kubernetes.io/projected/ecf6036a-e287-4aca-a64b-6fc968b5c915-kube-api-access-wzv24\") pod \"certified-operators-9jlmf\" (UID: \"ecf6036a-e287-4aca-a64b-6fc968b5c915\") " pod="openshift-marketplace/certified-operators-9jlmf"
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.690300 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bzhq2"]
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.693174 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bzhq2"
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.722126 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfn67"
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.726420 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bzhq2"]
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.740308 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecf6036a-e287-4aca-a64b-6fc968b5c915-utilities\") pod \"certified-operators-9jlmf\" (UID: \"ecf6036a-e287-4aca-a64b-6fc968b5c915\") " pod="openshift-marketplace/certified-operators-9jlmf"
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.740471 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecf6036a-e287-4aca-a64b-6fc968b5c915-catalog-content\") pod \"certified-operators-9jlmf\" (UID: \"ecf6036a-e287-4aca-a64b-6fc968b5c915\") " pod="openshift-marketplace/certified-operators-9jlmf"
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.740521 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzv24\" (UniqueName: \"kubernetes.io/projected/ecf6036a-e287-4aca-a64b-6fc968b5c915-kube-api-access-wzv24\") pod \"certified-operators-9jlmf\" (UID: \"ecf6036a-e287-4aca-a64b-6fc968b5c915\") " pod="openshift-marketplace/certified-operators-9jlmf"
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.740837 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecf6036a-e287-4aca-a64b-6fc968b5c915-utilities\") pod \"certified-operators-9jlmf\" (UID: \"ecf6036a-e287-4aca-a64b-6fc968b5c915\") " pod="openshift-marketplace/certified-operators-9jlmf"
Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.741116 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\"
(UniqueName: \"kubernetes.io/empty-dir/ecf6036a-e287-4aca-a64b-6fc968b5c915-catalog-content\") pod \"certified-operators-9jlmf\" (UID: \"ecf6036a-e287-4aca-a64b-6fc968b5c915\") " pod="openshift-marketplace/certified-operators-9jlmf" Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.765904 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzv24\" (UniqueName: \"kubernetes.io/projected/ecf6036a-e287-4aca-a64b-6fc968b5c915-kube-api-access-wzv24\") pod \"certified-operators-9jlmf\" (UID: \"ecf6036a-e287-4aca-a64b-6fc968b5c915\") " pod="openshift-marketplace/certified-operators-9jlmf" Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.828661 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9jlmf" Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.890617 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9swdb\" (UniqueName: \"kubernetes.io/projected/996859f2-a1cc-42e5-9ea0-45f26ae8fde3-kube-api-access-9swdb\") pod \"community-operators-bzhq2\" (UID: \"996859f2-a1cc-42e5-9ea0-45f26ae8fde3\") " pod="openshift-marketplace/community-operators-bzhq2" Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.891376 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/996859f2-a1cc-42e5-9ea0-45f26ae8fde3-utilities\") pod \"community-operators-bzhq2\" (UID: \"996859f2-a1cc-42e5-9ea0-45f26ae8fde3\") " pod="openshift-marketplace/community-operators-bzhq2" Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.891553 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/996859f2-a1cc-42e5-9ea0-45f26ae8fde3-catalog-content\") pod \"community-operators-bzhq2\" (UID: 
\"996859f2-a1cc-42e5-9ea0-45f26ae8fde3\") " pod="openshift-marketplace/community-operators-bzhq2" Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.996364 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9swdb\" (UniqueName: \"kubernetes.io/projected/996859f2-a1cc-42e5-9ea0-45f26ae8fde3-kube-api-access-9swdb\") pod \"community-operators-bzhq2\" (UID: \"996859f2-a1cc-42e5-9ea0-45f26ae8fde3\") " pod="openshift-marketplace/community-operators-bzhq2" Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.996556 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/996859f2-a1cc-42e5-9ea0-45f26ae8fde3-utilities\") pod \"community-operators-bzhq2\" (UID: \"996859f2-a1cc-42e5-9ea0-45f26ae8fde3\") " pod="openshift-marketplace/community-operators-bzhq2" Feb 17 16:33:04 crc kubenswrapper[4874]: I0217 16:33:04.996585 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/996859f2-a1cc-42e5-9ea0-45f26ae8fde3-catalog-content\") pod \"community-operators-bzhq2\" (UID: \"996859f2-a1cc-42e5-9ea0-45f26ae8fde3\") " pod="openshift-marketplace/community-operators-bzhq2" Feb 17 16:33:05 crc kubenswrapper[4874]: I0217 16:33:05.003742 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/996859f2-a1cc-42e5-9ea0-45f26ae8fde3-utilities\") pod \"community-operators-bzhq2\" (UID: \"996859f2-a1cc-42e5-9ea0-45f26ae8fde3\") " pod="openshift-marketplace/community-operators-bzhq2" Feb 17 16:33:05 crc kubenswrapper[4874]: I0217 16:33:05.004012 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/996859f2-a1cc-42e5-9ea0-45f26ae8fde3-catalog-content\") pod \"community-operators-bzhq2\" (UID: \"996859f2-a1cc-42e5-9ea0-45f26ae8fde3\") 
" pod="openshift-marketplace/community-operators-bzhq2" Feb 17 16:33:05 crc kubenswrapper[4874]: I0217 16:33:05.036951 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9swdb\" (UniqueName: \"kubernetes.io/projected/996859f2-a1cc-42e5-9ea0-45f26ae8fde3-kube-api-access-9swdb\") pod \"community-operators-bzhq2\" (UID: \"996859f2-a1cc-42e5-9ea0-45f26ae8fde3\") " pod="openshift-marketplace/community-operators-bzhq2" Feb 17 16:33:05 crc kubenswrapper[4874]: I0217 16:33:05.329665 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bzhq2" Feb 17 16:33:05 crc kubenswrapper[4874]: I0217 16:33:05.348110 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfn67"] Feb 17 16:33:05 crc kubenswrapper[4874]: I0217 16:33:05.453793 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9jlmf"] Feb 17 16:33:05 crc kubenswrapper[4874]: I0217 16:33:05.922919 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bzhq2"] Feb 17 16:33:06 crc kubenswrapper[4874]: I0217 16:33:06.268414 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzhq2" event={"ID":"996859f2-a1cc-42e5-9ea0-45f26ae8fde3","Type":"ContainerStarted","Data":"1a8f348406fb99ebf268dae8d9313c76a1ff55a9c4399d83ac415adbfd7f8603"} Feb 17 16:33:06 crc kubenswrapper[4874]: I0217 16:33:06.270923 4874 generic.go:334] "Generic (PLEG): container finished" podID="ecf6036a-e287-4aca-a64b-6fc968b5c915" containerID="dedb3f4a065ecc44351d6d1c3ca744d0b7f5e65517c508d9199032da1dcf37c2" exitCode=0 Feb 17 16:33:06 crc kubenswrapper[4874]: I0217 16:33:06.271253 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jlmf" 
event={"ID":"ecf6036a-e287-4aca-a64b-6fc968b5c915","Type":"ContainerDied","Data":"dedb3f4a065ecc44351d6d1c3ca744d0b7f5e65517c508d9199032da1dcf37c2"} Feb 17 16:33:06 crc kubenswrapper[4874]: I0217 16:33:06.271416 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jlmf" event={"ID":"ecf6036a-e287-4aca-a64b-6fc968b5c915","Type":"ContainerStarted","Data":"3a7ac4d1aff08bbd1acdd014ba60648cef0256dce18052fb46fea9d2938a9e03"} Feb 17 16:33:06 crc kubenswrapper[4874]: I0217 16:33:06.290241 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfn67" event={"ID":"2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c","Type":"ContainerStarted","Data":"a9a4784f48fa660bc0cc8eead830543fe7bd0f215619e4b18ae3cc3d86c0b0ac"} Feb 17 16:33:07 crc kubenswrapper[4874]: I0217 16:33:07.302428 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfn67" event={"ID":"2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c","Type":"ContainerStarted","Data":"a0c3c013513f32280ffde5ee0ef69a1e4ac5611dd3e7d50777cff55a4bb0ff33"} Feb 17 16:33:07 crc kubenswrapper[4874]: I0217 16:33:07.305759 4874 generic.go:334] "Generic (PLEG): container finished" podID="996859f2-a1cc-42e5-9ea0-45f26ae8fde3" containerID="0d11a69ff51dbf7a0dcd5b598a163516d2ec2c782fb5846e593fb53ce16a47c6" exitCode=0 Feb 17 16:33:07 crc kubenswrapper[4874]: I0217 16:33:07.305791 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzhq2" event={"ID":"996859f2-a1cc-42e5-9ea0-45f26ae8fde3","Type":"ContainerDied","Data":"0d11a69ff51dbf7a0dcd5b598a163516d2ec2c782fb5846e593fb53ce16a47c6"} Feb 17 16:33:07 crc kubenswrapper[4874]: I0217 16:33:07.322374 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfn67" podStartSLOduration=2.839381311 
podStartE2EDuration="3.322356374s" podCreationTimestamp="2026-02-17 16:33:04 +0000 UTC" firstStartedPulling="2026-02-17 16:33:05.368317586 +0000 UTC m=+1795.662706147" lastFinishedPulling="2026-02-17 16:33:05.851292649 +0000 UTC m=+1796.145681210" observedRunningTime="2026-02-17 16:33:07.314141381 +0000 UTC m=+1797.608529972" watchObservedRunningTime="2026-02-17 16:33:07.322356374 +0000 UTC m=+1797.616744945" Feb 17 16:33:07 crc kubenswrapper[4874]: E0217 16:33:07.460482 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:33:08 crc kubenswrapper[4874]: I0217 16:33:08.323638 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jlmf" event={"ID":"ecf6036a-e287-4aca-a64b-6fc968b5c915","Type":"ContainerStarted","Data":"b8524f7ffc9ac48b81c3a8530fbb7651039d87a4b33026ea226b19b8af2ef23e"} Feb 17 16:33:09 crc kubenswrapper[4874]: I0217 16:33:09.340571 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzhq2" event={"ID":"996859f2-a1cc-42e5-9ea0-45f26ae8fde3","Type":"ContainerStarted","Data":"091afd37eef63c39f0f63fd12bd8e9fd9b5ed18a073edfd3ff9685dcf3bf94f7"} Feb 17 16:33:10 crc kubenswrapper[4874]: I0217 16:33:10.353276 4874 generic.go:334] "Generic (PLEG): container finished" podID="ecf6036a-e287-4aca-a64b-6fc968b5c915" containerID="b8524f7ffc9ac48b81c3a8530fbb7651039d87a4b33026ea226b19b8af2ef23e" exitCode=0 Feb 17 16:33:10 crc kubenswrapper[4874]: I0217 16:33:10.353354 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jlmf" 
event={"ID":"ecf6036a-e287-4aca-a64b-6fc968b5c915","Type":"ContainerDied","Data":"b8524f7ffc9ac48b81c3a8530fbb7651039d87a4b33026ea226b19b8af2ef23e"} Feb 17 16:33:11 crc kubenswrapper[4874]: I0217 16:33:11.372286 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jlmf" event={"ID":"ecf6036a-e287-4aca-a64b-6fc968b5c915","Type":"ContainerStarted","Data":"1d618e2f482289661c89396a6450fd6e15964ec1886258b9b57601b293384616"} Feb 17 16:33:11 crc kubenswrapper[4874]: I0217 16:33:11.375182 4874 generic.go:334] "Generic (PLEG): container finished" podID="996859f2-a1cc-42e5-9ea0-45f26ae8fde3" containerID="091afd37eef63c39f0f63fd12bd8e9fd9b5ed18a073edfd3ff9685dcf3bf94f7" exitCode=0 Feb 17 16:33:11 crc kubenswrapper[4874]: I0217 16:33:11.375228 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzhq2" event={"ID":"996859f2-a1cc-42e5-9ea0-45f26ae8fde3","Type":"ContainerDied","Data":"091afd37eef63c39f0f63fd12bd8e9fd9b5ed18a073edfd3ff9685dcf3bf94f7"} Feb 17 16:33:11 crc kubenswrapper[4874]: I0217 16:33:11.377324 4874 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:33:11 crc kubenswrapper[4874]: I0217 16:33:11.395115 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9jlmf" podStartSLOduration=2.625858457 podStartE2EDuration="7.395069528s" podCreationTimestamp="2026-02-17 16:33:04 +0000 UTC" firstStartedPulling="2026-02-17 16:33:06.274894405 +0000 UTC m=+1796.569282956" lastFinishedPulling="2026-02-17 16:33:11.044105466 +0000 UTC m=+1801.338494027" observedRunningTime="2026-02-17 16:33:11.389267764 +0000 UTC m=+1801.683656335" watchObservedRunningTime="2026-02-17 16:33:11.395069528 +0000 UTC m=+1801.689458099" Feb 17 16:33:11 crc kubenswrapper[4874]: E0217 16:33:11.619950 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" 
err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod996859f2_a1cc_42e5_9ea0_45f26ae8fde3.slice/crio-conmon-091afd37eef63c39f0f63fd12bd8e9fd9b5ed18a073edfd3ff9685dcf3bf94f7.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:33:12 crc kubenswrapper[4874]: I0217 16:33:12.389736 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzhq2" event={"ID":"996859f2-a1cc-42e5-9ea0-45f26ae8fde3","Type":"ContainerStarted","Data":"e1f8a881d71699ed6dcdfac3298c06079d979616afb39c3977259e014ffab4f6"} Feb 17 16:33:12 crc kubenswrapper[4874]: I0217 16:33:12.416242 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bzhq2" podStartSLOduration=3.9260621799999997 podStartE2EDuration="8.416219567s" podCreationTimestamp="2026-02-17 16:33:04 +0000 UTC" firstStartedPulling="2026-02-17 16:33:07.309804504 +0000 UTC m=+1797.604193065" lastFinishedPulling="2026-02-17 16:33:11.799961851 +0000 UTC m=+1802.094350452" observedRunningTime="2026-02-17 16:33:12.40822635 +0000 UTC m=+1802.702614941" watchObservedRunningTime="2026-02-17 16:33:12.416219567 +0000 UTC m=+1802.710608138" Feb 17 16:33:13 crc kubenswrapper[4874]: E0217 16:33:13.560456 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod996859f2_a1cc_42e5_9ea0_45f26ae8fde3.slice/crio-conmon-091afd37eef63c39f0f63fd12bd8e9fd9b5ed18a073edfd3ff9685dcf3bf94f7.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:33:14 crc kubenswrapper[4874]: I0217 16:33:14.830261 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9jlmf" Feb 17 16:33:14 crc kubenswrapper[4874]: I0217 16:33:14.830330 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-9jlmf" Feb 17 16:33:15 crc kubenswrapper[4874]: I0217 16:33:15.330400 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bzhq2" Feb 17 16:33:15 crc kubenswrapper[4874]: I0217 16:33:15.330442 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bzhq2" Feb 17 16:33:15 crc kubenswrapper[4874]: I0217 16:33:15.879135 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-9jlmf" podUID="ecf6036a-e287-4aca-a64b-6fc968b5c915" containerName="registry-server" probeResult="failure" output=< Feb 17 16:33:15 crc kubenswrapper[4874]: timeout: failed to connect service ":50051" within 1s Feb 17 16:33:15 crc kubenswrapper[4874]: > Feb 17 16:33:16 crc kubenswrapper[4874]: I0217 16:33:16.380953 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-bzhq2" podUID="996859f2-a1cc-42e5-9ea0-45f26ae8fde3" containerName="registry-server" probeResult="failure" output=< Feb 17 16:33:16 crc kubenswrapper[4874]: timeout: failed to connect service ":50051" within 1s Feb 17 16:33:16 crc kubenswrapper[4874]: > Feb 17 16:33:16 crc kubenswrapper[4874]: I0217 16:33:16.458729 4874 scope.go:117] "RemoveContainer" containerID="ae76b5b70958fa4d01ecf513ee19ba4d16d886491645f525975afe4bbcc5438e" Feb 17 16:33:16 crc kubenswrapper[4874]: E0217 16:33:16.459521 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:33:16 crc kubenswrapper[4874]: E0217 
16:33:16.465424 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:33:21 crc kubenswrapper[4874]: E0217 16:33:21.462294 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:33:21 crc kubenswrapper[4874]: E0217 16:33:21.894158 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod996859f2_a1cc_42e5_9ea0_45f26ae8fde3.slice/crio-conmon-091afd37eef63c39f0f63fd12bd8e9fd9b5ed18a073edfd3ff9685dcf3bf94f7.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:33:22 crc kubenswrapper[4874]: I0217 16:33:22.044722 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-rthxd"] Feb 17 16:33:22 crc kubenswrapper[4874]: I0217 16:33:22.055895 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-rthxd"] Feb 17 16:33:22 crc kubenswrapper[4874]: I0217 16:33:22.469056 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35de0e21-b2b6-482c-a5b0-01b20b85fd46" path="/var/lib/kubelet/pods/35de0e21-b2b6-482c-a5b0-01b20b85fd46/volumes" Feb 17 16:33:24 crc kubenswrapper[4874]: I0217 16:33:24.060021 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-gztng"] Feb 17 16:33:24 crc kubenswrapper[4874]: I0217 16:33:24.074933 4874 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-q8x4r"] Feb 17 16:33:24 crc kubenswrapper[4874]: I0217 16:33:24.088929 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-1f5a-account-create-update-c9qms"] Feb 17 16:33:24 crc kubenswrapper[4874]: I0217 16:33:24.100465 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-4j7m8"] Feb 17 16:33:24 crc kubenswrapper[4874]: I0217 16:33:24.124399 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-2582-account-create-update-h89p2"] Feb 17 16:33:24 crc kubenswrapper[4874]: I0217 16:33:24.140443 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-gztng"] Feb 17 16:33:24 crc kubenswrapper[4874]: I0217 16:33:24.152031 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-q8x4r"] Feb 17 16:33:24 crc kubenswrapper[4874]: I0217 16:33:24.163053 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-1f5a-account-create-update-c9qms"] Feb 17 16:33:24 crc kubenswrapper[4874]: I0217 16:33:24.175367 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-2582-account-create-update-h89p2"] Feb 17 16:33:24 crc kubenswrapper[4874]: I0217 16:33:24.185998 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-4j7m8"] Feb 17 16:33:24 crc kubenswrapper[4874]: I0217 16:33:24.195334 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-fcfd-account-create-update-gzpln"] Feb 17 16:33:24 crc kubenswrapper[4874]: I0217 16:33:24.205419 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-fcfd-account-create-update-gzpln"] Feb 17 16:33:24 crc kubenswrapper[4874]: I0217 16:33:24.215799 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-f5c3-account-create-update-s4xs5"] 
Feb 17 16:33:24 crc kubenswrapper[4874]: I0217 16:33:24.228995 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-f5c3-account-create-update-s4xs5"] Feb 17 16:33:24 crc kubenswrapper[4874]: I0217 16:33:24.470163 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16a736ae-9a4f-4803-ade8-2088a03e9b75" path="/var/lib/kubelet/pods/16a736ae-9a4f-4803-ade8-2088a03e9b75/volumes" Feb 17 16:33:24 crc kubenswrapper[4874]: I0217 16:33:24.473245 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39726753-57c2-4de7-91a2-c0f60e799ea9" path="/var/lib/kubelet/pods/39726753-57c2-4de7-91a2-c0f60e799ea9/volumes" Feb 17 16:33:24 crc kubenswrapper[4874]: I0217 16:33:24.473814 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b0a8f96-f93d-4a9f-b191-76cfd2cab069" path="/var/lib/kubelet/pods/5b0a8f96-f93d-4a9f-b191-76cfd2cab069/volumes" Feb 17 16:33:24 crc kubenswrapper[4874]: I0217 16:33:24.474705 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77ba04c2-b4a6-4ce7-b644-2c1a58c18ba3" path="/var/lib/kubelet/pods/77ba04c2-b4a6-4ce7-b644-2c1a58c18ba3/volumes" Feb 17 16:33:24 crc kubenswrapper[4874]: I0217 16:33:24.477164 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a138fbf-e69e-4981-a7f0-b399fbbb7088" path="/var/lib/kubelet/pods/7a138fbf-e69e-4981-a7f0-b399fbbb7088/volumes" Feb 17 16:33:24 crc kubenswrapper[4874]: I0217 16:33:24.478424 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82e5efee-d739-4300-bc49-181df5481246" path="/var/lib/kubelet/pods/82e5efee-d739-4300-bc49-181df5481246/volumes" Feb 17 16:33:24 crc kubenswrapper[4874]: I0217 16:33:24.479469 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af707444-663f-458c-a1a2-88d51f97bc68" path="/var/lib/kubelet/pods/af707444-663f-458c-a1a2-88d51f97bc68/volumes" Feb 17 16:33:25 crc kubenswrapper[4874]: I0217 16:33:25.385653 4874 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bzhq2" Feb 17 16:33:25 crc kubenswrapper[4874]: I0217 16:33:25.433542 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bzhq2" Feb 17 16:33:25 crc kubenswrapper[4874]: I0217 16:33:25.628937 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bzhq2"] Feb 17 16:33:25 crc kubenswrapper[4874]: I0217 16:33:25.893521 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-9jlmf" podUID="ecf6036a-e287-4aca-a64b-6fc968b5c915" containerName="registry-server" probeResult="failure" output=< Feb 17 16:33:25 crc kubenswrapper[4874]: timeout: failed to connect service ":50051" within 1s Feb 17 16:33:25 crc kubenswrapper[4874]: > Feb 17 16:33:26 crc kubenswrapper[4874]: I0217 16:33:26.578272 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bzhq2" podUID="996859f2-a1cc-42e5-9ea0-45f26ae8fde3" containerName="registry-server" containerID="cri-o://e1f8a881d71699ed6dcdfac3298c06079d979616afb39c3977259e014ffab4f6" gracePeriod=2 Feb 17 16:33:27 crc kubenswrapper[4874]: I0217 16:33:27.077494 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bzhq2" Feb 17 16:33:27 crc kubenswrapper[4874]: I0217 16:33:27.241779 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9swdb\" (UniqueName: \"kubernetes.io/projected/996859f2-a1cc-42e5-9ea0-45f26ae8fde3-kube-api-access-9swdb\") pod \"996859f2-a1cc-42e5-9ea0-45f26ae8fde3\" (UID: \"996859f2-a1cc-42e5-9ea0-45f26ae8fde3\") " Feb 17 16:33:27 crc kubenswrapper[4874]: I0217 16:33:27.241904 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/996859f2-a1cc-42e5-9ea0-45f26ae8fde3-utilities\") pod \"996859f2-a1cc-42e5-9ea0-45f26ae8fde3\" (UID: \"996859f2-a1cc-42e5-9ea0-45f26ae8fde3\") " Feb 17 16:33:27 crc kubenswrapper[4874]: I0217 16:33:27.241995 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/996859f2-a1cc-42e5-9ea0-45f26ae8fde3-catalog-content\") pod \"996859f2-a1cc-42e5-9ea0-45f26ae8fde3\" (UID: \"996859f2-a1cc-42e5-9ea0-45f26ae8fde3\") " Feb 17 16:33:27 crc kubenswrapper[4874]: I0217 16:33:27.242765 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/996859f2-a1cc-42e5-9ea0-45f26ae8fde3-utilities" (OuterVolumeSpecName: "utilities") pod "996859f2-a1cc-42e5-9ea0-45f26ae8fde3" (UID: "996859f2-a1cc-42e5-9ea0-45f26ae8fde3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:33:27 crc kubenswrapper[4874]: I0217 16:33:27.254863 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/996859f2-a1cc-42e5-9ea0-45f26ae8fde3-kube-api-access-9swdb" (OuterVolumeSpecName: "kube-api-access-9swdb") pod "996859f2-a1cc-42e5-9ea0-45f26ae8fde3" (UID: "996859f2-a1cc-42e5-9ea0-45f26ae8fde3"). InnerVolumeSpecName "kube-api-access-9swdb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:33:27 crc kubenswrapper[4874]: I0217 16:33:27.301503 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/996859f2-a1cc-42e5-9ea0-45f26ae8fde3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "996859f2-a1cc-42e5-9ea0-45f26ae8fde3" (UID: "996859f2-a1cc-42e5-9ea0-45f26ae8fde3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:33:27 crc kubenswrapper[4874]: I0217 16:33:27.346152 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9swdb\" (UniqueName: \"kubernetes.io/projected/996859f2-a1cc-42e5-9ea0-45f26ae8fde3-kube-api-access-9swdb\") on node \"crc\" DevicePath \"\"" Feb 17 16:33:27 crc kubenswrapper[4874]: I0217 16:33:27.346193 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/996859f2-a1cc-42e5-9ea0-45f26ae8fde3-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:33:27 crc kubenswrapper[4874]: I0217 16:33:27.346212 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/996859f2-a1cc-42e5-9ea0-45f26ae8fde3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:33:27 crc kubenswrapper[4874]: I0217 16:33:27.593412 4874 generic.go:334] "Generic (PLEG): container finished" podID="996859f2-a1cc-42e5-9ea0-45f26ae8fde3" containerID="e1f8a881d71699ed6dcdfac3298c06079d979616afb39c3977259e014ffab4f6" exitCode=0 Feb 17 16:33:27 crc kubenswrapper[4874]: I0217 16:33:27.593453 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzhq2" event={"ID":"996859f2-a1cc-42e5-9ea0-45f26ae8fde3","Type":"ContainerDied","Data":"e1f8a881d71699ed6dcdfac3298c06079d979616afb39c3977259e014ffab4f6"} Feb 17 16:33:27 crc kubenswrapper[4874]: I0217 16:33:27.593498 4874 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-bzhq2" event={"ID":"996859f2-a1cc-42e5-9ea0-45f26ae8fde3","Type":"ContainerDied","Data":"1a8f348406fb99ebf268dae8d9313c76a1ff55a9c4399d83ac415adbfd7f8603"} Feb 17 16:33:27 crc kubenswrapper[4874]: I0217 16:33:27.593514 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bzhq2" Feb 17 16:33:27 crc kubenswrapper[4874]: I0217 16:33:27.593519 4874 scope.go:117] "RemoveContainer" containerID="e1f8a881d71699ed6dcdfac3298c06079d979616afb39c3977259e014ffab4f6" Feb 17 16:33:27 crc kubenswrapper[4874]: I0217 16:33:27.624913 4874 scope.go:117] "RemoveContainer" containerID="091afd37eef63c39f0f63fd12bd8e9fd9b5ed18a073edfd3ff9685dcf3bf94f7" Feb 17 16:33:27 crc kubenswrapper[4874]: I0217 16:33:27.651042 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bzhq2"] Feb 17 16:33:27 crc kubenswrapper[4874]: I0217 16:33:27.661645 4874 scope.go:117] "RemoveContainer" containerID="0d11a69ff51dbf7a0dcd5b598a163516d2ec2c782fb5846e593fb53ce16a47c6" Feb 17 16:33:27 crc kubenswrapper[4874]: I0217 16:33:27.669932 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bzhq2"] Feb 17 16:33:27 crc kubenswrapper[4874]: I0217 16:33:27.711617 4874 scope.go:117] "RemoveContainer" containerID="e1f8a881d71699ed6dcdfac3298c06079d979616afb39c3977259e014ffab4f6" Feb 17 16:33:27 crc kubenswrapper[4874]: E0217 16:33:27.712290 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1f8a881d71699ed6dcdfac3298c06079d979616afb39c3977259e014ffab4f6\": container with ID starting with e1f8a881d71699ed6dcdfac3298c06079d979616afb39c3977259e014ffab4f6 not found: ID does not exist" containerID="e1f8a881d71699ed6dcdfac3298c06079d979616afb39c3977259e014ffab4f6" Feb 17 16:33:27 crc kubenswrapper[4874]: I0217 
16:33:27.712334 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1f8a881d71699ed6dcdfac3298c06079d979616afb39c3977259e014ffab4f6"} err="failed to get container status \"e1f8a881d71699ed6dcdfac3298c06079d979616afb39c3977259e014ffab4f6\": rpc error: code = NotFound desc = could not find container \"e1f8a881d71699ed6dcdfac3298c06079d979616afb39c3977259e014ffab4f6\": container with ID starting with e1f8a881d71699ed6dcdfac3298c06079d979616afb39c3977259e014ffab4f6 not found: ID does not exist" Feb 17 16:33:27 crc kubenswrapper[4874]: I0217 16:33:27.712360 4874 scope.go:117] "RemoveContainer" containerID="091afd37eef63c39f0f63fd12bd8e9fd9b5ed18a073edfd3ff9685dcf3bf94f7" Feb 17 16:33:27 crc kubenswrapper[4874]: E0217 16:33:27.713090 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"091afd37eef63c39f0f63fd12bd8e9fd9b5ed18a073edfd3ff9685dcf3bf94f7\": container with ID starting with 091afd37eef63c39f0f63fd12bd8e9fd9b5ed18a073edfd3ff9685dcf3bf94f7 not found: ID does not exist" containerID="091afd37eef63c39f0f63fd12bd8e9fd9b5ed18a073edfd3ff9685dcf3bf94f7" Feb 17 16:33:27 crc kubenswrapper[4874]: I0217 16:33:27.713131 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"091afd37eef63c39f0f63fd12bd8e9fd9b5ed18a073edfd3ff9685dcf3bf94f7"} err="failed to get container status \"091afd37eef63c39f0f63fd12bd8e9fd9b5ed18a073edfd3ff9685dcf3bf94f7\": rpc error: code = NotFound desc = could not find container \"091afd37eef63c39f0f63fd12bd8e9fd9b5ed18a073edfd3ff9685dcf3bf94f7\": container with ID starting with 091afd37eef63c39f0f63fd12bd8e9fd9b5ed18a073edfd3ff9685dcf3bf94f7 not found: ID does not exist" Feb 17 16:33:27 crc kubenswrapper[4874]: I0217 16:33:27.713157 4874 scope.go:117] "RemoveContainer" containerID="0d11a69ff51dbf7a0dcd5b598a163516d2ec2c782fb5846e593fb53ce16a47c6" Feb 17 16:33:27 crc 
kubenswrapper[4874]: E0217 16:33:27.713951 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d11a69ff51dbf7a0dcd5b598a163516d2ec2c782fb5846e593fb53ce16a47c6\": container with ID starting with 0d11a69ff51dbf7a0dcd5b598a163516d2ec2c782fb5846e593fb53ce16a47c6 not found: ID does not exist" containerID="0d11a69ff51dbf7a0dcd5b598a163516d2ec2c782fb5846e593fb53ce16a47c6" Feb 17 16:33:27 crc kubenswrapper[4874]: I0217 16:33:27.713984 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d11a69ff51dbf7a0dcd5b598a163516d2ec2c782fb5846e593fb53ce16a47c6"} err="failed to get container status \"0d11a69ff51dbf7a0dcd5b598a163516d2ec2c782fb5846e593fb53ce16a47c6\": rpc error: code = NotFound desc = could not find container \"0d11a69ff51dbf7a0dcd5b598a163516d2ec2c782fb5846e593fb53ce16a47c6\": container with ID starting with 0d11a69ff51dbf7a0dcd5b598a163516d2ec2c782fb5846e593fb53ce16a47c6 not found: ID does not exist" Feb 17 16:33:28 crc kubenswrapper[4874]: I0217 16:33:28.470741 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="996859f2-a1cc-42e5-9ea0-45f26ae8fde3" path="/var/lib/kubelet/pods/996859f2-a1cc-42e5-9ea0-45f26ae8fde3/volumes" Feb 17 16:33:28 crc kubenswrapper[4874]: E0217 16:33:28.824615 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod996859f2_a1cc_42e5_9ea0_45f26ae8fde3.slice/crio-conmon-091afd37eef63c39f0f63fd12bd8e9fd9b5ed18a073edfd3ff9685dcf3bf94f7.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:33:29 crc kubenswrapper[4874]: I0217 16:33:29.457262 4874 scope.go:117] "RemoveContainer" containerID="ae76b5b70958fa4d01ecf513ee19ba4d16d886491645f525975afe4bbcc5438e" Feb 17 16:33:29 crc kubenswrapper[4874]: E0217 16:33:29.457862 4874 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:33:30 crc kubenswrapper[4874]: E0217 16:33:30.468389 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:33:31 crc kubenswrapper[4874]: E0217 16:33:31.934989 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod996859f2_a1cc_42e5_9ea0_45f26ae8fde3.slice/crio-conmon-091afd37eef63c39f0f63fd12bd8e9fd9b5ed18a073edfd3ff9685dcf3bf94f7.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:33:34 crc kubenswrapper[4874]: I0217 16:33:34.038776 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-phcqn"] Feb 17 16:33:34 crc kubenswrapper[4874]: I0217 16:33:34.053968 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-phcqn"] Feb 17 16:33:34 crc kubenswrapper[4874]: I0217 16:33:34.066267 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-d820-account-create-update-fs9ms"] Feb 17 16:33:34 crc kubenswrapper[4874]: I0217 16:33:34.076937 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-d820-account-create-update-fs9ms"] Feb 17 16:33:34 crc kubenswrapper[4874]: I0217 
16:33:34.472921 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3dd992b7-793b-46be-a708-72097bb298cf" path="/var/lib/kubelet/pods/3dd992b7-793b-46be-a708-72097bb298cf/volumes" Feb 17 16:33:34 crc kubenswrapper[4874]: I0217 16:33:34.474542 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70b30652-2359-4b06-91c4-a4a590c2fd6c" path="/var/lib/kubelet/pods/70b30652-2359-4b06-91c4-a4a590c2fd6c/volumes" Feb 17 16:33:34 crc kubenswrapper[4874]: I0217 16:33:34.887300 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9jlmf" Feb 17 16:33:34 crc kubenswrapper[4874]: I0217 16:33:34.950516 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9jlmf" Feb 17 16:33:35 crc kubenswrapper[4874]: E0217 16:33:35.459037 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:33:39 crc kubenswrapper[4874]: I0217 16:33:39.029124 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9jlmf"] Feb 17 16:33:39 crc kubenswrapper[4874]: I0217 16:33:39.029961 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9jlmf" podUID="ecf6036a-e287-4aca-a64b-6fc968b5c915" containerName="registry-server" containerID="cri-o://1d618e2f482289661c89396a6450fd6e15964ec1886258b9b57601b293384616" gracePeriod=2 Feb 17 16:33:39 crc kubenswrapper[4874]: I0217 16:33:39.547542 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9jlmf" Feb 17 16:33:39 crc kubenswrapper[4874]: I0217 16:33:39.635504 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzv24\" (UniqueName: \"kubernetes.io/projected/ecf6036a-e287-4aca-a64b-6fc968b5c915-kube-api-access-wzv24\") pod \"ecf6036a-e287-4aca-a64b-6fc968b5c915\" (UID: \"ecf6036a-e287-4aca-a64b-6fc968b5c915\") " Feb 17 16:33:39 crc kubenswrapper[4874]: I0217 16:33:39.635633 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecf6036a-e287-4aca-a64b-6fc968b5c915-catalog-content\") pod \"ecf6036a-e287-4aca-a64b-6fc968b5c915\" (UID: \"ecf6036a-e287-4aca-a64b-6fc968b5c915\") " Feb 17 16:33:39 crc kubenswrapper[4874]: I0217 16:33:39.635713 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecf6036a-e287-4aca-a64b-6fc968b5c915-utilities\") pod \"ecf6036a-e287-4aca-a64b-6fc968b5c915\" (UID: \"ecf6036a-e287-4aca-a64b-6fc968b5c915\") " Feb 17 16:33:39 crc kubenswrapper[4874]: I0217 16:33:39.636671 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecf6036a-e287-4aca-a64b-6fc968b5c915-utilities" (OuterVolumeSpecName: "utilities") pod "ecf6036a-e287-4aca-a64b-6fc968b5c915" (UID: "ecf6036a-e287-4aca-a64b-6fc968b5c915"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:33:39 crc kubenswrapper[4874]: I0217 16:33:39.641409 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecf6036a-e287-4aca-a64b-6fc968b5c915-kube-api-access-wzv24" (OuterVolumeSpecName: "kube-api-access-wzv24") pod "ecf6036a-e287-4aca-a64b-6fc968b5c915" (UID: "ecf6036a-e287-4aca-a64b-6fc968b5c915"). InnerVolumeSpecName "kube-api-access-wzv24". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:33:39 crc kubenswrapper[4874]: I0217 16:33:39.686677 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecf6036a-e287-4aca-a64b-6fc968b5c915-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ecf6036a-e287-4aca-a64b-6fc968b5c915" (UID: "ecf6036a-e287-4aca-a64b-6fc968b5c915"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:33:39 crc kubenswrapper[4874]: I0217 16:33:39.732400 4874 generic.go:334] "Generic (PLEG): container finished" podID="ecf6036a-e287-4aca-a64b-6fc968b5c915" containerID="1d618e2f482289661c89396a6450fd6e15964ec1886258b9b57601b293384616" exitCode=0 Feb 17 16:33:39 crc kubenswrapper[4874]: I0217 16:33:39.732451 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9jlmf" Feb 17 16:33:39 crc kubenswrapper[4874]: I0217 16:33:39.732503 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jlmf" event={"ID":"ecf6036a-e287-4aca-a64b-6fc968b5c915","Type":"ContainerDied","Data":"1d618e2f482289661c89396a6450fd6e15964ec1886258b9b57601b293384616"} Feb 17 16:33:39 crc kubenswrapper[4874]: I0217 16:33:39.732573 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jlmf" event={"ID":"ecf6036a-e287-4aca-a64b-6fc968b5c915","Type":"ContainerDied","Data":"3a7ac4d1aff08bbd1acdd014ba60648cef0256dce18052fb46fea9d2938a9e03"} Feb 17 16:33:39 crc kubenswrapper[4874]: I0217 16:33:39.732603 4874 scope.go:117] "RemoveContainer" containerID="1d618e2f482289661c89396a6450fd6e15964ec1886258b9b57601b293384616" Feb 17 16:33:39 crc kubenswrapper[4874]: I0217 16:33:39.739249 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzv24\" (UniqueName: 
\"kubernetes.io/projected/ecf6036a-e287-4aca-a64b-6fc968b5c915-kube-api-access-wzv24\") on node \"crc\" DevicePath \"\"" Feb 17 16:33:39 crc kubenswrapper[4874]: I0217 16:33:39.739277 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecf6036a-e287-4aca-a64b-6fc968b5c915-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:33:39 crc kubenswrapper[4874]: I0217 16:33:39.739291 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecf6036a-e287-4aca-a64b-6fc968b5c915-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:33:39 crc kubenswrapper[4874]: I0217 16:33:39.758645 4874 scope.go:117] "RemoveContainer" containerID="b8524f7ffc9ac48b81c3a8530fbb7651039d87a4b33026ea226b19b8af2ef23e" Feb 17 16:33:39 crc kubenswrapper[4874]: I0217 16:33:39.778203 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9jlmf"] Feb 17 16:33:39 crc kubenswrapper[4874]: I0217 16:33:39.788339 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9jlmf"] Feb 17 16:33:39 crc kubenswrapper[4874]: I0217 16:33:39.801331 4874 scope.go:117] "RemoveContainer" containerID="dedb3f4a065ecc44351d6d1c3ca744d0b7f5e65517c508d9199032da1dcf37c2" Feb 17 16:33:39 crc kubenswrapper[4874]: I0217 16:33:39.840213 4874 scope.go:117] "RemoveContainer" containerID="1d618e2f482289661c89396a6450fd6e15964ec1886258b9b57601b293384616" Feb 17 16:33:39 crc kubenswrapper[4874]: E0217 16:33:39.840654 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d618e2f482289661c89396a6450fd6e15964ec1886258b9b57601b293384616\": container with ID starting with 1d618e2f482289661c89396a6450fd6e15964ec1886258b9b57601b293384616 not found: ID does not exist" containerID="1d618e2f482289661c89396a6450fd6e15964ec1886258b9b57601b293384616" 
Feb 17 16:33:39 crc kubenswrapper[4874]: I0217 16:33:39.840713 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d618e2f482289661c89396a6450fd6e15964ec1886258b9b57601b293384616"} err="failed to get container status \"1d618e2f482289661c89396a6450fd6e15964ec1886258b9b57601b293384616\": rpc error: code = NotFound desc = could not find container \"1d618e2f482289661c89396a6450fd6e15964ec1886258b9b57601b293384616\": container with ID starting with 1d618e2f482289661c89396a6450fd6e15964ec1886258b9b57601b293384616 not found: ID does not exist" Feb 17 16:33:39 crc kubenswrapper[4874]: I0217 16:33:39.840757 4874 scope.go:117] "RemoveContainer" containerID="b8524f7ffc9ac48b81c3a8530fbb7651039d87a4b33026ea226b19b8af2ef23e" Feb 17 16:33:39 crc kubenswrapper[4874]: E0217 16:33:39.841033 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8524f7ffc9ac48b81c3a8530fbb7651039d87a4b33026ea226b19b8af2ef23e\": container with ID starting with b8524f7ffc9ac48b81c3a8530fbb7651039d87a4b33026ea226b19b8af2ef23e not found: ID does not exist" containerID="b8524f7ffc9ac48b81c3a8530fbb7651039d87a4b33026ea226b19b8af2ef23e" Feb 17 16:33:39 crc kubenswrapper[4874]: I0217 16:33:39.841058 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8524f7ffc9ac48b81c3a8530fbb7651039d87a4b33026ea226b19b8af2ef23e"} err="failed to get container status \"b8524f7ffc9ac48b81c3a8530fbb7651039d87a4b33026ea226b19b8af2ef23e\": rpc error: code = NotFound desc = could not find container \"b8524f7ffc9ac48b81c3a8530fbb7651039d87a4b33026ea226b19b8af2ef23e\": container with ID starting with b8524f7ffc9ac48b81c3a8530fbb7651039d87a4b33026ea226b19b8af2ef23e not found: ID does not exist" Feb 17 16:33:39 crc kubenswrapper[4874]: I0217 16:33:39.841083 4874 scope.go:117] "RemoveContainer" 
containerID="dedb3f4a065ecc44351d6d1c3ca744d0b7f5e65517c508d9199032da1dcf37c2" Feb 17 16:33:39 crc kubenswrapper[4874]: E0217 16:33:39.841407 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dedb3f4a065ecc44351d6d1c3ca744d0b7f5e65517c508d9199032da1dcf37c2\": container with ID starting with dedb3f4a065ecc44351d6d1c3ca744d0b7f5e65517c508d9199032da1dcf37c2 not found: ID does not exist" containerID="dedb3f4a065ecc44351d6d1c3ca744d0b7f5e65517c508d9199032da1dcf37c2" Feb 17 16:33:39 crc kubenswrapper[4874]: I0217 16:33:39.841436 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dedb3f4a065ecc44351d6d1c3ca744d0b7f5e65517c508d9199032da1dcf37c2"} err="failed to get container status \"dedb3f4a065ecc44351d6d1c3ca744d0b7f5e65517c508d9199032da1dcf37c2\": rpc error: code = NotFound desc = could not find container \"dedb3f4a065ecc44351d6d1c3ca744d0b7f5e65517c508d9199032da1dcf37c2\": container with ID starting with dedb3f4a065ecc44351d6d1c3ca744d0b7f5e65517c508d9199032da1dcf37c2 not found: ID does not exist" Feb 17 16:33:40 crc kubenswrapper[4874]: I0217 16:33:40.475450 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecf6036a-e287-4aca-a64b-6fc968b5c915" path="/var/lib/kubelet/pods/ecf6036a-e287-4aca-a64b-6fc968b5c915/volumes" Feb 17 16:33:42 crc kubenswrapper[4874]: E0217 16:33:42.210282 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod996859f2_a1cc_42e5_9ea0_45f26ae8fde3.slice/crio-conmon-091afd37eef63c39f0f63fd12bd8e9fd9b5ed18a073edfd3ff9685dcf3bf94f7.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:33:42 crc kubenswrapper[4874]: E0217 16:33:42.460849 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:33:43 crc kubenswrapper[4874]: I0217 16:33:43.456905 4874 scope.go:117] "RemoveContainer" containerID="ae76b5b70958fa4d01ecf513ee19ba4d16d886491645f525975afe4bbcc5438e" Feb 17 16:33:43 crc kubenswrapper[4874]: E0217 16:33:43.457455 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:33:43 crc kubenswrapper[4874]: E0217 16:33:43.560802 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod996859f2_a1cc_42e5_9ea0_45f26ae8fde3.slice/crio-conmon-091afd37eef63c39f0f63fd12bd8e9fd9b5ed18a073edfd3ff9685dcf3bf94f7.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:33:46 crc kubenswrapper[4874]: I0217 16:33:46.390200 4874 scope.go:117] "RemoveContainer" containerID="2b396139b3ab54668f220ada16a5b77714915b0727a2f9b6278e943319aa416d" Feb 17 16:33:46 crc kubenswrapper[4874]: I0217 16:33:46.424512 4874 scope.go:117] "RemoveContainer" containerID="df7ebb90d0e00ce7adbcaebfe1d386698aca1cd2fa452d572c0f8e3a98afc8b9" Feb 17 16:33:46 crc kubenswrapper[4874]: I0217 16:33:46.448874 4874 scope.go:117] "RemoveContainer" containerID="f80e72c1edae306c4e2bab265d3dc1e5d36967b7a7e5dfbff444f82f8e2e532d" Feb 17 16:33:46 crc kubenswrapper[4874]: I0217 16:33:46.883274 4874 scope.go:117] "RemoveContainer" 
containerID="8d251b96f1f886aaf1ed2fdb94540a43a18e27a9d3d3d099d2633b5a18af12bd" Feb 17 16:33:46 crc kubenswrapper[4874]: I0217 16:33:46.924997 4874 scope.go:117] "RemoveContainer" containerID="84ad71ec35dbad08e18144c57a199d66fd5d9782db30b48d4bd139abf332c2e8" Feb 17 16:33:46 crc kubenswrapper[4874]: I0217 16:33:46.978615 4874 scope.go:117] "RemoveContainer" containerID="586ac36c6fecdd78309669feea2a9977e5bd8b2545742b335015b05fd55c2743" Feb 17 16:33:47 crc kubenswrapper[4874]: I0217 16:33:47.057507 4874 scope.go:117] "RemoveContainer" containerID="1e5bcd2d33916dc7d516910e47eff9c4ab0178227686a16e0ba8ec88827f5fbc" Feb 17 16:33:47 crc kubenswrapper[4874]: I0217 16:33:47.130549 4874 scope.go:117] "RemoveContainer" containerID="907adfd242e9bbfd980d49bc6f8323b6b804ab738b49cb86f0c0b7d937b107b2" Feb 17 16:33:47 crc kubenswrapper[4874]: I0217 16:33:47.177418 4874 scope.go:117] "RemoveContainer" containerID="5a8bfe433579d3f0d0e88fcad8e9d7a93f609a885bc31f0679bb59acd5c732f1" Feb 17 16:33:47 crc kubenswrapper[4874]: I0217 16:33:47.213839 4874 scope.go:117] "RemoveContainer" containerID="d845dbc1acffaa487d301a4d9ae1f43fd15907e97218ee99d09ac2f04e4560ce" Feb 17 16:33:47 crc kubenswrapper[4874]: I0217 16:33:47.250350 4874 scope.go:117] "RemoveContainer" containerID="92f8768559c7b71c15ef74c94877335312ccf1bb1da6c22b7ddc22eecd222604" Feb 17 16:33:48 crc kubenswrapper[4874]: E0217 16:33:48.229864 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod996859f2_a1cc_42e5_9ea0_45f26ae8fde3.slice/crio-conmon-091afd37eef63c39f0f63fd12bd8e9fd9b5ed18a073edfd3ff9685dcf3bf94f7.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:33:48 crc kubenswrapper[4874]: E0217 16:33:48.230022 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod996859f2_a1cc_42e5_9ea0_45f26ae8fde3.slice/crio-conmon-091afd37eef63c39f0f63fd12bd8e9fd9b5ed18a073edfd3ff9685dcf3bf94f7.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:33:49 crc kubenswrapper[4874]: I0217 16:33:49.047917 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-tvbr5"] Feb 17 16:33:49 crc kubenswrapper[4874]: I0217 16:33:49.064943 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-tvbr5"] Feb 17 16:33:50 crc kubenswrapper[4874]: E0217 16:33:50.484172 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:33:50 crc kubenswrapper[4874]: I0217 16:33:50.500930 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6b971b7-2d31-4e4e-a182-234689e298be" path="/var/lib/kubelet/pods/e6b971b7-2d31-4e4e-a182-234689e298be/volumes" Feb 17 16:33:52 crc kubenswrapper[4874]: E0217 16:33:52.393240 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod996859f2_a1cc_42e5_9ea0_45f26ae8fde3.slice/crio-conmon-091afd37eef63c39f0f63fd12bd8e9fd9b5ed18a073edfd3ff9685dcf3bf94f7.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:33:56 crc kubenswrapper[4874]: I0217 16:33:56.031906 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-x2mrg"] Feb 17 16:33:56 crc kubenswrapper[4874]: I0217 16:33:56.049497 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-x2mrg"] Feb 17 16:33:56 crc 
kubenswrapper[4874]: E0217 16:33:56.460278 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:33:56 crc kubenswrapper[4874]: I0217 16:33:56.482219 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6c4fb02-268b-4640-9a46-1f107a1fcc28" path="/var/lib/kubelet/pods/f6c4fb02-268b-4640-9a46-1f107a1fcc28/volumes" Feb 17 16:33:57 crc kubenswrapper[4874]: I0217 16:33:57.457308 4874 scope.go:117] "RemoveContainer" containerID="ae76b5b70958fa4d01ecf513ee19ba4d16d886491645f525975afe4bbcc5438e" Feb 17 16:33:57 crc kubenswrapper[4874]: E0217 16:33:57.457676 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:33:58 crc kubenswrapper[4874]: E0217 16:33:58.890537 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod996859f2_a1cc_42e5_9ea0_45f26ae8fde3.slice/crio-conmon-091afd37eef63c39f0f63fd12bd8e9fd9b5ed18a073edfd3ff9685dcf3bf94f7.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:33:59 crc kubenswrapper[4874]: I0217 16:33:59.079836 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-e1c7-account-create-update-cfrvb"] Feb 17 16:33:59 crc kubenswrapper[4874]: I0217 16:33:59.108924 4874 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/heat-e1c7-account-create-update-cfrvb"] Feb 17 16:33:59 crc kubenswrapper[4874]: I0217 16:33:59.122725 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-14b5-account-create-update-jtbph"] Feb 17 16:33:59 crc kubenswrapper[4874]: I0217 16:33:59.135101 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-fh6cg"] Feb 17 16:33:59 crc kubenswrapper[4874]: I0217 16:33:59.148548 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-14b5-account-create-update-jtbph"] Feb 17 16:33:59 crc kubenswrapper[4874]: I0217 16:33:59.160519 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-fh6cg"] Feb 17 16:34:00 crc kubenswrapper[4874]: I0217 16:34:00.050184 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-d557-account-create-update-jvmth"] Feb 17 16:34:00 crc kubenswrapper[4874]: I0217 16:34:00.074548 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-56cvq"] Feb 17 16:34:00 crc kubenswrapper[4874]: I0217 16:34:00.087293 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7b55-account-create-update-cs68x"] Feb 17 16:34:00 crc kubenswrapper[4874]: I0217 16:34:00.099174 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-r6kzp"] Feb 17 16:34:00 crc kubenswrapper[4874]: I0217 16:34:00.112826 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-d557-account-create-update-jvmth"] Feb 17 16:34:00 crc kubenswrapper[4874]: I0217 16:34:00.126219 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-56cvq"] Feb 17 16:34:00 crc kubenswrapper[4874]: I0217 16:34:00.138829 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-r6kzp"] Feb 17 16:34:00 crc kubenswrapper[4874]: I0217 16:34:00.150230 4874 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/neutron-7b55-account-create-update-cs68x"] Feb 17 16:34:00 crc kubenswrapper[4874]: I0217 16:34:00.159664 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-bj96s"] Feb 17 16:34:00 crc kubenswrapper[4874]: I0217 16:34:00.169866 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-bj96s"] Feb 17 16:34:00 crc kubenswrapper[4874]: I0217 16:34:00.489722 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29f331e0-01bd-4693-a5fd-46739a5ddec4" path="/var/lib/kubelet/pods/29f331e0-01bd-4693-a5fd-46739a5ddec4/volumes" Feb 17 16:34:00 crc kubenswrapper[4874]: I0217 16:34:00.491648 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34c21838-f8c0-4d47-8ccf-a92ff6452532" path="/var/lib/kubelet/pods/34c21838-f8c0-4d47-8ccf-a92ff6452532/volumes" Feb 17 16:34:00 crc kubenswrapper[4874]: I0217 16:34:00.492992 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a9b479f-3960-4878-a2a9-48ac751b4149" path="/var/lib/kubelet/pods/3a9b479f-3960-4878-a2a9-48ac751b4149/volumes" Feb 17 16:34:00 crc kubenswrapper[4874]: I0217 16:34:00.494253 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93678eb9-19c1-490b-aa7a-d07e21f6ab56" path="/var/lib/kubelet/pods/93678eb9-19c1-490b-aa7a-d07e21f6ab56/volumes" Feb 17 16:34:00 crc kubenswrapper[4874]: I0217 16:34:00.503885 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b905f7a7-368c-492c-b4ad-63bcc5cd9e0f" path="/var/lib/kubelet/pods/b905f7a7-368c-492c-b4ad-63bcc5cd9e0f/volumes" Feb 17 16:34:00 crc kubenswrapper[4874]: I0217 16:34:00.506397 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3aea93a-b865-4e18-bb2e-b2dc7d6f821a" path="/var/lib/kubelet/pods/c3aea93a-b865-4e18-bb2e-b2dc7d6f821a/volumes" Feb 17 16:34:00 crc kubenswrapper[4874]: I0217 16:34:00.511304 4874 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="e865ad98-6d8f-4a54-9717-10028d7c52d1" path="/var/lib/kubelet/pods/e865ad98-6d8f-4a54-9717-10028d7c52d1/volumes" Feb 17 16:34:00 crc kubenswrapper[4874]: I0217 16:34:00.514572 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb3d3d3a-23a3-420e-9651-edf451bc3606" path="/var/lib/kubelet/pods/fb3d3d3a-23a3-420e-9651-edf451bc3606/volumes" Feb 17 16:34:02 crc kubenswrapper[4874]: E0217 16:34:02.444134 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod996859f2_a1cc_42e5_9ea0_45f26ae8fde3.slice/crio-conmon-091afd37eef63c39f0f63fd12bd8e9fd9b5ed18a073edfd3ff9685dcf3bf94f7.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:34:02 crc kubenswrapper[4874]: E0217 16:34:02.460355 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:34:08 crc kubenswrapper[4874]: I0217 16:34:08.046850 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-7btxx"] Feb 17 16:34:08 crc kubenswrapper[4874]: I0217 16:34:08.064070 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-7btxx"] Feb 17 16:34:08 crc kubenswrapper[4874]: I0217 16:34:08.475837 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41f01982-4445-4662-998f-bc618d020727" path="/var/lib/kubelet/pods/41f01982-4445-4662-998f-bc618d020727/volumes" Feb 17 16:34:11 crc kubenswrapper[4874]: E0217 16:34:11.574799 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:34:11 crc kubenswrapper[4874]: E0217 16:34:11.575249 4874 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:34:11 crc kubenswrapper[4874]: E0217 16:34:11.575429 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n699h646hf7h59dhdch5d8h679h9dhdch5c7hd5h5bch655h5bfh674h596h5d6h64dh65bh694h67fh66dh5bdhd7h568h697h58bh5b4h59fh694h584h656q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6zkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(cc29c300-b515-47d8-9326-1839ed7772b4): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:34:11 crc kubenswrapper[4874]: E0217 16:34:11.576694 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:34:12 crc kubenswrapper[4874]: I0217 16:34:12.457428 4874 scope.go:117] "RemoveContainer" containerID="ae76b5b70958fa4d01ecf513ee19ba4d16d886491645f525975afe4bbcc5438e" Feb 17 16:34:12 crc kubenswrapper[4874]: E0217 16:34:12.458395 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:34:13 crc kubenswrapper[4874]: E0217 16:34:13.574219 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:34:13 crc kubenswrapper[4874]: E0217 16:34:13.574684 4874 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:34:13 crc kubenswrapper[4874]: E0217 16:34:13.574864 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtgnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-ddhb8_openstack(122736d5-78f5-42dc-b6ab-343724bac19d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:34:13 crc kubenswrapper[4874]: E0217 16:34:13.578757 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:34:23 crc kubenswrapper[4874]: E0217 16:34:23.464369 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:34:24 crc kubenswrapper[4874]: E0217 16:34:24.459173 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:34:27 crc kubenswrapper[4874]: I0217 16:34:27.458441 4874 scope.go:117] "RemoveContainer" containerID="ae76b5b70958fa4d01ecf513ee19ba4d16d886491645f525975afe4bbcc5438e" Feb 17 16:34:27 crc kubenswrapper[4874]: E0217 16:34:27.459243 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:34:36 crc kubenswrapper[4874]: E0217 16:34:36.461479 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 
16:34:39 crc kubenswrapper[4874]: E0217 16:34:39.460852 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:34:40 crc kubenswrapper[4874]: I0217 16:34:40.458652 4874 scope.go:117] "RemoveContainer" containerID="ae76b5b70958fa4d01ecf513ee19ba4d16d886491645f525975afe4bbcc5438e" Feb 17 16:34:40 crc kubenswrapper[4874]: E0217 16:34:40.459745 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:34:46 crc kubenswrapper[4874]: I0217 16:34:46.066681 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-6cx5g"] Feb 17 16:34:46 crc kubenswrapper[4874]: I0217 16:34:46.078239 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-6cx5g"] Feb 17 16:34:46 crc kubenswrapper[4874]: I0217 16:34:46.486129 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="676bf17d-3f3b-4159-97c3-7c1c51147145" path="/var/lib/kubelet/pods/676bf17d-3f3b-4159-97c3-7c1c51147145/volumes" Feb 17 16:34:47 crc kubenswrapper[4874]: I0217 16:34:47.035455 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-lw7kx"] Feb 17 16:34:47 crc kubenswrapper[4874]: I0217 16:34:47.054252 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-lw7kx"] Feb 17 16:34:47 crc 
kubenswrapper[4874]: I0217 16:34:47.600983 4874 scope.go:117] "RemoveContainer" containerID="f1511a4ba781ec257297c2fbeea1dd97e16fe146b82d2faff3032f5d95b52404" Feb 17 16:34:47 crc kubenswrapper[4874]: I0217 16:34:47.655377 4874 scope.go:117] "RemoveContainer" containerID="17b43308311481772c20c61e83a2736d87ea46cfacdaa18ebbf58e0b5a23218e" Feb 17 16:34:47 crc kubenswrapper[4874]: I0217 16:34:47.744093 4874 scope.go:117] "RemoveContainer" containerID="1917ca2196cf0e8476ed23b9bba6843ad2d8da44748f5cedec22db4473e5654f" Feb 17 16:34:47 crc kubenswrapper[4874]: I0217 16:34:47.859667 4874 scope.go:117] "RemoveContainer" containerID="689c6f4fd9ab93adcabc60f6a2b1efa52bb20cdf1ff62d1be4b68ef6f7d1475c" Feb 17 16:34:47 crc kubenswrapper[4874]: I0217 16:34:47.905672 4874 scope.go:117] "RemoveContainer" containerID="4c2040c6b244deec2658e95f8f85e90e0344382d6b1eae43640b8938f1c5eab8" Feb 17 16:34:47 crc kubenswrapper[4874]: I0217 16:34:47.960391 4874 scope.go:117] "RemoveContainer" containerID="7899d237ac79dd5a613b7055e0349da2bd9a162acfbe2d11c2c8c12edf2269cb" Feb 17 16:34:48 crc kubenswrapper[4874]: I0217 16:34:48.015546 4874 scope.go:117] "RemoveContainer" containerID="b0a0156a365410cd63c6f6ea16b8379646bc3f033b4071892f494b483aa91561" Feb 17 16:34:48 crc kubenswrapper[4874]: I0217 16:34:48.036846 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-pfkph"] Feb 17 16:34:48 crc kubenswrapper[4874]: I0217 16:34:48.056501 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-pfkph"] Feb 17 16:34:48 crc kubenswrapper[4874]: I0217 16:34:48.062856 4874 scope.go:117] "RemoveContainer" containerID="61541c158ef7d75ca0933a1011194896c8d0eed21970ba7c1b398fff00006066" Feb 17 16:34:48 crc kubenswrapper[4874]: I0217 16:34:48.094923 4874 scope.go:117] "RemoveContainer" containerID="691c025d1656ce48567e0847b1656a4d20a447d9cec5982f62204414ebe636e0" Feb 17 16:34:48 crc kubenswrapper[4874]: I0217 16:34:48.130686 4874 scope.go:117] 
"RemoveContainer" containerID="5de00b7c15cf252659e12fd6c7b7320c95cf306f66e8964ceeaac586532f0f2e" Feb 17 16:34:48 crc kubenswrapper[4874]: I0217 16:34:48.182204 4874 scope.go:117] "RemoveContainer" containerID="898f1f2c7242a9af21f78cfd0468ecf2ce6fe1b41559458f2fc9ef20b03e288e" Feb 17 16:34:48 crc kubenswrapper[4874]: I0217 16:34:48.217454 4874 scope.go:117] "RemoveContainer" containerID="2282a4a9f89d1222ee90414d0b671bff62bbab255316d343665fdf3fb2a6a534" Feb 17 16:34:48 crc kubenswrapper[4874]: I0217 16:34:48.490528 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46bb9425-0e75-4b58-b0f7-f7ad6998255b" path="/var/lib/kubelet/pods/46bb9425-0e75-4b58-b0f7-f7ad6998255b/volumes" Feb 17 16:34:48 crc kubenswrapper[4874]: I0217 16:34:48.493420 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcf0f49a-5960-41a8-b699-8fb05241ee31" path="/var/lib/kubelet/pods/dcf0f49a-5960-41a8-b699-8fb05241ee31/volumes" Feb 17 16:34:51 crc kubenswrapper[4874]: E0217 16:34:51.461640 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:34:52 crc kubenswrapper[4874]: E0217 16:34:52.461637 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:34:54 crc kubenswrapper[4874]: I0217 16:34:54.458064 4874 scope.go:117] "RemoveContainer" containerID="ae76b5b70958fa4d01ecf513ee19ba4d16d886491645f525975afe4bbcc5438e" Feb 17 16:34:54 crc kubenswrapper[4874]: E0217 
16:34:54.458763 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:35:02 crc kubenswrapper[4874]: I0217 16:35:02.033477 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-dhtc8"] Feb 17 16:35:02 crc kubenswrapper[4874]: I0217 16:35:02.046246 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-dhtc8"] Feb 17 16:35:02 crc kubenswrapper[4874]: I0217 16:35:02.474977 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4a96348-a1c6-4470-ad3a-d87cc20c8d3c" path="/var/lib/kubelet/pods/a4a96348-a1c6-4470-ad3a-d87cc20c8d3c/volumes" Feb 17 16:35:06 crc kubenswrapper[4874]: E0217 16:35:06.463345 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:35:07 crc kubenswrapper[4874]: E0217 16:35:07.460419 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:35:08 crc kubenswrapper[4874]: I0217 16:35:08.055296 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-jrg8w"] Feb 17 16:35:08 crc 
kubenswrapper[4874]: I0217 16:35:08.070801 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-jrg8w"] Feb 17 16:35:08 crc kubenswrapper[4874]: I0217 16:35:08.483238 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10d748cd-cbae-4113-bfed-39c4511a879f" path="/var/lib/kubelet/pods/10d748cd-cbae-4113-bfed-39c4511a879f/volumes" Feb 17 16:35:09 crc kubenswrapper[4874]: I0217 16:35:09.458650 4874 scope.go:117] "RemoveContainer" containerID="ae76b5b70958fa4d01ecf513ee19ba4d16d886491645f525975afe4bbcc5438e" Feb 17 16:35:09 crc kubenswrapper[4874]: E0217 16:35:09.459319 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:35:20 crc kubenswrapper[4874]: E0217 16:35:20.480407 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:35:21 crc kubenswrapper[4874]: I0217 16:35:21.458231 4874 scope.go:117] "RemoveContainer" containerID="ae76b5b70958fa4d01ecf513ee19ba4d16d886491645f525975afe4bbcc5438e" Feb 17 16:35:21 crc kubenswrapper[4874]: E0217 16:35:21.459140 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:35:21 crc kubenswrapper[4874]: E0217 16:35:21.459272 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:35:32 crc kubenswrapper[4874]: E0217 16:35:32.483263 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:35:36 crc kubenswrapper[4874]: I0217 16:35:36.459438 4874 scope.go:117] "RemoveContainer" containerID="ae76b5b70958fa4d01ecf513ee19ba4d16d886491645f525975afe4bbcc5438e" Feb 17 16:35:36 crc kubenswrapper[4874]: E0217 16:35:36.460396 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:35:36 crc kubenswrapper[4874]: E0217 16:35:36.460863 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" 
podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:35:46 crc kubenswrapper[4874]: E0217 16:35:46.460804 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:35:48 crc kubenswrapper[4874]: I0217 16:35:48.564895 4874 scope.go:117] "RemoveContainer" containerID="fed82e020d9b641e58a9873a2d5a5407cabee53064d987fa9cac6d8298d4b1da" Feb 17 16:35:48 crc kubenswrapper[4874]: I0217 16:35:48.606176 4874 scope.go:117] "RemoveContainer" containerID="a51a617b00329d7632af1289ae0608922aa9ce80851c1045e700819354462d77" Feb 17 16:35:48 crc kubenswrapper[4874]: I0217 16:35:48.661798 4874 scope.go:117] "RemoveContainer" containerID="77cdcf6bdc0227dfe7b19a34bfd72fddf68434979061a126395ac7d9c23d3534" Feb 17 16:35:48 crc kubenswrapper[4874]: I0217 16:35:48.705402 4874 scope.go:117] "RemoveContainer" containerID="89218ccd87f4019be0a58e5d6563f00d652f6e0d1558057eb3af797423104580" Feb 17 16:35:49 crc kubenswrapper[4874]: I0217 16:35:49.063531 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-jftpf"] Feb 17 16:35:49 crc kubenswrapper[4874]: I0217 16:35:49.090674 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-h74j4"] Feb 17 16:35:49 crc kubenswrapper[4874]: I0217 16:35:49.097119 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-0107-account-create-update-rhzxh"] Feb 17 16:35:49 crc kubenswrapper[4874]: I0217 16:35:49.113780 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-h74j4"] Feb 17 16:35:49 crc kubenswrapper[4874]: I0217 16:35:49.127988 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-0107-account-create-update-rhzxh"] 
Feb 17 16:35:49 crc kubenswrapper[4874]: I0217 16:35:49.141590 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-jftpf"] Feb 17 16:35:50 crc kubenswrapper[4874]: E0217 16:35:50.473247 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:35:50 crc kubenswrapper[4874]: I0217 16:35:50.487638 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d55ab4e-9dab-4fad-8eb6-d2685a59f417" path="/var/lib/kubelet/pods/2d55ab4e-9dab-4fad-8eb6-d2685a59f417/volumes" Feb 17 16:35:50 crc kubenswrapper[4874]: I0217 16:35:50.490072 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a309e0fa-75e0-4d58-92cc-09a4dbf446d4" path="/var/lib/kubelet/pods/a309e0fa-75e0-4d58-92cc-09a4dbf446d4/volumes" Feb 17 16:35:50 crc kubenswrapper[4874]: I0217 16:35:50.491314 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eca34a46-69f7-4e13-8392-04acc4ea650e" path="/var/lib/kubelet/pods/eca34a46-69f7-4e13-8392-04acc4ea650e/volumes" Feb 17 16:35:51 crc kubenswrapper[4874]: I0217 16:35:51.044235 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-s2gpr"] Feb 17 16:35:51 crc kubenswrapper[4874]: I0217 16:35:51.060482 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-b434-account-create-update-zrmkx"] Feb 17 16:35:51 crc kubenswrapper[4874]: I0217 16:35:51.071932 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-fee4-account-create-update-n9l5c"] Feb 17 16:35:51 crc kubenswrapper[4874]: I0217 16:35:51.082592 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell1-b434-account-create-update-zrmkx"] Feb 17 16:35:51 crc kubenswrapper[4874]: I0217 16:35:51.092843 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-fee4-account-create-update-n9l5c"] Feb 17 16:35:51 crc kubenswrapper[4874]: I0217 16:35:51.102377 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-s2gpr"] Feb 17 16:35:51 crc kubenswrapper[4874]: I0217 16:35:51.457618 4874 scope.go:117] "RemoveContainer" containerID="ae76b5b70958fa4d01ecf513ee19ba4d16d886491645f525975afe4bbcc5438e" Feb 17 16:35:51 crc kubenswrapper[4874]: E0217 16:35:51.458057 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:35:52 crc kubenswrapper[4874]: I0217 16:35:52.485001 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cadec02-ee87-4bed-a039-d46a59f7e25f" path="/var/lib/kubelet/pods/2cadec02-ee87-4bed-a039-d46a59f7e25f/volumes" Feb 17 16:35:52 crc kubenswrapper[4874]: I0217 16:35:52.488683 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="369b8b1e-f1a3-423d-ac03-03855b2ec5d1" path="/var/lib/kubelet/pods/369b8b1e-f1a3-423d-ac03-03855b2ec5d1/volumes" Feb 17 16:35:52 crc kubenswrapper[4874]: I0217 16:35:52.490860 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a0d22ee-e6f1-4bf5-ae39-70bcc8c62204" path="/var/lib/kubelet/pods/8a0d22ee-e6f1-4bf5-ae39-70bcc8c62204/volumes" Feb 17 16:35:59 crc kubenswrapper[4874]: E0217 16:35:59.460446 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" 
with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:36:03 crc kubenswrapper[4874]: I0217 16:36:03.458068 4874 scope.go:117] "RemoveContainer" containerID="ae76b5b70958fa4d01ecf513ee19ba4d16d886491645f525975afe4bbcc5438e" Feb 17 16:36:04 crc kubenswrapper[4874]: I0217 16:36:04.677115 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerStarted","Data":"8814fc835f2282d6d41589f1cebfc57dc4e1d7a7e758d1b73c0eca3955883438"} Feb 17 16:36:05 crc kubenswrapper[4874]: E0217 16:36:05.459929 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:36:14 crc kubenswrapper[4874]: E0217 16:36:14.464168 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:36:19 crc kubenswrapper[4874]: E0217 16:36:19.895653 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:36:26 crc kubenswrapper[4874]: I0217 
16:36:26.058401 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ml2rb"] Feb 17 16:36:26 crc kubenswrapper[4874]: I0217 16:36:26.072273 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ml2rb"] Feb 17 16:36:26 crc kubenswrapper[4874]: E0217 16:36:26.470576 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:36:26 crc kubenswrapper[4874]: I0217 16:36:26.489027 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4327f121-2ddc-4367-9055-17c7fe4d855e" path="/var/lib/kubelet/pods/4327f121-2ddc-4367-9055-17c7fe4d855e/volumes" Feb 17 16:36:34 crc kubenswrapper[4874]: E0217 16:36:34.465328 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:36:38 crc kubenswrapper[4874]: E0217 16:36:38.464834 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:36:48 crc kubenswrapper[4874]: E0217 16:36:48.463720 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:36:48 crc kubenswrapper[4874]: I0217 16:36:48.886543 4874 scope.go:117] "RemoveContainer" containerID="4abe6950ede63a1b516707f61db758721b74e901fcf53508675e7f5c73f6a4c3" Feb 17 16:36:48 crc kubenswrapper[4874]: I0217 16:36:48.944912 4874 scope.go:117] "RemoveContainer" containerID="a34fc49aca537a851dcdc1129957abada85f68fa99191e7c29eac8b69996dd15" Feb 17 16:36:49 crc kubenswrapper[4874]: I0217 16:36:49.002702 4874 scope.go:117] "RemoveContainer" containerID="3ba4101d65ab301c3f3a66a2850065fbb946cb24ba7f848786c185c65d6e5e46" Feb 17 16:36:49 crc kubenswrapper[4874]: I0217 16:36:49.055251 4874 scope.go:117] "RemoveContainer" containerID="c631b19a947ff54ab119434427ae2f287f61f94f7785ee3ed680e0357a464f44" Feb 17 16:36:49 crc kubenswrapper[4874]: I0217 16:36:49.116553 4874 scope.go:117] "RemoveContainer" containerID="4370a27b806f2182276b9524d64e4cb3d96a7e9a9aaa747b643df6d603511494" Feb 17 16:36:49 crc kubenswrapper[4874]: I0217 16:36:49.156464 4874 scope.go:117] "RemoveContainer" containerID="911faef16a68b6fdb6fbbdaabc2a85ba5b50eb3f5213a3265919fbc67d897aa2" Feb 17 16:36:49 crc kubenswrapper[4874]: I0217 16:36:49.211722 4874 scope.go:117] "RemoveContainer" containerID="5d605fb9a1b09282e996a591016433a7a47a39f0390c461aa50d7c8fe1952ab4" Feb 17 16:36:51 crc kubenswrapper[4874]: E0217 16:36:51.460988 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:36:53 crc kubenswrapper[4874]: I0217 16:36:53.061870 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-78mdm"] Feb 17 
16:36:53 crc kubenswrapper[4874]: I0217 16:36:53.080257 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-78mdm"] Feb 17 16:36:53 crc kubenswrapper[4874]: I0217 16:36:53.096280 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lnpsd"] Feb 17 16:36:53 crc kubenswrapper[4874]: I0217 16:36:53.109441 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lnpsd"] Feb 17 16:36:54 crc kubenswrapper[4874]: I0217 16:36:54.047731 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-383e-account-create-update-f4p7m"] Feb 17 16:36:54 crc kubenswrapper[4874]: I0217 16:36:54.064348 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-2wzww"] Feb 17 16:36:54 crc kubenswrapper[4874]: I0217 16:36:54.081904 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-383e-account-create-update-f4p7m"] Feb 17 16:36:54 crc kubenswrapper[4874]: I0217 16:36:54.103228 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-2wzww"] Feb 17 16:36:54 crc kubenswrapper[4874]: I0217 16:36:54.486189 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="181fc32a-cc08-4e8c-8f05-b532e505f0df" path="/var/lib/kubelet/pods/181fc32a-cc08-4e8c-8f05-b532e505f0df/volumes" Feb 17 16:36:54 crc kubenswrapper[4874]: I0217 16:36:54.520835 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20df2a95-c9b4-4cee-95a5-9a7481aed963" path="/var/lib/kubelet/pods/20df2a95-c9b4-4cee-95a5-9a7481aed963/volumes" Feb 17 16:36:54 crc kubenswrapper[4874]: I0217 16:36:54.523782 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74d95d6d-ef3c-4154-a40d-5bee661b7d56" path="/var/lib/kubelet/pods/74d95d6d-ef3c-4154-a40d-5bee661b7d56/volumes" Feb 17 16:36:54 crc kubenswrapper[4874]: I0217 16:36:54.526552 4874 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="820dffc3-fb0f-4dd2-b9bc-a680d02a84d9" path="/var/lib/kubelet/pods/820dffc3-fb0f-4dd2-b9bc-a680d02a84d9/volumes" Feb 17 16:37:01 crc kubenswrapper[4874]: E0217 16:37:01.462567 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:37:04 crc kubenswrapper[4874]: E0217 16:37:04.461175 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:37:12 crc kubenswrapper[4874]: E0217 16:37:12.462368 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:37:14 crc kubenswrapper[4874]: I0217 16:37:14.049188 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-dnmbf"] Feb 17 16:37:14 crc kubenswrapper[4874]: I0217 16:37:14.064348 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-dnmbf"] Feb 17 16:37:14 crc kubenswrapper[4874]: I0217 16:37:14.485390 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e293c523-929f-4d2e-bf96-091cbed7f12b" path="/var/lib/kubelet/pods/e293c523-929f-4d2e-bf96-091cbed7f12b/volumes" Feb 17 16:37:18 crc kubenswrapper[4874]: E0217 16:37:18.460196 4874 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:37:26 crc kubenswrapper[4874]: E0217 16:37:26.461299 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:37:32 crc kubenswrapper[4874]: E0217 16:37:32.460411 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:37:37 crc kubenswrapper[4874]: I0217 16:37:37.036009 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-8872n"] Feb 17 16:37:37 crc kubenswrapper[4874]: I0217 16:37:37.051861 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-8872n"] Feb 17 16:37:38 crc kubenswrapper[4874]: I0217 16:37:38.477670 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="677d7b63-59f1-4829-9478-f59253741cbc" path="/var/lib/kubelet/pods/677d7b63-59f1-4829-9478-f59253741cbc/volumes" Feb 17 16:37:39 crc kubenswrapper[4874]: E0217 16:37:39.460617 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:37:44 crc kubenswrapper[4874]: E0217 16:37:44.460686 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:37:49 crc kubenswrapper[4874]: I0217 16:37:49.428170 4874 scope.go:117] "RemoveContainer" containerID="a96475316bb7916bb2330be95cc6d84b0388043f9f00d2e09e62a634b207e9f8" Feb 17 16:37:49 crc kubenswrapper[4874]: I0217 16:37:49.482013 4874 scope.go:117] "RemoveContainer" containerID="bdb53e0a5adb7c4624709ae418e3349df97de66d9507cf6ea08e45046bb785e0" Feb 17 16:37:49 crc kubenswrapper[4874]: I0217 16:37:49.594772 4874 scope.go:117] "RemoveContainer" containerID="500bcb02302837a39c1f56bacbc15e09e11785af7b0b611384cca00f2bc6ea82" Feb 17 16:37:49 crc kubenswrapper[4874]: I0217 16:37:49.644334 4874 scope.go:117] "RemoveContainer" containerID="68fa4eb6c5eab571b0b55fe728595aaa047aaf4964c0ebf1a014255cd9bbc17a" Feb 17 16:37:49 crc kubenswrapper[4874]: I0217 16:37:49.716423 4874 scope.go:117] "RemoveContainer" containerID="4febb0c9c517a213914dfb27dd0c6bc087f3a254c5aeb2ed1ffcf741ab199284" Feb 17 16:37:49 crc kubenswrapper[4874]: I0217 16:37:49.793504 4874 scope.go:117] "RemoveContainer" containerID="ecd3d808bcf9c54fbf8c3b38c1e22eae51f02e04d27ad9b143fc9770921f5ed8" Feb 17 16:37:54 crc kubenswrapper[4874]: E0217 16:37:54.462523 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:37:56 crc kubenswrapper[4874]: E0217 16:37:56.461099 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:38:07 crc kubenswrapper[4874]: E0217 16:38:07.461777 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:38:08 crc kubenswrapper[4874]: E0217 16:38:08.465817 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:38:22 crc kubenswrapper[4874]: E0217 16:38:22.462496 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:38:23 crc kubenswrapper[4874]: E0217 16:38:23.459504 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:38:27 crc kubenswrapper[4874]: I0217 16:38:27.725384 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:38:27 crc kubenswrapper[4874]: I0217 16:38:27.725759 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:38:34 crc kubenswrapper[4874]: E0217 16:38:34.459616 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:38:38 crc kubenswrapper[4874]: E0217 16:38:38.459728 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:38:46 crc kubenswrapper[4874]: E0217 16:38:46.462323 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:38:50 crc kubenswrapper[4874]: E0217 16:38:50.467544 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:38:57 crc kubenswrapper[4874]: E0217 16:38:57.459166 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:38:57 crc kubenswrapper[4874]: I0217 16:38:57.725147 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:38:57 crc kubenswrapper[4874]: I0217 16:38:57.725220 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:39:04 crc kubenswrapper[4874]: E0217 16:39:04.464723 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:39:09 crc kubenswrapper[4874]: I0217 16:39:09.405808 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fwzrv"] Feb 17 16:39:09 crc kubenswrapper[4874]: E0217 16:39:09.407162 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="996859f2-a1cc-42e5-9ea0-45f26ae8fde3" containerName="extract-utilities" Feb 17 16:39:09 crc kubenswrapper[4874]: I0217 16:39:09.407183 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="996859f2-a1cc-42e5-9ea0-45f26ae8fde3" containerName="extract-utilities" Feb 17 16:39:09 crc kubenswrapper[4874]: E0217 16:39:09.407208 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecf6036a-e287-4aca-a64b-6fc968b5c915" containerName="extract-utilities" Feb 17 16:39:09 crc kubenswrapper[4874]: I0217 16:39:09.407218 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecf6036a-e287-4aca-a64b-6fc968b5c915" containerName="extract-utilities" Feb 17 16:39:09 crc kubenswrapper[4874]: E0217 16:39:09.407237 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecf6036a-e287-4aca-a64b-6fc968b5c915" containerName="extract-content" Feb 17 16:39:09 crc kubenswrapper[4874]: I0217 16:39:09.407246 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecf6036a-e287-4aca-a64b-6fc968b5c915" containerName="extract-content" Feb 17 16:39:09 crc kubenswrapper[4874]: E0217 16:39:09.407273 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="996859f2-a1cc-42e5-9ea0-45f26ae8fde3" containerName="registry-server" Feb 17 16:39:09 crc kubenswrapper[4874]: I0217 16:39:09.407286 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="996859f2-a1cc-42e5-9ea0-45f26ae8fde3" containerName="registry-server" Feb 17 16:39:09 crc kubenswrapper[4874]: E0217 
16:39:09.407303 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecf6036a-e287-4aca-a64b-6fc968b5c915" containerName="registry-server" Feb 17 16:39:09 crc kubenswrapper[4874]: I0217 16:39:09.407314 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecf6036a-e287-4aca-a64b-6fc968b5c915" containerName="registry-server" Feb 17 16:39:09 crc kubenswrapper[4874]: E0217 16:39:09.407340 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="996859f2-a1cc-42e5-9ea0-45f26ae8fde3" containerName="extract-content" Feb 17 16:39:09 crc kubenswrapper[4874]: I0217 16:39:09.407349 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="996859f2-a1cc-42e5-9ea0-45f26ae8fde3" containerName="extract-content" Feb 17 16:39:09 crc kubenswrapper[4874]: I0217 16:39:09.408200 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecf6036a-e287-4aca-a64b-6fc968b5c915" containerName="registry-server" Feb 17 16:39:09 crc kubenswrapper[4874]: I0217 16:39:09.408241 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="996859f2-a1cc-42e5-9ea0-45f26ae8fde3" containerName="registry-server" Feb 17 16:39:09 crc kubenswrapper[4874]: I0217 16:39:09.411166 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fwzrv" Feb 17 16:39:09 crc kubenswrapper[4874]: I0217 16:39:09.418554 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fwzrv"] Feb 17 16:39:09 crc kubenswrapper[4874]: E0217 16:39:09.461107 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:39:09 crc kubenswrapper[4874]: I0217 16:39:09.532159 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2mxz\" (UniqueName: \"kubernetes.io/projected/14cd0ad2-1615-4ecb-860e-0c1fcd205f1c-kube-api-access-d2mxz\") pod \"redhat-marketplace-fwzrv\" (UID: \"14cd0ad2-1615-4ecb-860e-0c1fcd205f1c\") " pod="openshift-marketplace/redhat-marketplace-fwzrv" Feb 17 16:39:09 crc kubenswrapper[4874]: I0217 16:39:09.532297 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14cd0ad2-1615-4ecb-860e-0c1fcd205f1c-catalog-content\") pod \"redhat-marketplace-fwzrv\" (UID: \"14cd0ad2-1615-4ecb-860e-0c1fcd205f1c\") " pod="openshift-marketplace/redhat-marketplace-fwzrv" Feb 17 16:39:09 crc kubenswrapper[4874]: I0217 16:39:09.532649 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14cd0ad2-1615-4ecb-860e-0c1fcd205f1c-utilities\") pod \"redhat-marketplace-fwzrv\" (UID: \"14cd0ad2-1615-4ecb-860e-0c1fcd205f1c\") " pod="openshift-marketplace/redhat-marketplace-fwzrv" Feb 17 16:39:09 crc kubenswrapper[4874]: I0217 16:39:09.596262 4874 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/redhat-operators-vj86r"] Feb 17 16:39:09 crc kubenswrapper[4874]: I0217 16:39:09.599979 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vj86r" Feb 17 16:39:09 crc kubenswrapper[4874]: I0217 16:39:09.614433 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vj86r"] Feb 17 16:39:09 crc kubenswrapper[4874]: I0217 16:39:09.635283 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2mxz\" (UniqueName: \"kubernetes.io/projected/14cd0ad2-1615-4ecb-860e-0c1fcd205f1c-kube-api-access-d2mxz\") pod \"redhat-marketplace-fwzrv\" (UID: \"14cd0ad2-1615-4ecb-860e-0c1fcd205f1c\") " pod="openshift-marketplace/redhat-marketplace-fwzrv" Feb 17 16:39:09 crc kubenswrapper[4874]: I0217 16:39:09.635408 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14cd0ad2-1615-4ecb-860e-0c1fcd205f1c-catalog-content\") pod \"redhat-marketplace-fwzrv\" (UID: \"14cd0ad2-1615-4ecb-860e-0c1fcd205f1c\") " pod="openshift-marketplace/redhat-marketplace-fwzrv" Feb 17 16:39:09 crc kubenswrapper[4874]: I0217 16:39:09.635552 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14cd0ad2-1615-4ecb-860e-0c1fcd205f1c-utilities\") pod \"redhat-marketplace-fwzrv\" (UID: \"14cd0ad2-1615-4ecb-860e-0c1fcd205f1c\") " pod="openshift-marketplace/redhat-marketplace-fwzrv" Feb 17 16:39:09 crc kubenswrapper[4874]: I0217 16:39:09.636175 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14cd0ad2-1615-4ecb-860e-0c1fcd205f1c-utilities\") pod \"redhat-marketplace-fwzrv\" (UID: \"14cd0ad2-1615-4ecb-860e-0c1fcd205f1c\") " pod="openshift-marketplace/redhat-marketplace-fwzrv" Feb 17 
16:39:09 crc kubenswrapper[4874]: I0217 16:39:09.636482 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14cd0ad2-1615-4ecb-860e-0c1fcd205f1c-catalog-content\") pod \"redhat-marketplace-fwzrv\" (UID: \"14cd0ad2-1615-4ecb-860e-0c1fcd205f1c\") " pod="openshift-marketplace/redhat-marketplace-fwzrv" Feb 17 16:39:09 crc kubenswrapper[4874]: I0217 16:39:09.658354 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2mxz\" (UniqueName: \"kubernetes.io/projected/14cd0ad2-1615-4ecb-860e-0c1fcd205f1c-kube-api-access-d2mxz\") pod \"redhat-marketplace-fwzrv\" (UID: \"14cd0ad2-1615-4ecb-860e-0c1fcd205f1c\") " pod="openshift-marketplace/redhat-marketplace-fwzrv" Feb 17 16:39:09 crc kubenswrapper[4874]: I0217 16:39:09.733321 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fwzrv" Feb 17 16:39:09 crc kubenswrapper[4874]: I0217 16:39:09.738278 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4cn9\" (UniqueName: \"kubernetes.io/projected/1362684f-91dd-4e6a-a880-30c10ffa7aba-kube-api-access-m4cn9\") pod \"redhat-operators-vj86r\" (UID: \"1362684f-91dd-4e6a-a880-30c10ffa7aba\") " pod="openshift-marketplace/redhat-operators-vj86r" Feb 17 16:39:09 crc kubenswrapper[4874]: I0217 16:39:09.738337 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1362684f-91dd-4e6a-a880-30c10ffa7aba-utilities\") pod \"redhat-operators-vj86r\" (UID: \"1362684f-91dd-4e6a-a880-30c10ffa7aba\") " pod="openshift-marketplace/redhat-operators-vj86r" Feb 17 16:39:09 crc kubenswrapper[4874]: I0217 16:39:09.738409 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1362684f-91dd-4e6a-a880-30c10ffa7aba-catalog-content\") pod \"redhat-operators-vj86r\" (UID: \"1362684f-91dd-4e6a-a880-30c10ffa7aba\") " pod="openshift-marketplace/redhat-operators-vj86r" Feb 17 16:39:09 crc kubenswrapper[4874]: I0217 16:39:09.840981 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4cn9\" (UniqueName: \"kubernetes.io/projected/1362684f-91dd-4e6a-a880-30c10ffa7aba-kube-api-access-m4cn9\") pod \"redhat-operators-vj86r\" (UID: \"1362684f-91dd-4e6a-a880-30c10ffa7aba\") " pod="openshift-marketplace/redhat-operators-vj86r" Feb 17 16:39:09 crc kubenswrapper[4874]: I0217 16:39:09.841306 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1362684f-91dd-4e6a-a880-30c10ffa7aba-utilities\") pod \"redhat-operators-vj86r\" (UID: \"1362684f-91dd-4e6a-a880-30c10ffa7aba\") " pod="openshift-marketplace/redhat-operators-vj86r" Feb 17 16:39:09 crc kubenswrapper[4874]: I0217 16:39:09.841521 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1362684f-91dd-4e6a-a880-30c10ffa7aba-catalog-content\") pod \"redhat-operators-vj86r\" (UID: \"1362684f-91dd-4e6a-a880-30c10ffa7aba\") " pod="openshift-marketplace/redhat-operators-vj86r" Feb 17 16:39:09 crc kubenswrapper[4874]: I0217 16:39:09.841983 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1362684f-91dd-4e6a-a880-30c10ffa7aba-utilities\") pod \"redhat-operators-vj86r\" (UID: \"1362684f-91dd-4e6a-a880-30c10ffa7aba\") " pod="openshift-marketplace/redhat-operators-vj86r" Feb 17 16:39:09 crc kubenswrapper[4874]: I0217 16:39:09.846854 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1362684f-91dd-4e6a-a880-30c10ffa7aba-catalog-content\") pod \"redhat-operators-vj86r\" (UID: \"1362684f-91dd-4e6a-a880-30c10ffa7aba\") " pod="openshift-marketplace/redhat-operators-vj86r" Feb 17 16:39:09 crc kubenswrapper[4874]: I0217 16:39:09.872776 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4cn9\" (UniqueName: \"kubernetes.io/projected/1362684f-91dd-4e6a-a880-30c10ffa7aba-kube-api-access-m4cn9\") pod \"redhat-operators-vj86r\" (UID: \"1362684f-91dd-4e6a-a880-30c10ffa7aba\") " pod="openshift-marketplace/redhat-operators-vj86r" Feb 17 16:39:09 crc kubenswrapper[4874]: I0217 16:39:09.920866 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vj86r" Feb 17 16:39:10 crc kubenswrapper[4874]: I0217 16:39:10.324853 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fwzrv"] Feb 17 16:39:10 crc kubenswrapper[4874]: I0217 16:39:10.502125 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vj86r"] Feb 17 16:39:10 crc kubenswrapper[4874]: W0217 16:39:10.502383 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1362684f_91dd_4e6a_a880_30c10ffa7aba.slice/crio-fe6e48ba8ba762103d015ed1277d21c516374b89143e747204b9624e97e70586 WatchSource:0}: Error finding container fe6e48ba8ba762103d015ed1277d21c516374b89143e747204b9624e97e70586: Status 404 returned error can't find the container with id fe6e48ba8ba762103d015ed1277d21c516374b89143e747204b9624e97e70586 Feb 17 16:39:11 crc kubenswrapper[4874]: I0217 16:39:11.065745 4874 generic.go:334] "Generic (PLEG): container finished" podID="14cd0ad2-1615-4ecb-860e-0c1fcd205f1c" containerID="a7d239ef10c227e2d7425342c84588da2160bc205fded44c831e6840102e6520" exitCode=0 Feb 17 16:39:11 crc kubenswrapper[4874]: I0217 16:39:11.065844 
4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fwzrv" event={"ID":"14cd0ad2-1615-4ecb-860e-0c1fcd205f1c","Type":"ContainerDied","Data":"a7d239ef10c227e2d7425342c84588da2160bc205fded44c831e6840102e6520"} Feb 17 16:39:11 crc kubenswrapper[4874]: I0217 16:39:11.066118 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fwzrv" event={"ID":"14cd0ad2-1615-4ecb-860e-0c1fcd205f1c","Type":"ContainerStarted","Data":"44e81de70d0be13f111c72462403a062b3658177afc0269092431e7051c8c900"} Feb 17 16:39:11 crc kubenswrapper[4874]: I0217 16:39:11.067450 4874 generic.go:334] "Generic (PLEG): container finished" podID="1362684f-91dd-4e6a-a880-30c10ffa7aba" containerID="aea2418daf97e9f854299c41880d70f95d5d1d35a0a9808442d7f42d297dfffc" exitCode=0 Feb 17 16:39:11 crc kubenswrapper[4874]: I0217 16:39:11.067488 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vj86r" event={"ID":"1362684f-91dd-4e6a-a880-30c10ffa7aba","Type":"ContainerDied","Data":"aea2418daf97e9f854299c41880d70f95d5d1d35a0a9808442d7f42d297dfffc"} Feb 17 16:39:11 crc kubenswrapper[4874]: I0217 16:39:11.067537 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vj86r" event={"ID":"1362684f-91dd-4e6a-a880-30c10ffa7aba","Type":"ContainerStarted","Data":"fe6e48ba8ba762103d015ed1277d21c516374b89143e747204b9624e97e70586"} Feb 17 16:39:11 crc kubenswrapper[4874]: I0217 16:39:11.067853 4874 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:39:12 crc kubenswrapper[4874]: I0217 16:39:12.080506 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fwzrv" event={"ID":"14cd0ad2-1615-4ecb-860e-0c1fcd205f1c","Type":"ContainerStarted","Data":"7e2a83b0d5dc3c73811aa56efc76bb1d71573d2db72c154ad0f8419b3930e950"} Feb 17 16:39:12 crc 
kubenswrapper[4874]: I0217 16:39:12.082327 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vj86r" event={"ID":"1362684f-91dd-4e6a-a880-30c10ffa7aba","Type":"ContainerStarted","Data":"2ccc58d49dde9269871b2d560a4d8fd8549ee8ae205a90f987bfa8d4442961e9"} Feb 17 16:39:14 crc kubenswrapper[4874]: I0217 16:39:14.111655 4874 generic.go:334] "Generic (PLEG): container finished" podID="14cd0ad2-1615-4ecb-860e-0c1fcd205f1c" containerID="7e2a83b0d5dc3c73811aa56efc76bb1d71573d2db72c154ad0f8419b3930e950" exitCode=0 Feb 17 16:39:14 crc kubenswrapper[4874]: I0217 16:39:14.111918 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fwzrv" event={"ID":"14cd0ad2-1615-4ecb-860e-0c1fcd205f1c","Type":"ContainerDied","Data":"7e2a83b0d5dc3c73811aa56efc76bb1d71573d2db72c154ad0f8419b3930e950"} Feb 17 16:39:15 crc kubenswrapper[4874]: I0217 16:39:15.125694 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fwzrv" event={"ID":"14cd0ad2-1615-4ecb-860e-0c1fcd205f1c","Type":"ContainerStarted","Data":"13c184085a66384b19797793986f42433c34ab22f534da40ab1bfb9c16dabfe2"} Feb 17 16:39:15 crc kubenswrapper[4874]: I0217 16:39:15.156779 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fwzrv" podStartSLOduration=2.508556755 podStartE2EDuration="6.156754985s" podCreationTimestamp="2026-02-17 16:39:09 +0000 UTC" firstStartedPulling="2026-02-17 16:39:11.067632829 +0000 UTC m=+2161.362021390" lastFinishedPulling="2026-02-17 16:39:14.715831059 +0000 UTC m=+2165.010219620" observedRunningTime="2026-02-17 16:39:15.143530568 +0000 UTC m=+2165.437919149" watchObservedRunningTime="2026-02-17 16:39:15.156754985 +0000 UTC m=+2165.451143556" Feb 17 16:39:18 crc kubenswrapper[4874]: E0217 16:39:18.611622 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = 
initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:39:18 crc kubenswrapper[4874]: E0217 16:39:18.612062 4874 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:39:18 crc kubenswrapper[4874]: E0217 16:39:18.612210 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n699h646hf7h59dhdch5d8h679h9dhdch5c7hd5h5bch655h5bfh674h596h5d6h64dh65bh694h67fh66dh5bdhd7h568h697h58bh5b4h59fh694h584h656q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6zkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(cc29c300-b515-47d8-9326-1839ed7772b4): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:39:18 crc kubenswrapper[4874]: E0217 16:39:18.613850 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:39:19 crc kubenswrapper[4874]: I0217 16:39:19.161643 4874 generic.go:334] "Generic (PLEG): container finished" podID="1362684f-91dd-4e6a-a880-30c10ffa7aba" containerID="2ccc58d49dde9269871b2d560a4d8fd8549ee8ae205a90f987bfa8d4442961e9" exitCode=0 Feb 17 16:39:19 crc kubenswrapper[4874]: I0217 16:39:19.161694 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vj86r" event={"ID":"1362684f-91dd-4e6a-a880-30c10ffa7aba","Type":"ContainerDied","Data":"2ccc58d49dde9269871b2d560a4d8fd8549ee8ae205a90f987bfa8d4442961e9"} Feb 17 16:39:19 crc kubenswrapper[4874]: I0217 16:39:19.733783 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fwzrv" Feb 17 16:39:19 crc kubenswrapper[4874]: I0217 16:39:19.734173 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fwzrv" Feb 17 16:39:20 crc kubenswrapper[4874]: E0217 16:39:20.605320 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:39:20 crc kubenswrapper[4874]: E0217 16:39:20.605619 4874 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:39:20 crc kubenswrapper[4874]: E0217 16:39:20.605733 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtgn
h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-ddhb8_openstack(122736d5-78f5-42dc-b6ab-343724bac19d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:39:20 crc kubenswrapper[4874]: E0217 16:39:20.606934 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:39:20 crc kubenswrapper[4874]: I0217 16:39:20.785397 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-fwzrv" podUID="14cd0ad2-1615-4ecb-860e-0c1fcd205f1c" containerName="registry-server" probeResult="failure" output=< Feb 17 16:39:20 crc kubenswrapper[4874]: timeout: failed to connect service ":50051" within 1s Feb 17 16:39:20 crc kubenswrapper[4874]: > Feb 17 16:39:24 crc kubenswrapper[4874]: I0217 16:39:24.213426 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vj86r" event={"ID":"1362684f-91dd-4e6a-a880-30c10ffa7aba","Type":"ContainerStarted","Data":"2865ac86c23826285151e87c2e75eb3fc3815055241981855342be0b33566e65"} Feb 17 16:39:24 crc kubenswrapper[4874]: I0217 16:39:24.237415 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vj86r" podStartSLOduration=2.447307109 podStartE2EDuration="15.237393827s" podCreationTimestamp="2026-02-17 16:39:09 +0000 UTC" firstStartedPulling="2026-02-17 16:39:11.069348492 +0000 UTC m=+2161.363737053" lastFinishedPulling="2026-02-17 16:39:23.85943521 +0000 UTC m=+2174.153823771" observedRunningTime="2026-02-17 16:39:24.232547467 +0000 UTC m=+2174.526936028" watchObservedRunningTime="2026-02-17 16:39:24.237393827 +0000 UTC m=+2174.531782398" Feb 17 16:39:27 crc kubenswrapper[4874]: I0217 16:39:27.724350 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:39:27 crc kubenswrapper[4874]: I0217 16:39:27.724957 4874 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:39:27 crc kubenswrapper[4874]: I0217 16:39:27.725011 4874 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" Feb 17 16:39:27 crc kubenswrapper[4874]: I0217 16:39:27.726193 4874 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8814fc835f2282d6d41589f1cebfc57dc4e1d7a7e758d1b73c0eca3955883438"} pod="openshift-machine-config-operator/machine-config-daemon-cccdg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:39:27 crc kubenswrapper[4874]: I0217 16:39:27.726274 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" containerID="cri-o://8814fc835f2282d6d41589f1cebfc57dc4e1d7a7e758d1b73c0eca3955883438" gracePeriod=600 Feb 17 16:39:28 crc kubenswrapper[4874]: I0217 16:39:28.256472 4874 generic.go:334] "Generic (PLEG): container finished" podID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerID="8814fc835f2282d6d41589f1cebfc57dc4e1d7a7e758d1b73c0eca3955883438" exitCode=0 Feb 17 16:39:28 crc kubenswrapper[4874]: I0217 16:39:28.256547 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerDied","Data":"8814fc835f2282d6d41589f1cebfc57dc4e1d7a7e758d1b73c0eca3955883438"} Feb 17 16:39:28 crc kubenswrapper[4874]: I0217 16:39:28.256788 4874 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerStarted","Data":"c176f5b5287d4fd95f2c95129bf20f63582440ae6fd611735dc8a4627ac1abf3"} Feb 17 16:39:28 crc kubenswrapper[4874]: I0217 16:39:28.256812 4874 scope.go:117] "RemoveContainer" containerID="ae76b5b70958fa4d01ecf513ee19ba4d16d886491645f525975afe4bbcc5438e" Feb 17 16:39:29 crc kubenswrapper[4874]: I0217 16:39:29.810415 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fwzrv" Feb 17 16:39:29 crc kubenswrapper[4874]: I0217 16:39:29.859645 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fwzrv" Feb 17 16:39:29 crc kubenswrapper[4874]: I0217 16:39:29.921806 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vj86r" Feb 17 16:39:29 crc kubenswrapper[4874]: I0217 16:39:29.922163 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vj86r" Feb 17 16:39:30 crc kubenswrapper[4874]: I0217 16:39:30.056211 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fwzrv"] Feb 17 16:39:31 crc kubenswrapper[4874]: I0217 16:39:31.005711 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vj86r" podUID="1362684f-91dd-4e6a-a880-30c10ffa7aba" containerName="registry-server" probeResult="failure" output=< Feb 17 16:39:31 crc kubenswrapper[4874]: timeout: failed to connect service ":50051" within 1s Feb 17 16:39:31 crc kubenswrapper[4874]: > Feb 17 16:39:31 crc kubenswrapper[4874]: I0217 16:39:31.292027 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fwzrv" podUID="14cd0ad2-1615-4ecb-860e-0c1fcd205f1c" 
containerName="registry-server" containerID="cri-o://13c184085a66384b19797793986f42433c34ab22f534da40ab1bfb9c16dabfe2" gracePeriod=2 Feb 17 16:39:32 crc kubenswrapper[4874]: I0217 16:39:32.080341 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fwzrv" Feb 17 16:39:32 crc kubenswrapper[4874]: I0217 16:39:32.207951 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14cd0ad2-1615-4ecb-860e-0c1fcd205f1c-catalog-content\") pod \"14cd0ad2-1615-4ecb-860e-0c1fcd205f1c\" (UID: \"14cd0ad2-1615-4ecb-860e-0c1fcd205f1c\") " Feb 17 16:39:32 crc kubenswrapper[4874]: I0217 16:39:32.208040 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2mxz\" (UniqueName: \"kubernetes.io/projected/14cd0ad2-1615-4ecb-860e-0c1fcd205f1c-kube-api-access-d2mxz\") pod \"14cd0ad2-1615-4ecb-860e-0c1fcd205f1c\" (UID: \"14cd0ad2-1615-4ecb-860e-0c1fcd205f1c\") " Feb 17 16:39:32 crc kubenswrapper[4874]: I0217 16:39:32.208256 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14cd0ad2-1615-4ecb-860e-0c1fcd205f1c-utilities\") pod \"14cd0ad2-1615-4ecb-860e-0c1fcd205f1c\" (UID: \"14cd0ad2-1615-4ecb-860e-0c1fcd205f1c\") " Feb 17 16:39:32 crc kubenswrapper[4874]: I0217 16:39:32.209336 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14cd0ad2-1615-4ecb-860e-0c1fcd205f1c-utilities" (OuterVolumeSpecName: "utilities") pod "14cd0ad2-1615-4ecb-860e-0c1fcd205f1c" (UID: "14cd0ad2-1615-4ecb-860e-0c1fcd205f1c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:39:32 crc kubenswrapper[4874]: I0217 16:39:32.217337 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14cd0ad2-1615-4ecb-860e-0c1fcd205f1c-kube-api-access-d2mxz" (OuterVolumeSpecName: "kube-api-access-d2mxz") pod "14cd0ad2-1615-4ecb-860e-0c1fcd205f1c" (UID: "14cd0ad2-1615-4ecb-860e-0c1fcd205f1c"). InnerVolumeSpecName "kube-api-access-d2mxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:39:32 crc kubenswrapper[4874]: I0217 16:39:32.248352 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14cd0ad2-1615-4ecb-860e-0c1fcd205f1c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14cd0ad2-1615-4ecb-860e-0c1fcd205f1c" (UID: "14cd0ad2-1615-4ecb-860e-0c1fcd205f1c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:39:32 crc kubenswrapper[4874]: I0217 16:39:32.310415 4874 generic.go:334] "Generic (PLEG): container finished" podID="14cd0ad2-1615-4ecb-860e-0c1fcd205f1c" containerID="13c184085a66384b19797793986f42433c34ab22f534da40ab1bfb9c16dabfe2" exitCode=0 Feb 17 16:39:32 crc kubenswrapper[4874]: I0217 16:39:32.310458 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fwzrv" event={"ID":"14cd0ad2-1615-4ecb-860e-0c1fcd205f1c","Type":"ContainerDied","Data":"13c184085a66384b19797793986f42433c34ab22f534da40ab1bfb9c16dabfe2"} Feb 17 16:39:32 crc kubenswrapper[4874]: I0217 16:39:32.310488 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fwzrv" event={"ID":"14cd0ad2-1615-4ecb-860e-0c1fcd205f1c","Type":"ContainerDied","Data":"44e81de70d0be13f111c72462403a062b3658177afc0269092431e7051c8c900"} Feb 17 16:39:32 crc kubenswrapper[4874]: I0217 16:39:32.310505 4874 scope.go:117] "RemoveContainer" 
containerID="13c184085a66384b19797793986f42433c34ab22f534da40ab1bfb9c16dabfe2" Feb 17 16:39:32 crc kubenswrapper[4874]: I0217 16:39:32.310519 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fwzrv" Feb 17 16:39:32 crc kubenswrapper[4874]: I0217 16:39:32.313252 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14cd0ad2-1615-4ecb-860e-0c1fcd205f1c-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:39:32 crc kubenswrapper[4874]: I0217 16:39:32.313517 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14cd0ad2-1615-4ecb-860e-0c1fcd205f1c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:39:32 crc kubenswrapper[4874]: I0217 16:39:32.313531 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2mxz\" (UniqueName: \"kubernetes.io/projected/14cd0ad2-1615-4ecb-860e-0c1fcd205f1c-kube-api-access-d2mxz\") on node \"crc\" DevicePath \"\"" Feb 17 16:39:32 crc kubenswrapper[4874]: I0217 16:39:32.349907 4874 scope.go:117] "RemoveContainer" containerID="7e2a83b0d5dc3c73811aa56efc76bb1d71573d2db72c154ad0f8419b3930e950" Feb 17 16:39:32 crc kubenswrapper[4874]: I0217 16:39:32.363901 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fwzrv"] Feb 17 16:39:32 crc kubenswrapper[4874]: I0217 16:39:32.376655 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fwzrv"] Feb 17 16:39:32 crc kubenswrapper[4874]: I0217 16:39:32.377465 4874 scope.go:117] "RemoveContainer" containerID="a7d239ef10c227e2d7425342c84588da2160bc205fded44c831e6840102e6520" Feb 17 16:39:32 crc kubenswrapper[4874]: I0217 16:39:32.443230 4874 scope.go:117] "RemoveContainer" containerID="13c184085a66384b19797793986f42433c34ab22f534da40ab1bfb9c16dabfe2" Feb 17 16:39:32 crc 
kubenswrapper[4874]: E0217 16:39:32.443716 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13c184085a66384b19797793986f42433c34ab22f534da40ab1bfb9c16dabfe2\": container with ID starting with 13c184085a66384b19797793986f42433c34ab22f534da40ab1bfb9c16dabfe2 not found: ID does not exist" containerID="13c184085a66384b19797793986f42433c34ab22f534da40ab1bfb9c16dabfe2" Feb 17 16:39:32 crc kubenswrapper[4874]: I0217 16:39:32.443769 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13c184085a66384b19797793986f42433c34ab22f534da40ab1bfb9c16dabfe2"} err="failed to get container status \"13c184085a66384b19797793986f42433c34ab22f534da40ab1bfb9c16dabfe2\": rpc error: code = NotFound desc = could not find container \"13c184085a66384b19797793986f42433c34ab22f534da40ab1bfb9c16dabfe2\": container with ID starting with 13c184085a66384b19797793986f42433c34ab22f534da40ab1bfb9c16dabfe2 not found: ID does not exist" Feb 17 16:39:32 crc kubenswrapper[4874]: I0217 16:39:32.443793 4874 scope.go:117] "RemoveContainer" containerID="7e2a83b0d5dc3c73811aa56efc76bb1d71573d2db72c154ad0f8419b3930e950" Feb 17 16:39:32 crc kubenswrapper[4874]: E0217 16:39:32.444371 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e2a83b0d5dc3c73811aa56efc76bb1d71573d2db72c154ad0f8419b3930e950\": container with ID starting with 7e2a83b0d5dc3c73811aa56efc76bb1d71573d2db72c154ad0f8419b3930e950 not found: ID does not exist" containerID="7e2a83b0d5dc3c73811aa56efc76bb1d71573d2db72c154ad0f8419b3930e950" Feb 17 16:39:32 crc kubenswrapper[4874]: I0217 16:39:32.444424 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e2a83b0d5dc3c73811aa56efc76bb1d71573d2db72c154ad0f8419b3930e950"} err="failed to get container status 
\"7e2a83b0d5dc3c73811aa56efc76bb1d71573d2db72c154ad0f8419b3930e950\": rpc error: code = NotFound desc = could not find container \"7e2a83b0d5dc3c73811aa56efc76bb1d71573d2db72c154ad0f8419b3930e950\": container with ID starting with 7e2a83b0d5dc3c73811aa56efc76bb1d71573d2db72c154ad0f8419b3930e950 not found: ID does not exist" Feb 17 16:39:32 crc kubenswrapper[4874]: I0217 16:39:32.444451 4874 scope.go:117] "RemoveContainer" containerID="a7d239ef10c227e2d7425342c84588da2160bc205fded44c831e6840102e6520" Feb 17 16:39:32 crc kubenswrapper[4874]: E0217 16:39:32.444777 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7d239ef10c227e2d7425342c84588da2160bc205fded44c831e6840102e6520\": container with ID starting with a7d239ef10c227e2d7425342c84588da2160bc205fded44c831e6840102e6520 not found: ID does not exist" containerID="a7d239ef10c227e2d7425342c84588da2160bc205fded44c831e6840102e6520" Feb 17 16:39:32 crc kubenswrapper[4874]: I0217 16:39:32.444811 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7d239ef10c227e2d7425342c84588da2160bc205fded44c831e6840102e6520"} err="failed to get container status \"a7d239ef10c227e2d7425342c84588da2160bc205fded44c831e6840102e6520\": rpc error: code = NotFound desc = could not find container \"a7d239ef10c227e2d7425342c84588da2160bc205fded44c831e6840102e6520\": container with ID starting with a7d239ef10c227e2d7425342c84588da2160bc205fded44c831e6840102e6520 not found: ID does not exist" Feb 17 16:39:32 crc kubenswrapper[4874]: E0217 16:39:32.458749 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:39:32 crc 
kubenswrapper[4874]: I0217 16:39:32.470108 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14cd0ad2-1615-4ecb-860e-0c1fcd205f1c" path="/var/lib/kubelet/pods/14cd0ad2-1615-4ecb-860e-0c1fcd205f1c/volumes" Feb 17 16:39:35 crc kubenswrapper[4874]: E0217 16:39:35.461326 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:39:40 crc kubenswrapper[4874]: I0217 16:39:40.992356 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vj86r" podUID="1362684f-91dd-4e6a-a880-30c10ffa7aba" containerName="registry-server" probeResult="failure" output=< Feb 17 16:39:40 crc kubenswrapper[4874]: timeout: failed to connect service ":50051" within 1s Feb 17 16:39:40 crc kubenswrapper[4874]: > Feb 17 16:39:47 crc kubenswrapper[4874]: E0217 16:39:47.461974 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:39:49 crc kubenswrapper[4874]: E0217 16:39:49.460514 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:39:50 crc kubenswrapper[4874]: I0217 16:39:50.015536 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-vj86r" Feb 17 16:39:50 crc kubenswrapper[4874]: I0217 16:39:50.098749 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vj86r" Feb 17 16:39:50 crc kubenswrapper[4874]: I0217 16:39:50.265430 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vj86r"] Feb 17 16:39:51 crc kubenswrapper[4874]: I0217 16:39:51.570315 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vj86r" podUID="1362684f-91dd-4e6a-a880-30c10ffa7aba" containerName="registry-server" containerID="cri-o://2865ac86c23826285151e87c2e75eb3fc3815055241981855342be0b33566e65" gracePeriod=2 Feb 17 16:39:52 crc kubenswrapper[4874]: I0217 16:39:52.147742 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vj86r" Feb 17 16:39:52 crc kubenswrapper[4874]: I0217 16:39:52.294250 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1362684f-91dd-4e6a-a880-30c10ffa7aba-utilities\") pod \"1362684f-91dd-4e6a-a880-30c10ffa7aba\" (UID: \"1362684f-91dd-4e6a-a880-30c10ffa7aba\") " Feb 17 16:39:52 crc kubenswrapper[4874]: I0217 16:39:52.294847 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4cn9\" (UniqueName: \"kubernetes.io/projected/1362684f-91dd-4e6a-a880-30c10ffa7aba-kube-api-access-m4cn9\") pod \"1362684f-91dd-4e6a-a880-30c10ffa7aba\" (UID: \"1362684f-91dd-4e6a-a880-30c10ffa7aba\") " Feb 17 16:39:52 crc kubenswrapper[4874]: I0217 16:39:52.294879 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1362684f-91dd-4e6a-a880-30c10ffa7aba-catalog-content\") pod \"1362684f-91dd-4e6a-a880-30c10ffa7aba\" (UID: 
\"1362684f-91dd-4e6a-a880-30c10ffa7aba\") " Feb 17 16:39:52 crc kubenswrapper[4874]: I0217 16:39:52.295954 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1362684f-91dd-4e6a-a880-30c10ffa7aba-utilities" (OuterVolumeSpecName: "utilities") pod "1362684f-91dd-4e6a-a880-30c10ffa7aba" (UID: "1362684f-91dd-4e6a-a880-30c10ffa7aba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:39:52 crc kubenswrapper[4874]: I0217 16:39:52.301000 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1362684f-91dd-4e6a-a880-30c10ffa7aba-kube-api-access-m4cn9" (OuterVolumeSpecName: "kube-api-access-m4cn9") pod "1362684f-91dd-4e6a-a880-30c10ffa7aba" (UID: "1362684f-91dd-4e6a-a880-30c10ffa7aba"). InnerVolumeSpecName "kube-api-access-m4cn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:39:52 crc kubenswrapper[4874]: I0217 16:39:52.397842 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4cn9\" (UniqueName: \"kubernetes.io/projected/1362684f-91dd-4e6a-a880-30c10ffa7aba-kube-api-access-m4cn9\") on node \"crc\" DevicePath \"\"" Feb 17 16:39:52 crc kubenswrapper[4874]: I0217 16:39:52.397895 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1362684f-91dd-4e6a-a880-30c10ffa7aba-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:39:52 crc kubenswrapper[4874]: I0217 16:39:52.424062 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1362684f-91dd-4e6a-a880-30c10ffa7aba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1362684f-91dd-4e6a-a880-30c10ffa7aba" (UID: "1362684f-91dd-4e6a-a880-30c10ffa7aba"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:39:52 crc kubenswrapper[4874]: I0217 16:39:52.499973 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1362684f-91dd-4e6a-a880-30c10ffa7aba-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:39:52 crc kubenswrapper[4874]: I0217 16:39:52.582256 4874 generic.go:334] "Generic (PLEG): container finished" podID="1362684f-91dd-4e6a-a880-30c10ffa7aba" containerID="2865ac86c23826285151e87c2e75eb3fc3815055241981855342be0b33566e65" exitCode=0 Feb 17 16:39:52 crc kubenswrapper[4874]: I0217 16:39:52.582291 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vj86r" event={"ID":"1362684f-91dd-4e6a-a880-30c10ffa7aba","Type":"ContainerDied","Data":"2865ac86c23826285151e87c2e75eb3fc3815055241981855342be0b33566e65"} Feb 17 16:39:52 crc kubenswrapper[4874]: I0217 16:39:52.582340 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vj86r" event={"ID":"1362684f-91dd-4e6a-a880-30c10ffa7aba","Type":"ContainerDied","Data":"fe6e48ba8ba762103d015ed1277d21c516374b89143e747204b9624e97e70586"} Feb 17 16:39:52 crc kubenswrapper[4874]: I0217 16:39:52.582358 4874 scope.go:117] "RemoveContainer" containerID="2865ac86c23826285151e87c2e75eb3fc3815055241981855342be0b33566e65" Feb 17 16:39:52 crc kubenswrapper[4874]: I0217 16:39:52.582377 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vj86r" Feb 17 16:39:52 crc kubenswrapper[4874]: I0217 16:39:52.620438 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vj86r"] Feb 17 16:39:52 crc kubenswrapper[4874]: I0217 16:39:52.625358 4874 scope.go:117] "RemoveContainer" containerID="2ccc58d49dde9269871b2d560a4d8fd8549ee8ae205a90f987bfa8d4442961e9" Feb 17 16:39:52 crc kubenswrapper[4874]: I0217 16:39:52.634718 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vj86r"] Feb 17 16:39:52 crc kubenswrapper[4874]: I0217 16:39:52.667327 4874 scope.go:117] "RemoveContainer" containerID="aea2418daf97e9f854299c41880d70f95d5d1d35a0a9808442d7f42d297dfffc" Feb 17 16:39:52 crc kubenswrapper[4874]: I0217 16:39:52.763653 4874 scope.go:117] "RemoveContainer" containerID="2865ac86c23826285151e87c2e75eb3fc3815055241981855342be0b33566e65" Feb 17 16:39:52 crc kubenswrapper[4874]: E0217 16:39:52.764384 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2865ac86c23826285151e87c2e75eb3fc3815055241981855342be0b33566e65\": container with ID starting with 2865ac86c23826285151e87c2e75eb3fc3815055241981855342be0b33566e65 not found: ID does not exist" containerID="2865ac86c23826285151e87c2e75eb3fc3815055241981855342be0b33566e65" Feb 17 16:39:52 crc kubenswrapper[4874]: I0217 16:39:52.764425 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2865ac86c23826285151e87c2e75eb3fc3815055241981855342be0b33566e65"} err="failed to get container status \"2865ac86c23826285151e87c2e75eb3fc3815055241981855342be0b33566e65\": rpc error: code = NotFound desc = could not find container \"2865ac86c23826285151e87c2e75eb3fc3815055241981855342be0b33566e65\": container with ID starting with 2865ac86c23826285151e87c2e75eb3fc3815055241981855342be0b33566e65 not found: ID does 
not exist" Feb 17 16:39:52 crc kubenswrapper[4874]: I0217 16:39:52.764493 4874 scope.go:117] "RemoveContainer" containerID="2ccc58d49dde9269871b2d560a4d8fd8549ee8ae205a90f987bfa8d4442961e9" Feb 17 16:39:52 crc kubenswrapper[4874]: E0217 16:39:52.764859 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ccc58d49dde9269871b2d560a4d8fd8549ee8ae205a90f987bfa8d4442961e9\": container with ID starting with 2ccc58d49dde9269871b2d560a4d8fd8549ee8ae205a90f987bfa8d4442961e9 not found: ID does not exist" containerID="2ccc58d49dde9269871b2d560a4d8fd8549ee8ae205a90f987bfa8d4442961e9" Feb 17 16:39:52 crc kubenswrapper[4874]: I0217 16:39:52.764885 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ccc58d49dde9269871b2d560a4d8fd8549ee8ae205a90f987bfa8d4442961e9"} err="failed to get container status \"2ccc58d49dde9269871b2d560a4d8fd8549ee8ae205a90f987bfa8d4442961e9\": rpc error: code = NotFound desc = could not find container \"2ccc58d49dde9269871b2d560a4d8fd8549ee8ae205a90f987bfa8d4442961e9\": container with ID starting with 2ccc58d49dde9269871b2d560a4d8fd8549ee8ae205a90f987bfa8d4442961e9 not found: ID does not exist" Feb 17 16:39:52 crc kubenswrapper[4874]: I0217 16:39:52.764900 4874 scope.go:117] "RemoveContainer" containerID="aea2418daf97e9f854299c41880d70f95d5d1d35a0a9808442d7f42d297dfffc" Feb 17 16:39:52 crc kubenswrapper[4874]: E0217 16:39:52.765197 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aea2418daf97e9f854299c41880d70f95d5d1d35a0a9808442d7f42d297dfffc\": container with ID starting with aea2418daf97e9f854299c41880d70f95d5d1d35a0a9808442d7f42d297dfffc not found: ID does not exist" containerID="aea2418daf97e9f854299c41880d70f95d5d1d35a0a9808442d7f42d297dfffc" Feb 17 16:39:52 crc kubenswrapper[4874]: I0217 16:39:52.765224 4874 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aea2418daf97e9f854299c41880d70f95d5d1d35a0a9808442d7f42d297dfffc"} err="failed to get container status \"aea2418daf97e9f854299c41880d70f95d5d1d35a0a9808442d7f42d297dfffc\": rpc error: code = NotFound desc = could not find container \"aea2418daf97e9f854299c41880d70f95d5d1d35a0a9808442d7f42d297dfffc\": container with ID starting with aea2418daf97e9f854299c41880d70f95d5d1d35a0a9808442d7f42d297dfffc not found: ID does not exist" Feb 17 16:39:54 crc kubenswrapper[4874]: I0217 16:39:54.475557 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1362684f-91dd-4e6a-a880-30c10ffa7aba" path="/var/lib/kubelet/pods/1362684f-91dd-4e6a-a880-30c10ffa7aba/volumes" Feb 17 16:39:59 crc kubenswrapper[4874]: E0217 16:39:59.460281 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:40:00 crc kubenswrapper[4874]: E0217 16:40:00.470516 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:40:12 crc kubenswrapper[4874]: E0217 16:40:12.460842 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:40:13 crc kubenswrapper[4874]: E0217 
16:40:13.460922 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:40:24 crc kubenswrapper[4874]: E0217 16:40:24.460694 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:40:25 crc kubenswrapper[4874]: I0217 16:40:25.009440 4874 generic.go:334] "Generic (PLEG): container finished" podID="2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c" containerID="a0c3c013513f32280ffde5ee0ef69a1e4ac5611dd3e7d50777cff55a4bb0ff33" exitCode=2 Feb 17 16:40:25 crc kubenswrapper[4874]: I0217 16:40:25.009500 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfn67" event={"ID":"2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c","Type":"ContainerDied","Data":"a0c3c013513f32280ffde5ee0ef69a1e4ac5611dd3e7d50777cff55a4bb0ff33"} Feb 17 16:40:25 crc kubenswrapper[4874]: E0217 16:40:25.459625 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:40:26 crc kubenswrapper[4874]: I0217 16:40:26.657622 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfn67" Feb 17 16:40:26 crc kubenswrapper[4874]: I0217 16:40:26.768748 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c-inventory\") pod \"2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c\" (UID: \"2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c\") " Feb 17 16:40:26 crc kubenswrapper[4874]: I0217 16:40:26.768901 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c-ssh-key-openstack-edpm-ipam\") pod \"2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c\" (UID: \"2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c\") " Feb 17 16:40:26 crc kubenswrapper[4874]: I0217 16:40:26.768984 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwhfr\" (UniqueName: \"kubernetes.io/projected/2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c-kube-api-access-fwhfr\") pod \"2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c\" (UID: \"2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c\") " Feb 17 16:40:26 crc kubenswrapper[4874]: I0217 16:40:26.774672 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c-kube-api-access-fwhfr" (OuterVolumeSpecName: "kube-api-access-fwhfr") pod "2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c" (UID: "2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c"). InnerVolumeSpecName "kube-api-access-fwhfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:40:26 crc kubenswrapper[4874]: I0217 16:40:26.801676 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c-inventory" (OuterVolumeSpecName: "inventory") pod "2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c" (UID: "2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:40:26 crc kubenswrapper[4874]: I0217 16:40:26.803661 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c" (UID: "2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:40:26 crc kubenswrapper[4874]: I0217 16:40:26.871386 4874 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 16:40:26 crc kubenswrapper[4874]: I0217 16:40:26.871417 4874 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:40:26 crc kubenswrapper[4874]: I0217 16:40:26.871430 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwhfr\" (UniqueName: \"kubernetes.io/projected/2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c-kube-api-access-fwhfr\") on node \"crc\" DevicePath \"\"" Feb 17 16:40:27 crc kubenswrapper[4874]: I0217 16:40:27.039967 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfn67" event={"ID":"2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c","Type":"ContainerDied","Data":"a9a4784f48fa660bc0cc8eead830543fe7bd0f215619e4b18ae3cc3d86c0b0ac"} Feb 17 16:40:27 crc kubenswrapper[4874]: I0217 16:40:27.040018 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9a4784f48fa660bc0cc8eead830543fe7bd0f215619e4b18ae3cc3d86c0b0ac" Feb 17 16:40:27 crc kubenswrapper[4874]: I0217 
16:40:27.040045 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pfn67" Feb 17 16:40:34 crc kubenswrapper[4874]: I0217 16:40:34.041064 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rspxw"] Feb 17 16:40:34 crc kubenswrapper[4874]: E0217 16:40:34.043465 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1362684f-91dd-4e6a-a880-30c10ffa7aba" containerName="extract-content" Feb 17 16:40:34 crc kubenswrapper[4874]: I0217 16:40:34.043486 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="1362684f-91dd-4e6a-a880-30c10ffa7aba" containerName="extract-content" Feb 17 16:40:34 crc kubenswrapper[4874]: E0217 16:40:34.043507 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1362684f-91dd-4e6a-a880-30c10ffa7aba" containerName="extract-utilities" Feb 17 16:40:34 crc kubenswrapper[4874]: I0217 16:40:34.043515 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="1362684f-91dd-4e6a-a880-30c10ffa7aba" containerName="extract-utilities" Feb 17 16:40:34 crc kubenswrapper[4874]: E0217 16:40:34.043531 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14cd0ad2-1615-4ecb-860e-0c1fcd205f1c" containerName="extract-utilities" Feb 17 16:40:34 crc kubenswrapper[4874]: I0217 16:40:34.043539 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="14cd0ad2-1615-4ecb-860e-0c1fcd205f1c" containerName="extract-utilities" Feb 17 16:40:34 crc kubenswrapper[4874]: E0217 16:40:34.043570 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1362684f-91dd-4e6a-a880-30c10ffa7aba" containerName="registry-server" Feb 17 16:40:34 crc kubenswrapper[4874]: I0217 16:40:34.043577 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="1362684f-91dd-4e6a-a880-30c10ffa7aba" containerName="registry-server" Feb 17 16:40:34 crc kubenswrapper[4874]: 
E0217 16:40:34.043587 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14cd0ad2-1615-4ecb-860e-0c1fcd205f1c" containerName="extract-content" Feb 17 16:40:34 crc kubenswrapper[4874]: I0217 16:40:34.043594 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="14cd0ad2-1615-4ecb-860e-0c1fcd205f1c" containerName="extract-content" Feb 17 16:40:34 crc kubenswrapper[4874]: E0217 16:40:34.043608 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14cd0ad2-1615-4ecb-860e-0c1fcd205f1c" containerName="registry-server" Feb 17 16:40:34 crc kubenswrapper[4874]: I0217 16:40:34.043618 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="14cd0ad2-1615-4ecb-860e-0c1fcd205f1c" containerName="registry-server" Feb 17 16:40:34 crc kubenswrapper[4874]: E0217 16:40:34.043641 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:40:34 crc kubenswrapper[4874]: I0217 16:40:34.043649 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:40:34 crc kubenswrapper[4874]: I0217 16:40:34.043914 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="1362684f-91dd-4e6a-a880-30c10ffa7aba" containerName="registry-server" Feb 17 16:40:34 crc kubenswrapper[4874]: I0217 16:40:34.043949 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="14cd0ad2-1615-4ecb-860e-0c1fcd205f1c" containerName="registry-server" Feb 17 16:40:34 crc kubenswrapper[4874]: I0217 16:40:34.043967 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:40:34 crc kubenswrapper[4874]: I0217 16:40:34.045110 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rspxw" Feb 17 16:40:34 crc kubenswrapper[4874]: I0217 16:40:34.048794 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 16:40:34 crc kubenswrapper[4874]: I0217 16:40:34.049031 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 16:40:34 crc kubenswrapper[4874]: I0217 16:40:34.049339 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 16:40:34 crc kubenswrapper[4874]: I0217 16:40:34.049366 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dsqj4" Feb 17 16:40:34 crc kubenswrapper[4874]: I0217 16:40:34.062285 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rspxw"] Feb 17 16:40:34 crc kubenswrapper[4874]: I0217 16:40:34.180461 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7ndw\" (UniqueName: \"kubernetes.io/projected/0d027e77-e298-4ee6-bad9-b12332cc3a81-kube-api-access-n7ndw\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rspxw\" (UID: \"0d027e77-e298-4ee6-bad9-b12332cc3a81\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rspxw" Feb 17 16:40:34 crc kubenswrapper[4874]: I0217 16:40:34.180862 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d027e77-e298-4ee6-bad9-b12332cc3a81-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rspxw\" (UID: \"0d027e77-e298-4ee6-bad9-b12332cc3a81\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rspxw" Feb 17 16:40:34 crc 
kubenswrapper[4874]: I0217 16:40:34.180948 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d027e77-e298-4ee6-bad9-b12332cc3a81-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rspxw\" (UID: \"0d027e77-e298-4ee6-bad9-b12332cc3a81\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rspxw" Feb 17 16:40:34 crc kubenswrapper[4874]: I0217 16:40:34.282769 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d027e77-e298-4ee6-bad9-b12332cc3a81-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rspxw\" (UID: \"0d027e77-e298-4ee6-bad9-b12332cc3a81\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rspxw" Feb 17 16:40:34 crc kubenswrapper[4874]: I0217 16:40:34.282847 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d027e77-e298-4ee6-bad9-b12332cc3a81-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rspxw\" (UID: \"0d027e77-e298-4ee6-bad9-b12332cc3a81\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rspxw" Feb 17 16:40:34 crc kubenswrapper[4874]: I0217 16:40:34.282937 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7ndw\" (UniqueName: \"kubernetes.io/projected/0d027e77-e298-4ee6-bad9-b12332cc3a81-kube-api-access-n7ndw\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rspxw\" (UID: \"0d027e77-e298-4ee6-bad9-b12332cc3a81\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rspxw" Feb 17 16:40:34 crc kubenswrapper[4874]: I0217 16:40:34.296987 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/0d027e77-e298-4ee6-bad9-b12332cc3a81-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rspxw\" (UID: \"0d027e77-e298-4ee6-bad9-b12332cc3a81\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rspxw" Feb 17 16:40:34 crc kubenswrapper[4874]: I0217 16:40:34.297287 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d027e77-e298-4ee6-bad9-b12332cc3a81-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rspxw\" (UID: \"0d027e77-e298-4ee6-bad9-b12332cc3a81\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rspxw" Feb 17 16:40:34 crc kubenswrapper[4874]: I0217 16:40:34.305850 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7ndw\" (UniqueName: \"kubernetes.io/projected/0d027e77-e298-4ee6-bad9-b12332cc3a81-kube-api-access-n7ndw\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rspxw\" (UID: \"0d027e77-e298-4ee6-bad9-b12332cc3a81\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rspxw" Feb 17 16:40:34 crc kubenswrapper[4874]: I0217 16:40:34.378145 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rspxw" Feb 17 16:40:34 crc kubenswrapper[4874]: I0217 16:40:34.812821 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rspxw"] Feb 17 16:40:35 crc kubenswrapper[4874]: I0217 16:40:35.124285 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rspxw" event={"ID":"0d027e77-e298-4ee6-bad9-b12332cc3a81","Type":"ContainerStarted","Data":"d1a3ab177cb8125ecf4e8b96c8829c58f4189e88e881e8052b42a1f7ddffc580"} Feb 17 16:40:36 crc kubenswrapper[4874]: I0217 16:40:36.142439 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rspxw" event={"ID":"0d027e77-e298-4ee6-bad9-b12332cc3a81","Type":"ContainerStarted","Data":"a37dca808fc52f4b680140c6e4d572e61425608ff1cfe99d8f81df5486f9607e"} Feb 17 16:40:36 crc kubenswrapper[4874]: E0217 16:40:36.459580 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:40:39 crc kubenswrapper[4874]: E0217 16:40:39.460911 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:40:47 crc kubenswrapper[4874]: E0217 16:40:47.459602 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:40:51 crc kubenswrapper[4874]: E0217 16:40:51.459252 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:41:01 crc kubenswrapper[4874]: E0217 16:41:01.460677 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:41:02 crc kubenswrapper[4874]: E0217 16:41:02.478929 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:41:12 crc kubenswrapper[4874]: E0217 16:41:12.459221 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:41:14 crc kubenswrapper[4874]: E0217 16:41:14.461364 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:41:25 crc kubenswrapper[4874]: E0217 16:41:25.460122 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:41:25 crc kubenswrapper[4874]: E0217 16:41:25.460150 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:41:38 crc kubenswrapper[4874]: E0217 16:41:38.460179 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:41:40 crc kubenswrapper[4874]: E0217 16:41:40.467443 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:41:51 crc kubenswrapper[4874]: E0217 16:41:51.459284 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:41:53 crc kubenswrapper[4874]: E0217 16:41:53.459587 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:41:57 crc kubenswrapper[4874]: I0217 16:41:57.725313 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:41:57 crc kubenswrapper[4874]: I0217 16:41:57.726007 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:42:04 crc kubenswrapper[4874]: E0217 16:42:04.461485 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:42:08 crc kubenswrapper[4874]: E0217 16:42:08.461587 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:42:16 crc kubenswrapper[4874]: E0217 16:42:16.462476 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:42:21 crc kubenswrapper[4874]: E0217 16:42:21.459674 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:42:27 crc kubenswrapper[4874]: I0217 16:42:27.725305 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:42:27 crc kubenswrapper[4874]: I0217 16:42:27.725913 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:42:28 crc kubenswrapper[4874]: E0217 16:42:28.459041 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:42:36 crc kubenswrapper[4874]: E0217 16:42:36.460783 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:42:42 crc kubenswrapper[4874]: E0217 16:42:42.459762 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:42:48 crc kubenswrapper[4874]: E0217 16:42:48.581849 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:42:54 crc kubenswrapper[4874]: E0217 16:42:54.460047 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:42:57 crc kubenswrapper[4874]: I0217 16:42:57.724968 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:42:57 crc kubenswrapper[4874]: I0217 16:42:57.725462 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:42:57 crc kubenswrapper[4874]: I0217 16:42:57.725549 4874 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" Feb 17 16:42:57 crc kubenswrapper[4874]: I0217 16:42:57.726826 4874 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c176f5b5287d4fd95f2c95129bf20f63582440ae6fd611735dc8a4627ac1abf3"} pod="openshift-machine-config-operator/machine-config-daemon-cccdg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:42:57 crc kubenswrapper[4874]: I0217 16:42:57.726890 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" containerID="cri-o://c176f5b5287d4fd95f2c95129bf20f63582440ae6fd611735dc8a4627ac1abf3" gracePeriod=600 Feb 17 16:42:57 crc kubenswrapper[4874]: E0217 16:42:57.867653 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:42:58 crc kubenswrapper[4874]: I0217 16:42:58.871590 4874 generic.go:334] "Generic (PLEG): container finished" podID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerID="c176f5b5287d4fd95f2c95129bf20f63582440ae6fd611735dc8a4627ac1abf3" exitCode=0 Feb 17 16:42:58 crc kubenswrapper[4874]: I0217 16:42:58.871883 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerDied","Data":"c176f5b5287d4fd95f2c95129bf20f63582440ae6fd611735dc8a4627ac1abf3"} Feb 17 16:42:58 crc kubenswrapper[4874]: I0217 16:42:58.871913 4874 scope.go:117] "RemoveContainer" containerID="8814fc835f2282d6d41589f1cebfc57dc4e1d7a7e758d1b73c0eca3955883438" Feb 17 16:42:58 crc kubenswrapper[4874]: I0217 16:42:58.872379 4874 scope.go:117] "RemoveContainer" containerID="c176f5b5287d4fd95f2c95129bf20f63582440ae6fd611735dc8a4627ac1abf3" Feb 17 16:42:58 crc kubenswrapper[4874]: E0217 16:42:58.872749 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:42:58 crc kubenswrapper[4874]: I0217 16:42:58.894881 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rspxw" podStartSLOduration=144.444442024 podStartE2EDuration="2m24.894853525s" podCreationTimestamp="2026-02-17 16:40:34 +0000 UTC" firstStartedPulling="2026-02-17 16:40:34.824160809 +0000 UTC m=+2245.118549380" 
lastFinishedPulling="2026-02-17 16:40:35.27457228 +0000 UTC m=+2245.568960881" observedRunningTime="2026-02-17 16:40:36.167548578 +0000 UTC m=+2246.461937199" watchObservedRunningTime="2026-02-17 16:42:58.894853525 +0000 UTC m=+2389.189242096" Feb 17 16:43:03 crc kubenswrapper[4874]: E0217 16:43:03.460965 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:43:08 crc kubenswrapper[4874]: E0217 16:43:08.460784 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:43:12 crc kubenswrapper[4874]: I0217 16:43:12.457727 4874 scope.go:117] "RemoveContainer" containerID="c176f5b5287d4fd95f2c95129bf20f63582440ae6fd611735dc8a4627ac1abf3" Feb 17 16:43:12 crc kubenswrapper[4874]: E0217 16:43:12.458440 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:43:18 crc kubenswrapper[4874]: E0217 16:43:18.461421 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:43:21 crc kubenswrapper[4874]: E0217 16:43:21.458938 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:43:27 crc kubenswrapper[4874]: I0217 16:43:27.458232 4874 scope.go:117] "RemoveContainer" containerID="c176f5b5287d4fd95f2c95129bf20f63582440ae6fd611735dc8a4627ac1abf3" Feb 17 16:43:27 crc kubenswrapper[4874]: E0217 16:43:27.459227 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:43:30 crc kubenswrapper[4874]: E0217 16:43:30.468838 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:43:36 crc kubenswrapper[4874]: E0217 16:43:36.460943 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:43:42 crc kubenswrapper[4874]: I0217 16:43:42.459299 4874 scope.go:117] "RemoveContainer" containerID="c176f5b5287d4fd95f2c95129bf20f63582440ae6fd611735dc8a4627ac1abf3" Feb 17 16:43:42 crc kubenswrapper[4874]: E0217 16:43:42.459913 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:43:42 crc kubenswrapper[4874]: E0217 16:43:42.460594 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:43:48 crc kubenswrapper[4874]: E0217 16:43:48.459900 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:43:53 crc kubenswrapper[4874]: E0217 16:43:53.460452 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:43:54 crc 
kubenswrapper[4874]: I0217 16:43:54.130027 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p2cdp"] Feb 17 16:43:54 crc kubenswrapper[4874]: I0217 16:43:54.133344 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p2cdp" Feb 17 16:43:54 crc kubenswrapper[4874]: I0217 16:43:54.157034 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p2cdp"] Feb 17 16:43:54 crc kubenswrapper[4874]: I0217 16:43:54.183165 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/811d8507-dd13-411a-bf28-014c077ff4d5-catalog-content\") pod \"community-operators-p2cdp\" (UID: \"811d8507-dd13-411a-bf28-014c077ff4d5\") " pod="openshift-marketplace/community-operators-p2cdp" Feb 17 16:43:54 crc kubenswrapper[4874]: I0217 16:43:54.183305 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhttp\" (UniqueName: \"kubernetes.io/projected/811d8507-dd13-411a-bf28-014c077ff4d5-kube-api-access-mhttp\") pod \"community-operators-p2cdp\" (UID: \"811d8507-dd13-411a-bf28-014c077ff4d5\") " pod="openshift-marketplace/community-operators-p2cdp" Feb 17 16:43:54 crc kubenswrapper[4874]: I0217 16:43:54.183521 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/811d8507-dd13-411a-bf28-014c077ff4d5-utilities\") pod \"community-operators-p2cdp\" (UID: \"811d8507-dd13-411a-bf28-014c077ff4d5\") " pod="openshift-marketplace/community-operators-p2cdp" Feb 17 16:43:54 crc kubenswrapper[4874]: I0217 16:43:54.285665 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/811d8507-dd13-411a-bf28-014c077ff4d5-catalog-content\") pod \"community-operators-p2cdp\" (UID: \"811d8507-dd13-411a-bf28-014c077ff4d5\") " pod="openshift-marketplace/community-operators-p2cdp" Feb 17 16:43:54 crc kubenswrapper[4874]: I0217 16:43:54.285770 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhttp\" (UniqueName: \"kubernetes.io/projected/811d8507-dd13-411a-bf28-014c077ff4d5-kube-api-access-mhttp\") pod \"community-operators-p2cdp\" (UID: \"811d8507-dd13-411a-bf28-014c077ff4d5\") " pod="openshift-marketplace/community-operators-p2cdp" Feb 17 16:43:54 crc kubenswrapper[4874]: I0217 16:43:54.285889 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/811d8507-dd13-411a-bf28-014c077ff4d5-utilities\") pod \"community-operators-p2cdp\" (UID: \"811d8507-dd13-411a-bf28-014c077ff4d5\") " pod="openshift-marketplace/community-operators-p2cdp" Feb 17 16:43:54 crc kubenswrapper[4874]: I0217 16:43:54.286519 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/811d8507-dd13-411a-bf28-014c077ff4d5-utilities\") pod \"community-operators-p2cdp\" (UID: \"811d8507-dd13-411a-bf28-014c077ff4d5\") " pod="openshift-marketplace/community-operators-p2cdp" Feb 17 16:43:54 crc kubenswrapper[4874]: I0217 16:43:54.286655 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/811d8507-dd13-411a-bf28-014c077ff4d5-catalog-content\") pod \"community-operators-p2cdp\" (UID: \"811d8507-dd13-411a-bf28-014c077ff4d5\") " pod="openshift-marketplace/community-operators-p2cdp" Feb 17 16:43:54 crc kubenswrapper[4874]: I0217 16:43:54.306115 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhttp\" (UniqueName: 
\"kubernetes.io/projected/811d8507-dd13-411a-bf28-014c077ff4d5-kube-api-access-mhttp\") pod \"community-operators-p2cdp\" (UID: \"811d8507-dd13-411a-bf28-014c077ff4d5\") " pod="openshift-marketplace/community-operators-p2cdp" Feb 17 16:43:54 crc kubenswrapper[4874]: I0217 16:43:54.458529 4874 scope.go:117] "RemoveContainer" containerID="c176f5b5287d4fd95f2c95129bf20f63582440ae6fd611735dc8a4627ac1abf3" Feb 17 16:43:54 crc kubenswrapper[4874]: E0217 16:43:54.459358 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:43:54 crc kubenswrapper[4874]: I0217 16:43:54.467443 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p2cdp" Feb 17 16:43:55 crc kubenswrapper[4874]: I0217 16:43:55.023091 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p2cdp"] Feb 17 16:43:55 crc kubenswrapper[4874]: I0217 16:43:55.541835 4874 generic.go:334] "Generic (PLEG): container finished" podID="811d8507-dd13-411a-bf28-014c077ff4d5" containerID="bfe29c620f9baa92ec88752703ccc372f8bd0029d975e394cd1b1b76dd1a449e" exitCode=0 Feb 17 16:43:55 crc kubenswrapper[4874]: I0217 16:43:55.541943 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p2cdp" event={"ID":"811d8507-dd13-411a-bf28-014c077ff4d5","Type":"ContainerDied","Data":"bfe29c620f9baa92ec88752703ccc372f8bd0029d975e394cd1b1b76dd1a449e"} Feb 17 16:43:55 crc kubenswrapper[4874]: I0217 16:43:55.542151 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p2cdp" event={"ID":"811d8507-dd13-411a-bf28-014c077ff4d5","Type":"ContainerStarted","Data":"45b1c22060df3b8956ee2789a076c734affee6f44774e1194208ecb5fb0d2a46"} Feb 17 16:43:56 crc kubenswrapper[4874]: I0217 16:43:56.555842 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p2cdp" event={"ID":"811d8507-dd13-411a-bf28-014c077ff4d5","Type":"ContainerStarted","Data":"8fffdb934ebf28ddda8b85ef3bd8b8cbb0337ea3521525ca9b4ea2200ae52f0e"} Feb 17 16:43:58 crc kubenswrapper[4874]: I0217 16:43:58.576639 4874 generic.go:334] "Generic (PLEG): container finished" podID="811d8507-dd13-411a-bf28-014c077ff4d5" containerID="8fffdb934ebf28ddda8b85ef3bd8b8cbb0337ea3521525ca9b4ea2200ae52f0e" exitCode=0 Feb 17 16:43:58 crc kubenswrapper[4874]: I0217 16:43:58.576686 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p2cdp" 
event={"ID":"811d8507-dd13-411a-bf28-014c077ff4d5","Type":"ContainerDied","Data":"8fffdb934ebf28ddda8b85ef3bd8b8cbb0337ea3521525ca9b4ea2200ae52f0e"} Feb 17 16:43:59 crc kubenswrapper[4874]: I0217 16:43:59.594144 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p2cdp" event={"ID":"811d8507-dd13-411a-bf28-014c077ff4d5","Type":"ContainerStarted","Data":"9c34e742b599ffcf3c75cf244b85a26085d4f69bbec947e648057ead879eee14"} Feb 17 16:43:59 crc kubenswrapper[4874]: I0217 16:43:59.645590 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p2cdp" podStartSLOduration=1.995851614 podStartE2EDuration="5.645551146s" podCreationTimestamp="2026-02-17 16:43:54 +0000 UTC" firstStartedPulling="2026-02-17 16:43:55.546161105 +0000 UTC m=+2445.840549706" lastFinishedPulling="2026-02-17 16:43:59.195860677 +0000 UTC m=+2449.490249238" observedRunningTime="2026-02-17 16:43:59.639432964 +0000 UTC m=+2449.933821575" watchObservedRunningTime="2026-02-17 16:43:59.645551146 +0000 UTC m=+2449.939939727" Feb 17 16:44:02 crc kubenswrapper[4874]: E0217 16:44:02.462213 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:44:04 crc kubenswrapper[4874]: I0217 16:44:04.479006 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p2cdp" Feb 17 16:44:04 crc kubenswrapper[4874]: I0217 16:44:04.479473 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p2cdp" Feb 17 16:44:04 crc kubenswrapper[4874]: I0217 16:44:04.528701 4874 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/community-operators-p2cdp" Feb 17 16:44:04 crc kubenswrapper[4874]: I0217 16:44:04.699535 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p2cdp" Feb 17 16:44:04 crc kubenswrapper[4874]: I0217 16:44:04.771813 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p2cdp"] Feb 17 16:44:05 crc kubenswrapper[4874]: E0217 16:44:05.459856 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:44:06 crc kubenswrapper[4874]: I0217 16:44:06.675604 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p2cdp" podUID="811d8507-dd13-411a-bf28-014c077ff4d5" containerName="registry-server" containerID="cri-o://9c34e742b599ffcf3c75cf244b85a26085d4f69bbec947e648057ead879eee14" gracePeriod=2 Feb 17 16:44:07 crc kubenswrapper[4874]: I0217 16:44:07.247233 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p2cdp" Feb 17 16:44:07 crc kubenswrapper[4874]: I0217 16:44:07.418669 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/811d8507-dd13-411a-bf28-014c077ff4d5-utilities\") pod \"811d8507-dd13-411a-bf28-014c077ff4d5\" (UID: \"811d8507-dd13-411a-bf28-014c077ff4d5\") " Feb 17 16:44:07 crc kubenswrapper[4874]: I0217 16:44:07.418818 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhttp\" (UniqueName: \"kubernetes.io/projected/811d8507-dd13-411a-bf28-014c077ff4d5-kube-api-access-mhttp\") pod \"811d8507-dd13-411a-bf28-014c077ff4d5\" (UID: \"811d8507-dd13-411a-bf28-014c077ff4d5\") " Feb 17 16:44:07 crc kubenswrapper[4874]: I0217 16:44:07.418870 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/811d8507-dd13-411a-bf28-014c077ff4d5-catalog-content\") pod \"811d8507-dd13-411a-bf28-014c077ff4d5\" (UID: \"811d8507-dd13-411a-bf28-014c077ff4d5\") " Feb 17 16:44:07 crc kubenswrapper[4874]: I0217 16:44:07.419725 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/811d8507-dd13-411a-bf28-014c077ff4d5-utilities" (OuterVolumeSpecName: "utilities") pod "811d8507-dd13-411a-bf28-014c077ff4d5" (UID: "811d8507-dd13-411a-bf28-014c077ff4d5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:44:07 crc kubenswrapper[4874]: I0217 16:44:07.424250 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/811d8507-dd13-411a-bf28-014c077ff4d5-kube-api-access-mhttp" (OuterVolumeSpecName: "kube-api-access-mhttp") pod "811d8507-dd13-411a-bf28-014c077ff4d5" (UID: "811d8507-dd13-411a-bf28-014c077ff4d5"). InnerVolumeSpecName "kube-api-access-mhttp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:44:07 crc kubenswrapper[4874]: I0217 16:44:07.468345 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/811d8507-dd13-411a-bf28-014c077ff4d5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "811d8507-dd13-411a-bf28-014c077ff4d5" (UID: "811d8507-dd13-411a-bf28-014c077ff4d5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:44:07 crc kubenswrapper[4874]: I0217 16:44:07.521853 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/811d8507-dd13-411a-bf28-014c077ff4d5-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:44:07 crc kubenswrapper[4874]: I0217 16:44:07.521888 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhttp\" (UniqueName: \"kubernetes.io/projected/811d8507-dd13-411a-bf28-014c077ff4d5-kube-api-access-mhttp\") on node \"crc\" DevicePath \"\"" Feb 17 16:44:07 crc kubenswrapper[4874]: I0217 16:44:07.521899 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/811d8507-dd13-411a-bf28-014c077ff4d5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:44:07 crc kubenswrapper[4874]: I0217 16:44:07.691967 4874 generic.go:334] "Generic (PLEG): container finished" podID="811d8507-dd13-411a-bf28-014c077ff4d5" containerID="9c34e742b599ffcf3c75cf244b85a26085d4f69bbec947e648057ead879eee14" exitCode=0 Feb 17 16:44:07 crc kubenswrapper[4874]: I0217 16:44:07.692122 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p2cdp" event={"ID":"811d8507-dd13-411a-bf28-014c077ff4d5","Type":"ContainerDied","Data":"9c34e742b599ffcf3c75cf244b85a26085d4f69bbec947e648057ead879eee14"} Feb 17 16:44:07 crc kubenswrapper[4874]: I0217 16:44:07.692507 4874 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-p2cdp" event={"ID":"811d8507-dd13-411a-bf28-014c077ff4d5","Type":"ContainerDied","Data":"45b1c22060df3b8956ee2789a076c734affee6f44774e1194208ecb5fb0d2a46"} Feb 17 16:44:07 crc kubenswrapper[4874]: I0217 16:44:07.692563 4874 scope.go:117] "RemoveContainer" containerID="9c34e742b599ffcf3c75cf244b85a26085d4f69bbec947e648057ead879eee14" Feb 17 16:44:07 crc kubenswrapper[4874]: I0217 16:44:07.692231 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p2cdp" Feb 17 16:44:07 crc kubenswrapper[4874]: I0217 16:44:07.764131 4874 scope.go:117] "RemoveContainer" containerID="8fffdb934ebf28ddda8b85ef3bd8b8cbb0337ea3521525ca9b4ea2200ae52f0e" Feb 17 16:44:07 crc kubenswrapper[4874]: I0217 16:44:07.765895 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p2cdp"] Feb 17 16:44:07 crc kubenswrapper[4874]: I0217 16:44:07.783255 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p2cdp"] Feb 17 16:44:07 crc kubenswrapper[4874]: I0217 16:44:07.795708 4874 scope.go:117] "RemoveContainer" containerID="bfe29c620f9baa92ec88752703ccc372f8bd0029d975e394cd1b1b76dd1a449e" Feb 17 16:44:07 crc kubenswrapper[4874]: I0217 16:44:07.842916 4874 scope.go:117] "RemoveContainer" containerID="9c34e742b599ffcf3c75cf244b85a26085d4f69bbec947e648057ead879eee14" Feb 17 16:44:07 crc kubenswrapper[4874]: E0217 16:44:07.843376 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c34e742b599ffcf3c75cf244b85a26085d4f69bbec947e648057ead879eee14\": container with ID starting with 9c34e742b599ffcf3c75cf244b85a26085d4f69bbec947e648057ead879eee14 not found: ID does not exist" containerID="9c34e742b599ffcf3c75cf244b85a26085d4f69bbec947e648057ead879eee14" Feb 17 16:44:07 crc kubenswrapper[4874]: I0217 
16:44:07.843432 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c34e742b599ffcf3c75cf244b85a26085d4f69bbec947e648057ead879eee14"} err="failed to get container status \"9c34e742b599ffcf3c75cf244b85a26085d4f69bbec947e648057ead879eee14\": rpc error: code = NotFound desc = could not find container \"9c34e742b599ffcf3c75cf244b85a26085d4f69bbec947e648057ead879eee14\": container with ID starting with 9c34e742b599ffcf3c75cf244b85a26085d4f69bbec947e648057ead879eee14 not found: ID does not exist" Feb 17 16:44:07 crc kubenswrapper[4874]: I0217 16:44:07.843472 4874 scope.go:117] "RemoveContainer" containerID="8fffdb934ebf28ddda8b85ef3bd8b8cbb0337ea3521525ca9b4ea2200ae52f0e" Feb 17 16:44:07 crc kubenswrapper[4874]: E0217 16:44:07.843806 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fffdb934ebf28ddda8b85ef3bd8b8cbb0337ea3521525ca9b4ea2200ae52f0e\": container with ID starting with 8fffdb934ebf28ddda8b85ef3bd8b8cbb0337ea3521525ca9b4ea2200ae52f0e not found: ID does not exist" containerID="8fffdb934ebf28ddda8b85ef3bd8b8cbb0337ea3521525ca9b4ea2200ae52f0e" Feb 17 16:44:07 crc kubenswrapper[4874]: I0217 16:44:07.843838 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fffdb934ebf28ddda8b85ef3bd8b8cbb0337ea3521525ca9b4ea2200ae52f0e"} err="failed to get container status \"8fffdb934ebf28ddda8b85ef3bd8b8cbb0337ea3521525ca9b4ea2200ae52f0e\": rpc error: code = NotFound desc = could not find container \"8fffdb934ebf28ddda8b85ef3bd8b8cbb0337ea3521525ca9b4ea2200ae52f0e\": container with ID starting with 8fffdb934ebf28ddda8b85ef3bd8b8cbb0337ea3521525ca9b4ea2200ae52f0e not found: ID does not exist" Feb 17 16:44:07 crc kubenswrapper[4874]: I0217 16:44:07.843857 4874 scope.go:117] "RemoveContainer" containerID="bfe29c620f9baa92ec88752703ccc372f8bd0029d975e394cd1b1b76dd1a449e" Feb 17 16:44:07 crc 
kubenswrapper[4874]: E0217 16:44:07.844212 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfe29c620f9baa92ec88752703ccc372f8bd0029d975e394cd1b1b76dd1a449e\": container with ID starting with bfe29c620f9baa92ec88752703ccc372f8bd0029d975e394cd1b1b76dd1a449e not found: ID does not exist" containerID="bfe29c620f9baa92ec88752703ccc372f8bd0029d975e394cd1b1b76dd1a449e" Feb 17 16:44:07 crc kubenswrapper[4874]: I0217 16:44:07.844261 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfe29c620f9baa92ec88752703ccc372f8bd0029d975e394cd1b1b76dd1a449e"} err="failed to get container status \"bfe29c620f9baa92ec88752703ccc372f8bd0029d975e394cd1b1b76dd1a449e\": rpc error: code = NotFound desc = could not find container \"bfe29c620f9baa92ec88752703ccc372f8bd0029d975e394cd1b1b76dd1a449e\": container with ID starting with bfe29c620f9baa92ec88752703ccc372f8bd0029d975e394cd1b1b76dd1a449e not found: ID does not exist" Feb 17 16:44:08 crc kubenswrapper[4874]: I0217 16:44:08.457739 4874 scope.go:117] "RemoveContainer" containerID="c176f5b5287d4fd95f2c95129bf20f63582440ae6fd611735dc8a4627ac1abf3" Feb 17 16:44:08 crc kubenswrapper[4874]: E0217 16:44:08.458247 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:44:08 crc kubenswrapper[4874]: I0217 16:44:08.479187 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="811d8507-dd13-411a-bf28-014c077ff4d5" path="/var/lib/kubelet/pods/811d8507-dd13-411a-bf28-014c077ff4d5/volumes" Feb 17 16:44:16 crc 
kubenswrapper[4874]: E0217 16:44:16.460383 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:44:19 crc kubenswrapper[4874]: I0217 16:44:19.461062 4874 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:44:19 crc kubenswrapper[4874]: E0217 16:44:19.597492 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:44:19 crc kubenswrapper[4874]: E0217 16:44:19.597601 4874 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:44:19 crc kubenswrapper[4874]: E0217 16:44:19.597816 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n699h646hf7h59dhdch5d8h679h9dhdch5c7hd5h5bch655h5bfh674h596h5d6h64dh65bh694h67fh66dh5bdhd7h568h697h58bh5b4h59fh694h584h656q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6zkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(cc29c300-b515-47d8-9326-1839ed7772b4): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:44:19 crc kubenswrapper[4874]: E0217 16:44:19.599644 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:44:22 crc kubenswrapper[4874]: I0217 16:44:22.457714 4874 scope.go:117] "RemoveContainer" containerID="c176f5b5287d4fd95f2c95129bf20f63582440ae6fd611735dc8a4627ac1abf3" Feb 17 16:44:22 crc kubenswrapper[4874]: E0217 16:44:22.458336 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:44:28 crc kubenswrapper[4874]: E0217 16:44:28.582469 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:44:28 crc kubenswrapper[4874]: E0217 16:44:28.582992 4874 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:44:28 crc kubenswrapper[4874]: E0217 16:44:28.583116 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtgnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-ddhb8_openstack(122736d5-78f5-42dc-b6ab-343724bac19d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:44:28 crc kubenswrapper[4874]: E0217 16:44:28.584308 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:44:31 crc kubenswrapper[4874]: E0217 16:44:31.459976 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:44:33 crc kubenswrapper[4874]: I0217 16:44:33.458200 4874 scope.go:117] "RemoveContainer" containerID="c176f5b5287d4fd95f2c95129bf20f63582440ae6fd611735dc8a4627ac1abf3" Feb 17 16:44:33 crc kubenswrapper[4874]: E0217 16:44:33.460449 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:44:40 crc kubenswrapper[4874]: E0217 16:44:40.468936 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:44:42 crc kubenswrapper[4874]: E0217 16:44:42.460890 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:44:47 crc kubenswrapper[4874]: I0217 16:44:47.457205 4874 scope.go:117] "RemoveContainer" containerID="c176f5b5287d4fd95f2c95129bf20f63582440ae6fd611735dc8a4627ac1abf3" Feb 17 16:44:47 crc kubenswrapper[4874]: E0217 16:44:47.457931 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:44:54 crc kubenswrapper[4874]: E0217 16:44:54.462096 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:44:56 crc kubenswrapper[4874]: E0217 16:44:56.463167 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:45:00 crc kubenswrapper[4874]: I0217 16:45:00.148025 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522445-dkqrn"] Feb 17 16:45:00 crc kubenswrapper[4874]: E0217 16:45:00.149130 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="811d8507-dd13-411a-bf28-014c077ff4d5" containerName="extract-content" Feb 17 16:45:00 crc kubenswrapper[4874]: I0217 16:45:00.149145 4874 
state_mem.go:107] "Deleted CPUSet assignment" podUID="811d8507-dd13-411a-bf28-014c077ff4d5" containerName="extract-content" Feb 17 16:45:00 crc kubenswrapper[4874]: E0217 16:45:00.149165 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="811d8507-dd13-411a-bf28-014c077ff4d5" containerName="registry-server" Feb 17 16:45:00 crc kubenswrapper[4874]: I0217 16:45:00.149172 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="811d8507-dd13-411a-bf28-014c077ff4d5" containerName="registry-server" Feb 17 16:45:00 crc kubenswrapper[4874]: E0217 16:45:00.149216 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="811d8507-dd13-411a-bf28-014c077ff4d5" containerName="extract-utilities" Feb 17 16:45:00 crc kubenswrapper[4874]: I0217 16:45:00.149223 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="811d8507-dd13-411a-bf28-014c077ff4d5" containerName="extract-utilities" Feb 17 16:45:00 crc kubenswrapper[4874]: I0217 16:45:00.149503 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="811d8507-dd13-411a-bf28-014c077ff4d5" containerName="registry-server" Feb 17 16:45:00 crc kubenswrapper[4874]: I0217 16:45:00.150403 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-dkqrn" Feb 17 16:45:00 crc kubenswrapper[4874]: I0217 16:45:00.152283 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 16:45:00 crc kubenswrapper[4874]: I0217 16:45:00.152663 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 16:45:00 crc kubenswrapper[4874]: I0217 16:45:00.158768 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522445-dkqrn"] Feb 17 16:45:00 crc kubenswrapper[4874]: I0217 16:45:00.175665 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh5pv\" (UniqueName: \"kubernetes.io/projected/a2fd6a42-869b-4b7a-a3df-76e5f43b0da2-kube-api-access-vh5pv\") pod \"collect-profiles-29522445-dkqrn\" (UID: \"a2fd6a42-869b-4b7a-a3df-76e5f43b0da2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-dkqrn" Feb 17 16:45:00 crc kubenswrapper[4874]: I0217 16:45:00.175763 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a2fd6a42-869b-4b7a-a3df-76e5f43b0da2-secret-volume\") pod \"collect-profiles-29522445-dkqrn\" (UID: \"a2fd6a42-869b-4b7a-a3df-76e5f43b0da2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-dkqrn" Feb 17 16:45:00 crc kubenswrapper[4874]: I0217 16:45:00.175913 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2fd6a42-869b-4b7a-a3df-76e5f43b0da2-config-volume\") pod \"collect-profiles-29522445-dkqrn\" (UID: \"a2fd6a42-869b-4b7a-a3df-76e5f43b0da2\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-dkqrn" Feb 17 16:45:00 crc kubenswrapper[4874]: I0217 16:45:00.277544 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2fd6a42-869b-4b7a-a3df-76e5f43b0da2-config-volume\") pod \"collect-profiles-29522445-dkqrn\" (UID: \"a2fd6a42-869b-4b7a-a3df-76e5f43b0da2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-dkqrn" Feb 17 16:45:00 crc kubenswrapper[4874]: I0217 16:45:00.278319 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh5pv\" (UniqueName: \"kubernetes.io/projected/a2fd6a42-869b-4b7a-a3df-76e5f43b0da2-kube-api-access-vh5pv\") pod \"collect-profiles-29522445-dkqrn\" (UID: \"a2fd6a42-869b-4b7a-a3df-76e5f43b0da2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-dkqrn" Feb 17 16:45:00 crc kubenswrapper[4874]: I0217 16:45:00.278524 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a2fd6a42-869b-4b7a-a3df-76e5f43b0da2-secret-volume\") pod \"collect-profiles-29522445-dkqrn\" (UID: \"a2fd6a42-869b-4b7a-a3df-76e5f43b0da2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-dkqrn" Feb 17 16:45:00 crc kubenswrapper[4874]: I0217 16:45:00.278633 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2fd6a42-869b-4b7a-a3df-76e5f43b0da2-config-volume\") pod \"collect-profiles-29522445-dkqrn\" (UID: \"a2fd6a42-869b-4b7a-a3df-76e5f43b0da2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-dkqrn" Feb 17 16:45:00 crc kubenswrapper[4874]: I0217 16:45:00.289306 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/a2fd6a42-869b-4b7a-a3df-76e5f43b0da2-secret-volume\") pod \"collect-profiles-29522445-dkqrn\" (UID: \"a2fd6a42-869b-4b7a-a3df-76e5f43b0da2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-dkqrn" Feb 17 16:45:00 crc kubenswrapper[4874]: I0217 16:45:00.297769 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh5pv\" (UniqueName: \"kubernetes.io/projected/a2fd6a42-869b-4b7a-a3df-76e5f43b0da2-kube-api-access-vh5pv\") pod \"collect-profiles-29522445-dkqrn\" (UID: \"a2fd6a42-869b-4b7a-a3df-76e5f43b0da2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-dkqrn" Feb 17 16:45:00 crc kubenswrapper[4874]: I0217 16:45:00.472517 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-dkqrn" Feb 17 16:45:00 crc kubenswrapper[4874]: I0217 16:45:00.977786 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522445-dkqrn"] Feb 17 16:45:01 crc kubenswrapper[4874]: I0217 16:45:01.222328 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-dkqrn" event={"ID":"a2fd6a42-869b-4b7a-a3df-76e5f43b0da2","Type":"ContainerStarted","Data":"c0369d495391485d299de3ea7d3464b9b8dc559d387bec2106d46142fa6d4799"} Feb 17 16:45:01 crc kubenswrapper[4874]: I0217 16:45:01.457371 4874 scope.go:117] "RemoveContainer" containerID="c176f5b5287d4fd95f2c95129bf20f63582440ae6fd611735dc8a4627ac1abf3" Feb 17 16:45:01 crc kubenswrapper[4874]: E0217 16:45:01.457726 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:45:02 crc kubenswrapper[4874]: I0217 16:45:02.238823 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-dkqrn" event={"ID":"a2fd6a42-869b-4b7a-a3df-76e5f43b0da2","Type":"ContainerStarted","Data":"936006a49a7e5abf8a7fefc1ec5fa5e4fe14abc46bb56cf8ca642097cd240cb0"} Feb 17 16:45:02 crc kubenswrapper[4874]: I0217 16:45:02.274160 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-dkqrn" podStartSLOduration=2.274134123 podStartE2EDuration="2.274134123s" podCreationTimestamp="2026-02-17 16:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:45:02.260843153 +0000 UTC m=+2512.555231724" watchObservedRunningTime="2026-02-17 16:45:02.274134123 +0000 UTC m=+2512.568522704" Feb 17 16:45:03 crc kubenswrapper[4874]: I0217 16:45:03.253341 4874 generic.go:334] "Generic (PLEG): container finished" podID="a2fd6a42-869b-4b7a-a3df-76e5f43b0da2" containerID="936006a49a7e5abf8a7fefc1ec5fa5e4fe14abc46bb56cf8ca642097cd240cb0" exitCode=0 Feb 17 16:45:03 crc kubenswrapper[4874]: I0217 16:45:03.253419 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-dkqrn" event={"ID":"a2fd6a42-869b-4b7a-a3df-76e5f43b0da2","Type":"ContainerDied","Data":"936006a49a7e5abf8a7fefc1ec5fa5e4fe14abc46bb56cf8ca642097cd240cb0"} Feb 17 16:45:04 crc kubenswrapper[4874]: I0217 16:45:04.748956 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-dkqrn" Feb 17 16:45:04 crc kubenswrapper[4874]: I0217 16:45:04.826207 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2fd6a42-869b-4b7a-a3df-76e5f43b0da2-config-volume\") pod \"a2fd6a42-869b-4b7a-a3df-76e5f43b0da2\" (UID: \"a2fd6a42-869b-4b7a-a3df-76e5f43b0da2\") " Feb 17 16:45:04 crc kubenswrapper[4874]: I0217 16:45:04.826376 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh5pv\" (UniqueName: \"kubernetes.io/projected/a2fd6a42-869b-4b7a-a3df-76e5f43b0da2-kube-api-access-vh5pv\") pod \"a2fd6a42-869b-4b7a-a3df-76e5f43b0da2\" (UID: \"a2fd6a42-869b-4b7a-a3df-76e5f43b0da2\") " Feb 17 16:45:04 crc kubenswrapper[4874]: I0217 16:45:04.826512 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a2fd6a42-869b-4b7a-a3df-76e5f43b0da2-secret-volume\") pod \"a2fd6a42-869b-4b7a-a3df-76e5f43b0da2\" (UID: \"a2fd6a42-869b-4b7a-a3df-76e5f43b0da2\") " Feb 17 16:45:04 crc kubenswrapper[4874]: I0217 16:45:04.827173 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2fd6a42-869b-4b7a-a3df-76e5f43b0da2-config-volume" (OuterVolumeSpecName: "config-volume") pod "a2fd6a42-869b-4b7a-a3df-76e5f43b0da2" (UID: "a2fd6a42-869b-4b7a-a3df-76e5f43b0da2"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:45:04 crc kubenswrapper[4874]: I0217 16:45:04.829636 4874 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2fd6a42-869b-4b7a-a3df-76e5f43b0da2-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:45:04 crc kubenswrapper[4874]: I0217 16:45:04.831863 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2fd6a42-869b-4b7a-a3df-76e5f43b0da2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a2fd6a42-869b-4b7a-a3df-76e5f43b0da2" (UID: "a2fd6a42-869b-4b7a-a3df-76e5f43b0da2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:45:04 crc kubenswrapper[4874]: I0217 16:45:04.835697 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2fd6a42-869b-4b7a-a3df-76e5f43b0da2-kube-api-access-vh5pv" (OuterVolumeSpecName: "kube-api-access-vh5pv") pod "a2fd6a42-869b-4b7a-a3df-76e5f43b0da2" (UID: "a2fd6a42-869b-4b7a-a3df-76e5f43b0da2"). InnerVolumeSpecName "kube-api-access-vh5pv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:45:04 crc kubenswrapper[4874]: I0217 16:45:04.932043 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh5pv\" (UniqueName: \"kubernetes.io/projected/a2fd6a42-869b-4b7a-a3df-76e5f43b0da2-kube-api-access-vh5pv\") on node \"crc\" DevicePath \"\"" Feb 17 16:45:04 crc kubenswrapper[4874]: I0217 16:45:04.932094 4874 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a2fd6a42-869b-4b7a-a3df-76e5f43b0da2-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:45:05 crc kubenswrapper[4874]: I0217 16:45:05.273589 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-dkqrn" event={"ID":"a2fd6a42-869b-4b7a-a3df-76e5f43b0da2","Type":"ContainerDied","Data":"c0369d495391485d299de3ea7d3464b9b8dc559d387bec2106d46142fa6d4799"} Feb 17 16:45:05 crc kubenswrapper[4874]: I0217 16:45:05.273632 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0369d495391485d299de3ea7d3464b9b8dc559d387bec2106d46142fa6d4799" Feb 17 16:45:05 crc kubenswrapper[4874]: I0217 16:45:05.273663 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-dkqrn" Feb 17 16:45:05 crc kubenswrapper[4874]: I0217 16:45:05.344067 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522400-b59b9"] Feb 17 16:45:05 crc kubenswrapper[4874]: I0217 16:45:05.353734 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522400-b59b9"] Feb 17 16:45:06 crc kubenswrapper[4874]: I0217 16:45:06.475442 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b2a3365-4901-45b8-b528-0961dad4cf66" path="/var/lib/kubelet/pods/3b2a3365-4901-45b8-b528-0961dad4cf66/volumes" Feb 17 16:45:07 crc kubenswrapper[4874]: E0217 16:45:07.459108 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:45:09 crc kubenswrapper[4874]: E0217 16:45:09.459576 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:45:15 crc kubenswrapper[4874]: I0217 16:45:15.457390 4874 scope.go:117] "RemoveContainer" containerID="c176f5b5287d4fd95f2c95129bf20f63582440ae6fd611735dc8a4627ac1abf3" Feb 17 16:45:15 crc kubenswrapper[4874]: E0217 16:45:15.458204 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:45:19 crc kubenswrapper[4874]: E0217 16:45:19.460099 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:45:20 crc kubenswrapper[4874]: I0217 16:45:20.231495 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hlzqj"] Feb 17 16:45:20 crc kubenswrapper[4874]: E0217 16:45:20.232044 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2fd6a42-869b-4b7a-a3df-76e5f43b0da2" containerName="collect-profiles" Feb 17 16:45:20 crc kubenswrapper[4874]: I0217 16:45:20.232061 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2fd6a42-869b-4b7a-a3df-76e5f43b0da2" containerName="collect-profiles" Feb 17 16:45:20 crc kubenswrapper[4874]: I0217 16:45:20.232349 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2fd6a42-869b-4b7a-a3df-76e5f43b0da2" containerName="collect-profiles" Feb 17 16:45:20 crc kubenswrapper[4874]: I0217 16:45:20.234257 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hlzqj" Feb 17 16:45:20 crc kubenswrapper[4874]: I0217 16:45:20.253026 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hlzqj"] Feb 17 16:45:20 crc kubenswrapper[4874]: I0217 16:45:20.319262 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c959900-b48f-4c7a-b248-a0850c76d844-catalog-content\") pod \"certified-operators-hlzqj\" (UID: \"1c959900-b48f-4c7a-b248-a0850c76d844\") " pod="openshift-marketplace/certified-operators-hlzqj" Feb 17 16:45:20 crc kubenswrapper[4874]: I0217 16:45:20.319325 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c959900-b48f-4c7a-b248-a0850c76d844-utilities\") pod \"certified-operators-hlzqj\" (UID: \"1c959900-b48f-4c7a-b248-a0850c76d844\") " pod="openshift-marketplace/certified-operators-hlzqj" Feb 17 16:45:20 crc kubenswrapper[4874]: I0217 16:45:20.319487 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk9mv\" (UniqueName: \"kubernetes.io/projected/1c959900-b48f-4c7a-b248-a0850c76d844-kube-api-access-rk9mv\") pod \"certified-operators-hlzqj\" (UID: \"1c959900-b48f-4c7a-b248-a0850c76d844\") " pod="openshift-marketplace/certified-operators-hlzqj" Feb 17 16:45:20 crc kubenswrapper[4874]: I0217 16:45:20.421726 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c959900-b48f-4c7a-b248-a0850c76d844-catalog-content\") pod \"certified-operators-hlzqj\" (UID: \"1c959900-b48f-4c7a-b248-a0850c76d844\") " pod="openshift-marketplace/certified-operators-hlzqj" Feb 17 16:45:20 crc kubenswrapper[4874]: I0217 16:45:20.421775 4874 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c959900-b48f-4c7a-b248-a0850c76d844-utilities\") pod \"certified-operators-hlzqj\" (UID: \"1c959900-b48f-4c7a-b248-a0850c76d844\") " pod="openshift-marketplace/certified-operators-hlzqj" Feb 17 16:45:20 crc kubenswrapper[4874]: I0217 16:45:20.421865 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk9mv\" (UniqueName: \"kubernetes.io/projected/1c959900-b48f-4c7a-b248-a0850c76d844-kube-api-access-rk9mv\") pod \"certified-operators-hlzqj\" (UID: \"1c959900-b48f-4c7a-b248-a0850c76d844\") " pod="openshift-marketplace/certified-operators-hlzqj" Feb 17 16:45:20 crc kubenswrapper[4874]: I0217 16:45:20.422313 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c959900-b48f-4c7a-b248-a0850c76d844-catalog-content\") pod \"certified-operators-hlzqj\" (UID: \"1c959900-b48f-4c7a-b248-a0850c76d844\") " pod="openshift-marketplace/certified-operators-hlzqj" Feb 17 16:45:20 crc kubenswrapper[4874]: I0217 16:45:20.422346 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c959900-b48f-4c7a-b248-a0850c76d844-utilities\") pod \"certified-operators-hlzqj\" (UID: \"1c959900-b48f-4c7a-b248-a0850c76d844\") " pod="openshift-marketplace/certified-operators-hlzqj" Feb 17 16:45:20 crc kubenswrapper[4874]: I0217 16:45:20.444770 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk9mv\" (UniqueName: \"kubernetes.io/projected/1c959900-b48f-4c7a-b248-a0850c76d844-kube-api-access-rk9mv\") pod \"certified-operators-hlzqj\" (UID: \"1c959900-b48f-4c7a-b248-a0850c76d844\") " pod="openshift-marketplace/certified-operators-hlzqj" Feb 17 16:45:20 crc kubenswrapper[4874]: I0217 16:45:20.560274 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hlzqj"
Feb 17 16:45:21 crc kubenswrapper[4874]: I0217 16:45:21.124783 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hlzqj"]
Feb 17 16:45:21 crc kubenswrapper[4874]: W0217 16:45:21.125486 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c959900_b48f_4c7a_b248_a0850c76d844.slice/crio-6e2ccaefd6b126c0741933d77d91c1d39824ad9115e6915d47cef007e3c872ad WatchSource:0}: Error finding container 6e2ccaefd6b126c0741933d77d91c1d39824ad9115e6915d47cef007e3c872ad: Status 404 returned error can't find the container with id 6e2ccaefd6b126c0741933d77d91c1d39824ad9115e6915d47cef007e3c872ad
Feb 17 16:45:21 crc kubenswrapper[4874]: I0217 16:45:21.434728 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hlzqj" event={"ID":"1c959900-b48f-4c7a-b248-a0850c76d844","Type":"ContainerStarted","Data":"6e2ccaefd6b126c0741933d77d91c1d39824ad9115e6915d47cef007e3c872ad"}
Feb 17 16:45:22 crc kubenswrapper[4874]: I0217 16:45:22.452530 4874 generic.go:334] "Generic (PLEG): container finished" podID="1c959900-b48f-4c7a-b248-a0850c76d844" containerID="f885e85ba10be08fccc435df4b40e40e4a54b86ce47528d2b9e776fa18b4f149" exitCode=0
Feb 17 16:45:22 crc kubenswrapper[4874]: I0217 16:45:22.452593 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hlzqj" event={"ID":"1c959900-b48f-4c7a-b248-a0850c76d844","Type":"ContainerDied","Data":"f885e85ba10be08fccc435df4b40e40e4a54b86ce47528d2b9e776fa18b4f149"}
Feb 17 16:45:22 crc kubenswrapper[4874]: E0217 16:45:22.459407 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4"
Feb 17 16:45:24 crc kubenswrapper[4874]: I0217 16:45:24.474291 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hlzqj" event={"ID":"1c959900-b48f-4c7a-b248-a0850c76d844","Type":"ContainerStarted","Data":"902c11ac085ef9e83ba475d761aa29e53a8ee86373325372cb297f53954bff55"}
Feb 17 16:45:29 crc kubenswrapper[4874]: I0217 16:45:29.457961 4874 scope.go:117] "RemoveContainer" containerID="c176f5b5287d4fd95f2c95129bf20f63582440ae6fd611735dc8a4627ac1abf3"
Feb 17 16:45:29 crc kubenswrapper[4874]: E0217 16:45:29.458883 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39"
Feb 17 16:45:29 crc kubenswrapper[4874]: I0217 16:45:29.524227 4874 generic.go:334] "Generic (PLEG): container finished" podID="1c959900-b48f-4c7a-b248-a0850c76d844" containerID="902c11ac085ef9e83ba475d761aa29e53a8ee86373325372cb297f53954bff55" exitCode=0
Feb 17 16:45:29 crc kubenswrapper[4874]: I0217 16:45:29.524276 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hlzqj" event={"ID":"1c959900-b48f-4c7a-b248-a0850c76d844","Type":"ContainerDied","Data":"902c11ac085ef9e83ba475d761aa29e53a8ee86373325372cb297f53954bff55"}
Feb 17 16:45:31 crc kubenswrapper[4874]: I0217 16:45:31.545972 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hlzqj" event={"ID":"1c959900-b48f-4c7a-b248-a0850c76d844","Type":"ContainerStarted","Data":"90b6f5d4dec44eb9a00f3825bd6a3d738eff47feeb822843190dd578b4aaf85d"}
Feb 17 16:45:31 crc kubenswrapper[4874]: I0217 16:45:31.576542 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hlzqj" podStartSLOduration=3.393227446 podStartE2EDuration="11.576522903s" podCreationTimestamp="2026-02-17 16:45:20 +0000 UTC" firstStartedPulling="2026-02-17 16:45:22.456198931 +0000 UTC m=+2532.750587492" lastFinishedPulling="2026-02-17 16:45:30.639494388 +0000 UTC m=+2540.933882949" observedRunningTime="2026-02-17 16:45:31.56872356 +0000 UTC m=+2541.863112141" watchObservedRunningTime="2026-02-17 16:45:31.576522903 +0000 UTC m=+2541.870911464"
Feb 17 16:45:33 crc kubenswrapper[4874]: E0217 16:45:33.460139 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d"
Feb 17 16:45:34 crc kubenswrapper[4874]: E0217 16:45:34.459654 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4"
Feb 17 16:45:40 crc kubenswrapper[4874]: I0217 16:45:40.561332 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hlzqj"
Feb 17 16:45:40 crc kubenswrapper[4874]: I0217 16:45:40.561785 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hlzqj"
Feb 17 16:45:40 crc kubenswrapper[4874]: I0217 16:45:40.620772 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hlzqj"
Feb 17 16:45:40 crc kubenswrapper[4874]: I0217 16:45:40.680039 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hlzqj"
Feb 17 16:45:41 crc kubenswrapper[4874]: I0217 16:45:41.459132 4874 scope.go:117] "RemoveContainer" containerID="c176f5b5287d4fd95f2c95129bf20f63582440ae6fd611735dc8a4627ac1abf3"
Feb 17 16:45:41 crc kubenswrapper[4874]: E0217 16:45:41.459451 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39"
Feb 17 16:45:44 crc kubenswrapper[4874]: I0217 16:45:44.113161 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hlzqj"]
Feb 17 16:45:44 crc kubenswrapper[4874]: I0217 16:45:44.113899 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hlzqj" podUID="1c959900-b48f-4c7a-b248-a0850c76d844" containerName="registry-server" containerID="cri-o://90b6f5d4dec44eb9a00f3825bd6a3d738eff47feeb822843190dd578b4aaf85d" gracePeriod=2
Feb 17 16:45:44 crc kubenswrapper[4874]: E0217 16:45:44.461660 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d"
Feb 17 16:45:44 crc kubenswrapper[4874]: I0217 16:45:44.660605 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hlzqj"
Feb 17 16:45:44 crc kubenswrapper[4874]: I0217 16:45:44.670824 4874 generic.go:334] "Generic (PLEG): container finished" podID="1c959900-b48f-4c7a-b248-a0850c76d844" containerID="90b6f5d4dec44eb9a00f3825bd6a3d738eff47feeb822843190dd578b4aaf85d" exitCode=0
Feb 17 16:45:44 crc kubenswrapper[4874]: I0217 16:45:44.670883 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hlzqj" event={"ID":"1c959900-b48f-4c7a-b248-a0850c76d844","Type":"ContainerDied","Data":"90b6f5d4dec44eb9a00f3825bd6a3d738eff47feeb822843190dd578b4aaf85d"}
Feb 17 16:45:44 crc kubenswrapper[4874]: I0217 16:45:44.670908 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hlzqj" event={"ID":"1c959900-b48f-4c7a-b248-a0850c76d844","Type":"ContainerDied","Data":"6e2ccaefd6b126c0741933d77d91c1d39824ad9115e6915d47cef007e3c872ad"}
Feb 17 16:45:44 crc kubenswrapper[4874]: I0217 16:45:44.670922 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hlzqj"
Feb 17 16:45:44 crc kubenswrapper[4874]: I0217 16:45:44.670941 4874 scope.go:117] "RemoveContainer" containerID="90b6f5d4dec44eb9a00f3825bd6a3d738eff47feeb822843190dd578b4aaf85d"
Feb 17 16:45:44 crc kubenswrapper[4874]: I0217 16:45:44.713650 4874 scope.go:117] "RemoveContainer" containerID="902c11ac085ef9e83ba475d761aa29e53a8ee86373325372cb297f53954bff55"
Feb 17 16:45:44 crc kubenswrapper[4874]: I0217 16:45:44.737152 4874 scope.go:117] "RemoveContainer" containerID="f885e85ba10be08fccc435df4b40e40e4a54b86ce47528d2b9e776fa18b4f149"
Feb 17 16:45:44 crc kubenswrapper[4874]: I0217 16:45:44.765536 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rk9mv\" (UniqueName: \"kubernetes.io/projected/1c959900-b48f-4c7a-b248-a0850c76d844-kube-api-access-rk9mv\") pod \"1c959900-b48f-4c7a-b248-a0850c76d844\" (UID: \"1c959900-b48f-4c7a-b248-a0850c76d844\") "
Feb 17 16:45:44 crc kubenswrapper[4874]: I0217 16:45:44.765747 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c959900-b48f-4c7a-b248-a0850c76d844-catalog-content\") pod \"1c959900-b48f-4c7a-b248-a0850c76d844\" (UID: \"1c959900-b48f-4c7a-b248-a0850c76d844\") "
Feb 17 16:45:44 crc kubenswrapper[4874]: I0217 16:45:44.765837 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c959900-b48f-4c7a-b248-a0850c76d844-utilities\") pod \"1c959900-b48f-4c7a-b248-a0850c76d844\" (UID: \"1c959900-b48f-4c7a-b248-a0850c76d844\") "
Feb 17 16:45:44 crc kubenswrapper[4874]: I0217 16:45:44.766716 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c959900-b48f-4c7a-b248-a0850c76d844-utilities" (OuterVolumeSpecName: "utilities") pod "1c959900-b48f-4c7a-b248-a0850c76d844" (UID: "1c959900-b48f-4c7a-b248-a0850c76d844"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:45:44 crc kubenswrapper[4874]: I0217 16:45:44.771364 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c959900-b48f-4c7a-b248-a0850c76d844-kube-api-access-rk9mv" (OuterVolumeSpecName: "kube-api-access-rk9mv") pod "1c959900-b48f-4c7a-b248-a0850c76d844" (UID: "1c959900-b48f-4c7a-b248-a0850c76d844"). InnerVolumeSpecName "kube-api-access-rk9mv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:45:44 crc kubenswrapper[4874]: I0217 16:45:44.826334 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c959900-b48f-4c7a-b248-a0850c76d844-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c959900-b48f-4c7a-b248-a0850c76d844" (UID: "1c959900-b48f-4c7a-b248-a0850c76d844"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:45:44 crc kubenswrapper[4874]: I0217 16:45:44.869159 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rk9mv\" (UniqueName: \"kubernetes.io/projected/1c959900-b48f-4c7a-b248-a0850c76d844-kube-api-access-rk9mv\") on node \"crc\" DevicePath \"\""
Feb 17 16:45:44 crc kubenswrapper[4874]: I0217 16:45:44.869202 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c959900-b48f-4c7a-b248-a0850c76d844-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 16:45:44 crc kubenswrapper[4874]: I0217 16:45:44.869216 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c959900-b48f-4c7a-b248-a0850c76d844-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 16:45:44 crc kubenswrapper[4874]: I0217 16:45:44.879984 4874 scope.go:117] "RemoveContainer" containerID="90b6f5d4dec44eb9a00f3825bd6a3d738eff47feeb822843190dd578b4aaf85d"
Feb 17 16:45:44 crc kubenswrapper[4874]: E0217 16:45:44.880515 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90b6f5d4dec44eb9a00f3825bd6a3d738eff47feeb822843190dd578b4aaf85d\": container with ID starting with 90b6f5d4dec44eb9a00f3825bd6a3d738eff47feeb822843190dd578b4aaf85d not found: ID does not exist" containerID="90b6f5d4dec44eb9a00f3825bd6a3d738eff47feeb822843190dd578b4aaf85d"
Feb 17 16:45:44 crc kubenswrapper[4874]: I0217 16:45:44.880576 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90b6f5d4dec44eb9a00f3825bd6a3d738eff47feeb822843190dd578b4aaf85d"} err="failed to get container status \"90b6f5d4dec44eb9a00f3825bd6a3d738eff47feeb822843190dd578b4aaf85d\": rpc error: code = NotFound desc = could not find container \"90b6f5d4dec44eb9a00f3825bd6a3d738eff47feeb822843190dd578b4aaf85d\": container with ID starting with 90b6f5d4dec44eb9a00f3825bd6a3d738eff47feeb822843190dd578b4aaf85d not found: ID does not exist"
Feb 17 16:45:44 crc kubenswrapper[4874]: I0217 16:45:44.880624 4874 scope.go:117] "RemoveContainer" containerID="902c11ac085ef9e83ba475d761aa29e53a8ee86373325372cb297f53954bff55"
Feb 17 16:45:44 crc kubenswrapper[4874]: E0217 16:45:44.880984 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"902c11ac085ef9e83ba475d761aa29e53a8ee86373325372cb297f53954bff55\": container with ID starting with 902c11ac085ef9e83ba475d761aa29e53a8ee86373325372cb297f53954bff55 not found: ID does not exist" containerID="902c11ac085ef9e83ba475d761aa29e53a8ee86373325372cb297f53954bff55"
Feb 17 16:45:44 crc kubenswrapper[4874]: I0217 16:45:44.881029 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"902c11ac085ef9e83ba475d761aa29e53a8ee86373325372cb297f53954bff55"} err="failed to get container status \"902c11ac085ef9e83ba475d761aa29e53a8ee86373325372cb297f53954bff55\": rpc error: code = NotFound desc = could not find container \"902c11ac085ef9e83ba475d761aa29e53a8ee86373325372cb297f53954bff55\": container with ID starting with 902c11ac085ef9e83ba475d761aa29e53a8ee86373325372cb297f53954bff55 not found: ID does not exist"
Feb 17 16:45:44 crc kubenswrapper[4874]: I0217 16:45:44.881045 4874 scope.go:117] "RemoveContainer" containerID="f885e85ba10be08fccc435df4b40e40e4a54b86ce47528d2b9e776fa18b4f149"
Feb 17 16:45:44 crc kubenswrapper[4874]: E0217 16:45:44.881328 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f885e85ba10be08fccc435df4b40e40e4a54b86ce47528d2b9e776fa18b4f149\": container with ID starting with f885e85ba10be08fccc435df4b40e40e4a54b86ce47528d2b9e776fa18b4f149 not found: ID does not exist" containerID="f885e85ba10be08fccc435df4b40e40e4a54b86ce47528d2b9e776fa18b4f149"
Feb 17 16:45:44 crc kubenswrapper[4874]: I0217 16:45:44.881375 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f885e85ba10be08fccc435df4b40e40e4a54b86ce47528d2b9e776fa18b4f149"} err="failed to get container status \"f885e85ba10be08fccc435df4b40e40e4a54b86ce47528d2b9e776fa18b4f149\": rpc error: code = NotFound desc = could not find container \"f885e85ba10be08fccc435df4b40e40e4a54b86ce47528d2b9e776fa18b4f149\": container with ID starting with f885e85ba10be08fccc435df4b40e40e4a54b86ce47528d2b9e776fa18b4f149 not found: ID does not exist"
Feb 17 16:45:45 crc kubenswrapper[4874]: I0217 16:45:45.005876 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hlzqj"]
Feb 17 16:45:45 crc kubenswrapper[4874]: I0217 16:45:45.015658 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hlzqj"]
Feb 17 16:45:46 crc kubenswrapper[4874]: E0217 16:45:46.459624 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4"
Feb 17 16:45:46 crc kubenswrapper[4874]: I0217 16:45:46.483884 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c959900-b48f-4c7a-b248-a0850c76d844" path="/var/lib/kubelet/pods/1c959900-b48f-4c7a-b248-a0850c76d844/volumes"
Feb 17 16:45:50 crc kubenswrapper[4874]: I0217 16:45:50.242472 4874 scope.go:117] "RemoveContainer" containerID="7dbb1cdd0c6aed40daa7f6d829bcfa1c8c3e7d91e4b800c7ec7cad4b2e12ece2"
Feb 17 16:45:56 crc kubenswrapper[4874]: I0217 16:45:56.458027 4874 scope.go:117] "RemoveContainer" containerID="c176f5b5287d4fd95f2c95129bf20f63582440ae6fd611735dc8a4627ac1abf3"
Feb 17 16:45:56 crc kubenswrapper[4874]: E0217 16:45:56.458911 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39"
Feb 17 16:45:58 crc kubenswrapper[4874]: E0217 16:45:58.460363 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4"
Feb 17 16:45:59 crc kubenswrapper[4874]: E0217 16:45:59.460969 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d"
Feb 17 16:46:10 crc kubenswrapper[4874]: E0217 16:46:10.499293 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4"
Feb 17 16:46:11 crc kubenswrapper[4874]: I0217 16:46:11.458012 4874 scope.go:117] "RemoveContainer" containerID="c176f5b5287d4fd95f2c95129bf20f63582440ae6fd611735dc8a4627ac1abf3"
Feb 17 16:46:11 crc kubenswrapper[4874]: E0217 16:46:11.458480 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39"
Feb 17 16:46:13 crc kubenswrapper[4874]: E0217 16:46:13.458683 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d"
Feb 17 16:46:23 crc kubenswrapper[4874]: I0217 16:46:23.457887 4874 scope.go:117] "RemoveContainer" containerID="c176f5b5287d4fd95f2c95129bf20f63582440ae6fd611735dc8a4627ac1abf3"
Feb 17 16:46:23 crc kubenswrapper[4874]: E0217 16:46:23.458713 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39"
Feb 17 16:46:25 crc kubenswrapper[4874]: E0217 16:46:25.459209 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4"
Feb 17 16:46:25 crc kubenswrapper[4874]: E0217 16:46:25.459209 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d"
Feb 17 16:46:36 crc kubenswrapper[4874]: I0217 16:46:36.457359 4874 scope.go:117] "RemoveContainer" containerID="c176f5b5287d4fd95f2c95129bf20f63582440ae6fd611735dc8a4627ac1abf3"
Feb 17 16:46:36 crc kubenswrapper[4874]: E0217 16:46:36.458235 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39"
Feb 17 16:46:39 crc kubenswrapper[4874]: E0217 16:46:39.460555 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4"
Feb 17 16:46:40 crc kubenswrapper[4874]: E0217 16:46:40.469358 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d"
Feb 17 16:46:51 crc kubenswrapper[4874]: I0217 16:46:51.459258 4874 scope.go:117] "RemoveContainer" containerID="c176f5b5287d4fd95f2c95129bf20f63582440ae6fd611735dc8a4627ac1abf3"
Feb 17 16:46:51 crc kubenswrapper[4874]: E0217 16:46:51.460227 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39"
Feb 17 16:46:51 crc kubenswrapper[4874]: E0217 16:46:51.461308 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4"
Feb 17 16:46:55 crc kubenswrapper[4874]: E0217 16:46:55.459843 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d"
Feb 17 16:46:58 crc kubenswrapper[4874]: I0217 16:46:58.505198 4874 generic.go:334] "Generic (PLEG): container finished" podID="0d027e77-e298-4ee6-bad9-b12332cc3a81" containerID="a37dca808fc52f4b680140c6e4d572e61425608ff1cfe99d8f81df5486f9607e" exitCode=2
Feb 17 16:46:58 crc kubenswrapper[4874]: I0217 16:46:58.505568 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rspxw" event={"ID":"0d027e77-e298-4ee6-bad9-b12332cc3a81","Type":"ContainerDied","Data":"a37dca808fc52f4b680140c6e4d572e61425608ff1cfe99d8f81df5486f9607e"}
Feb 17 16:47:00 crc kubenswrapper[4874]: I0217 16:47:00.191120 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rspxw"
Feb 17 16:47:00 crc kubenswrapper[4874]: I0217 16:47:00.352009 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d027e77-e298-4ee6-bad9-b12332cc3a81-inventory\") pod \"0d027e77-e298-4ee6-bad9-b12332cc3a81\" (UID: \"0d027e77-e298-4ee6-bad9-b12332cc3a81\") "
Feb 17 16:47:00 crc kubenswrapper[4874]: I0217 16:47:00.352412 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d027e77-e298-4ee6-bad9-b12332cc3a81-ssh-key-openstack-edpm-ipam\") pod \"0d027e77-e298-4ee6-bad9-b12332cc3a81\" (UID: \"0d027e77-e298-4ee6-bad9-b12332cc3a81\") "
Feb 17 16:47:00 crc kubenswrapper[4874]: I0217 16:47:00.352669 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7ndw\" (UniqueName: \"kubernetes.io/projected/0d027e77-e298-4ee6-bad9-b12332cc3a81-kube-api-access-n7ndw\") pod \"0d027e77-e298-4ee6-bad9-b12332cc3a81\" (UID: \"0d027e77-e298-4ee6-bad9-b12332cc3a81\") "
Feb 17 16:47:00 crc kubenswrapper[4874]: I0217 16:47:00.357393 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d027e77-e298-4ee6-bad9-b12332cc3a81-kube-api-access-n7ndw" (OuterVolumeSpecName: "kube-api-access-n7ndw") pod "0d027e77-e298-4ee6-bad9-b12332cc3a81" (UID: "0d027e77-e298-4ee6-bad9-b12332cc3a81"). InnerVolumeSpecName "kube-api-access-n7ndw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:47:00 crc kubenswrapper[4874]: I0217 16:47:00.387965 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d027e77-e298-4ee6-bad9-b12332cc3a81-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0d027e77-e298-4ee6-bad9-b12332cc3a81" (UID: "0d027e77-e298-4ee6-bad9-b12332cc3a81"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:47:00 crc kubenswrapper[4874]: I0217 16:47:00.404922 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d027e77-e298-4ee6-bad9-b12332cc3a81-inventory" (OuterVolumeSpecName: "inventory") pod "0d027e77-e298-4ee6-bad9-b12332cc3a81" (UID: "0d027e77-e298-4ee6-bad9-b12332cc3a81"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:47:00 crc kubenswrapper[4874]: I0217 16:47:00.455678 4874 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d027e77-e298-4ee6-bad9-b12332cc3a81-inventory\") on node \"crc\" DevicePath \"\""
Feb 17 16:47:00 crc kubenswrapper[4874]: I0217 16:47:00.455720 4874 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d027e77-e298-4ee6-bad9-b12332cc3a81-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 17 16:47:00 crc kubenswrapper[4874]: I0217 16:47:00.455736 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7ndw\" (UniqueName: \"kubernetes.io/projected/0d027e77-e298-4ee6-bad9-b12332cc3a81-kube-api-access-n7ndw\") on node \"crc\" DevicePath \"\""
Feb 17 16:47:00 crc kubenswrapper[4874]: I0217 16:47:00.557522 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rspxw" event={"ID":"0d027e77-e298-4ee6-bad9-b12332cc3a81","Type":"ContainerDied","Data":"d1a3ab177cb8125ecf4e8b96c8829c58f4189e88e881e8052b42a1f7ddffc580"}
Feb 17 16:47:00 crc kubenswrapper[4874]: I0217 16:47:00.557797 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1a3ab177cb8125ecf4e8b96c8829c58f4189e88e881e8052b42a1f7ddffc580"
Feb 17 16:47:00 crc kubenswrapper[4874]: I0217 16:47:00.557614 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rspxw"
Feb 17 16:47:02 crc kubenswrapper[4874]: I0217 16:47:02.473771 4874 scope.go:117] "RemoveContainer" containerID="c176f5b5287d4fd95f2c95129bf20f63582440ae6fd611735dc8a4627ac1abf3"
Feb 17 16:47:02 crc kubenswrapper[4874]: E0217 16:47:02.474814 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39"
Feb 17 16:47:03 crc kubenswrapper[4874]: E0217 16:47:03.463935 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4"
Feb 17 16:47:11 crc kubenswrapper[4874]: E0217 16:47:11.460045 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d"
Feb 17 16:47:14 crc kubenswrapper[4874]: I0217 16:47:14.458337 4874 scope.go:117] "RemoveContainer" containerID="c176f5b5287d4fd95f2c95129bf20f63582440ae6fd611735dc8a4627ac1abf3"
Feb 17 16:47:14 crc kubenswrapper[4874]: E0217 16:47:14.458780 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39"
Feb 17 16:47:16 crc kubenswrapper[4874]: E0217 16:47:16.460161 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4"
Feb 17 16:47:17 crc kubenswrapper[4874]: I0217 16:47:17.040516 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ldw5c"]
Feb 17 16:47:17 crc kubenswrapper[4874]: E0217 16:47:17.041153 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c959900-b48f-4c7a-b248-a0850c76d844" containerName="registry-server"
Feb 17 16:47:17 crc kubenswrapper[4874]: I0217 16:47:17.041182 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c959900-b48f-4c7a-b248-a0850c76d844" containerName="registry-server"
Feb 17 16:47:17 crc kubenswrapper[4874]: E0217 16:47:17.041226 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c959900-b48f-4c7a-b248-a0850c76d844" containerName="extract-utilities"
Feb 17 16:47:17 crc kubenswrapper[4874]: I0217 16:47:17.041236 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c959900-b48f-4c7a-b248-a0850c76d844" containerName="extract-utilities"
Feb 17 16:47:17 crc kubenswrapper[4874]: E0217 16:47:17.041254 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d027e77-e298-4ee6-bad9-b12332cc3a81" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 17 16:47:17 crc kubenswrapper[4874]: I0217 16:47:17.041264 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d027e77-e298-4ee6-bad9-b12332cc3a81" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 17 16:47:17 crc kubenswrapper[4874]: E0217 16:47:17.041298 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c959900-b48f-4c7a-b248-a0850c76d844" containerName="extract-content"
Feb 17 16:47:17 crc kubenswrapper[4874]: I0217 16:47:17.041308 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c959900-b48f-4c7a-b248-a0850c76d844" containerName="extract-content"
Feb 17 16:47:17 crc kubenswrapper[4874]: I0217 16:47:17.041697 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c959900-b48f-4c7a-b248-a0850c76d844" containerName="registry-server"
Feb 17 16:47:17 crc kubenswrapper[4874]: I0217 16:47:17.041728 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d027e77-e298-4ee6-bad9-b12332cc3a81" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 17 16:47:17 crc kubenswrapper[4874]: I0217 16:47:17.042951 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ldw5c"
Feb 17 16:47:17 crc kubenswrapper[4874]: I0217 16:47:17.046669 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 17 16:47:17 crc kubenswrapper[4874]: I0217 16:47:17.046666 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 17 16:47:17 crc kubenswrapper[4874]: I0217 16:47:17.046830 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 17 16:47:17 crc kubenswrapper[4874]: I0217 16:47:17.048160 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dsqj4"
Feb 17 16:47:17 crc kubenswrapper[4874]: I0217 16:47:17.075321 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ldw5c"]
Feb 17 16:47:17 crc kubenswrapper[4874]: I0217 16:47:17.098573 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n7x4\" (UniqueName: \"kubernetes.io/projected/22a145b3-1fbd-43be-9c83-9a04d4506430-kube-api-access-8n7x4\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ldw5c\" (UID: \"22a145b3-1fbd-43be-9c83-9a04d4506430\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ldw5c"
Feb 17 16:47:17 crc kubenswrapper[4874]: I0217 16:47:17.098925 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/22a145b3-1fbd-43be-9c83-9a04d4506430-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ldw5c\" (UID: \"22a145b3-1fbd-43be-9c83-9a04d4506430\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ldw5c"
Feb 17 16:47:17 crc kubenswrapper[4874]: I0217 16:47:17.098962 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/22a145b3-1fbd-43be-9c83-9a04d4506430-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ldw5c\" (UID: \"22a145b3-1fbd-43be-9c83-9a04d4506430\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ldw5c"
Feb 17 16:47:17 crc kubenswrapper[4874]: I0217 16:47:17.203191 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n7x4\" (UniqueName: \"kubernetes.io/projected/22a145b3-1fbd-43be-9c83-9a04d4506430-kube-api-access-8n7x4\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ldw5c\" (UID: \"22a145b3-1fbd-43be-9c83-9a04d4506430\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ldw5c"
Feb 17 16:47:17 crc kubenswrapper[4874]: I0217 16:47:17.204048 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/22a145b3-1fbd-43be-9c83-9a04d4506430-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ldw5c\" (UID: \"22a145b3-1fbd-43be-9c83-9a04d4506430\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ldw5c"
Feb 17 16:47:17 crc kubenswrapper[4874]: I0217 16:47:17.204116 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/22a145b3-1fbd-43be-9c83-9a04d4506430-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ldw5c\" (UID: \"22a145b3-1fbd-43be-9c83-9a04d4506430\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ldw5c"
Feb 17 16:47:17 crc kubenswrapper[4874]: I0217 16:47:17.212990 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/22a145b3-1fbd-43be-9c83-9a04d4506430-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ldw5c\" (UID: \"22a145b3-1fbd-43be-9c83-9a04d4506430\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ldw5c"
Feb 17 16:47:17 crc kubenswrapper[4874]: I0217 16:47:17.213483 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/22a145b3-1fbd-43be-9c83-9a04d4506430-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ldw5c\" (UID: \"22a145b3-1fbd-43be-9c83-9a04d4506430\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ldw5c"
Feb 17 16:47:17 crc kubenswrapper[4874]: I0217 16:47:17.223278 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n7x4\" (UniqueName: \"kubernetes.io/projected/22a145b3-1fbd-43be-9c83-9a04d4506430-kube-api-access-8n7x4\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ldw5c\" (UID: \"22a145b3-1fbd-43be-9c83-9a04d4506430\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ldw5c"
Feb 17 16:47:17 crc kubenswrapper[4874]: I0217 16:47:17.377020 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ldw5c" Feb 17 16:47:17 crc kubenswrapper[4874]: I0217 16:47:17.940654 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ldw5c"] Feb 17 16:47:18 crc kubenswrapper[4874]: I0217 16:47:18.734282 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ldw5c" event={"ID":"22a145b3-1fbd-43be-9c83-9a04d4506430","Type":"ContainerStarted","Data":"06020d29ed795e8f7f90e846160ce3091c6a759eb3f94fb5cc9cc76d24fb4d33"} Feb 17 16:47:19 crc kubenswrapper[4874]: I0217 16:47:19.744238 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ldw5c" event={"ID":"22a145b3-1fbd-43be-9c83-9a04d4506430","Type":"ContainerStarted","Data":"c9b2da86d5b165c9cda4debe4feba01f9799aef900dcbe4ef12edce2ef559820"} Feb 17 16:47:19 crc kubenswrapper[4874]: I0217 16:47:19.767967 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ldw5c" podStartSLOduration=2.302572832 podStartE2EDuration="2.767949962s" podCreationTimestamp="2026-02-17 16:47:17 +0000 UTC" firstStartedPulling="2026-02-17 16:47:17.947629704 +0000 UTC m=+2648.242018265" lastFinishedPulling="2026-02-17 16:47:18.413006834 +0000 UTC m=+2648.707395395" observedRunningTime="2026-02-17 16:47:19.76385838 +0000 UTC m=+2650.058246941" watchObservedRunningTime="2026-02-17 16:47:19.767949962 +0000 UTC m=+2650.062338513" Feb 17 16:47:22 crc kubenswrapper[4874]: E0217 16:47:22.459819 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" 
podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:47:28 crc kubenswrapper[4874]: I0217 16:47:28.457743 4874 scope.go:117] "RemoveContainer" containerID="c176f5b5287d4fd95f2c95129bf20f63582440ae6fd611735dc8a4627ac1abf3" Feb 17 16:47:28 crc kubenswrapper[4874]: E0217 16:47:28.458627 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:47:28 crc kubenswrapper[4874]: E0217 16:47:28.459981 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:47:34 crc kubenswrapper[4874]: E0217 16:47:34.461025 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:47:39 crc kubenswrapper[4874]: I0217 16:47:39.458247 4874 scope.go:117] "RemoveContainer" containerID="c176f5b5287d4fd95f2c95129bf20f63582440ae6fd611735dc8a4627ac1abf3" Feb 17 16:47:39 crc kubenswrapper[4874]: E0217 16:47:39.459004 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:47:40 crc kubenswrapper[4874]: E0217 16:47:40.470720 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:47:45 crc kubenswrapper[4874]: E0217 16:47:45.459289 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:47:53 crc kubenswrapper[4874]: E0217 16:47:53.460594 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:47:54 crc kubenswrapper[4874]: I0217 16:47:54.458374 4874 scope.go:117] "RemoveContainer" containerID="c176f5b5287d4fd95f2c95129bf20f63582440ae6fd611735dc8a4627ac1abf3" Feb 17 16:47:54 crc kubenswrapper[4874]: E0217 16:47:54.459402 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:47:59 crc kubenswrapper[4874]: E0217 16:47:59.460248 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:48:06 crc kubenswrapper[4874]: E0217 16:48:06.459135 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:48:09 crc kubenswrapper[4874]: I0217 16:48:09.458632 4874 scope.go:117] "RemoveContainer" containerID="c176f5b5287d4fd95f2c95129bf20f63582440ae6fd611735dc8a4627ac1abf3" Feb 17 16:48:10 crc kubenswrapper[4874]: I0217 16:48:10.392909 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerStarted","Data":"14f801991ed13f0df4772429c4adbfe835f5a9746c0f47d04396777f4307053f"} Feb 17 16:48:10 crc kubenswrapper[4874]: E0217 16:48:10.471747 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:48:18 crc kubenswrapper[4874]: E0217 16:48:18.459594 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:48:25 crc kubenswrapper[4874]: E0217 16:48:25.460625 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:48:33 crc kubenswrapper[4874]: E0217 16:48:33.459559 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:48:37 crc kubenswrapper[4874]: E0217 16:48:37.460084 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:48:46 crc kubenswrapper[4874]: E0217 16:48:46.462980 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:48:48 crc kubenswrapper[4874]: E0217 16:48:48.461209 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:48:58 crc kubenswrapper[4874]: E0217 16:48:58.460840 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:49:02 crc kubenswrapper[4874]: E0217 16:49:02.460040 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:49:11 crc kubenswrapper[4874]: E0217 16:49:11.460869 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:49:13 crc kubenswrapper[4874]: E0217 16:49:13.459443 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:49:26 crc kubenswrapper[4874]: I0217 16:49:26.461304 4874 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Feb 17 16:49:26 crc kubenswrapper[4874]: E0217 16:49:26.461466 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:49:26 crc kubenswrapper[4874]: E0217 16:49:26.572718 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:49:26 crc kubenswrapper[4874]: E0217 16:49:26.572781 4874 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:49:26 crc kubenswrapper[4874]: E0217 16:49:26.572936 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n699h646hf7h59dhdch5d8h679h9dhdch5c7hd5h5bch655h5bfh674h596h5d6h64dh65bh694h67fh66dh5bdhd7h568h697h58bh5b4h59fh694h584h656q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6zkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(cc29c300-b515-47d8-9326-1839ed7772b4): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:49:26 crc kubenswrapper[4874]: E0217 16:49:26.574135 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:49:38 crc kubenswrapper[4874]: E0217 16:49:38.460656 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:49:39 crc kubenswrapper[4874]: E0217 16:49:39.591302 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:49:39 crc kubenswrapper[4874]: E0217 16:49:39.592153 4874 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:49:39 crc kubenswrapper[4874]: E0217 16:49:39.592391 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtgnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-ddhb8_openstack(122736d5-78f5-42dc-b6ab-343724bac19d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:49:39 crc kubenswrapper[4874]: E0217 16:49:39.593763 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:49:50 crc kubenswrapper[4874]: E0217 16:49:50.462729 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:49:50 crc kubenswrapper[4874]: E0217 16:49:50.463532 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:50:01 crc kubenswrapper[4874]: E0217 16:50:01.458854 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:50:02 crc kubenswrapper[4874]: I0217 16:50:02.994143 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r822l"] Feb 17 16:50:03 crc kubenswrapper[4874]: I0217 16:50:03.053248 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r822l"] Feb 17 16:50:03 crc kubenswrapper[4874]: I0217 16:50:03.053360 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r822l" Feb 17 16:50:03 crc kubenswrapper[4874]: I0217 16:50:03.199442 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9336354d-6c4f-4fd7-a269-0059db00bff1-catalog-content\") pod \"redhat-operators-r822l\" (UID: \"9336354d-6c4f-4fd7-a269-0059db00bff1\") " pod="openshift-marketplace/redhat-operators-r822l" Feb 17 16:50:03 crc kubenswrapper[4874]: I0217 16:50:03.199550 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9336354d-6c4f-4fd7-a269-0059db00bff1-utilities\") pod \"redhat-operators-r822l\" (UID: \"9336354d-6c4f-4fd7-a269-0059db00bff1\") " pod="openshift-marketplace/redhat-operators-r822l" Feb 17 16:50:03 crc kubenswrapper[4874]: I0217 16:50:03.199611 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg295\" (UniqueName: \"kubernetes.io/projected/9336354d-6c4f-4fd7-a269-0059db00bff1-kube-api-access-xg295\") pod \"redhat-operators-r822l\" (UID: \"9336354d-6c4f-4fd7-a269-0059db00bff1\") " pod="openshift-marketplace/redhat-operators-r822l" Feb 17 16:50:03 crc kubenswrapper[4874]: I0217 16:50:03.301725 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9336354d-6c4f-4fd7-a269-0059db00bff1-catalog-content\") pod \"redhat-operators-r822l\" (UID: \"9336354d-6c4f-4fd7-a269-0059db00bff1\") " pod="openshift-marketplace/redhat-operators-r822l" Feb 17 16:50:03 crc kubenswrapper[4874]: I0217 16:50:03.301830 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9336354d-6c4f-4fd7-a269-0059db00bff1-utilities\") pod \"redhat-operators-r822l\" (UID: 
\"9336354d-6c4f-4fd7-a269-0059db00bff1\") " pod="openshift-marketplace/redhat-operators-r822l" Feb 17 16:50:03 crc kubenswrapper[4874]: I0217 16:50:03.301885 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg295\" (UniqueName: \"kubernetes.io/projected/9336354d-6c4f-4fd7-a269-0059db00bff1-kube-api-access-xg295\") pod \"redhat-operators-r822l\" (UID: \"9336354d-6c4f-4fd7-a269-0059db00bff1\") " pod="openshift-marketplace/redhat-operators-r822l" Feb 17 16:50:03 crc kubenswrapper[4874]: I0217 16:50:03.302745 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9336354d-6c4f-4fd7-a269-0059db00bff1-catalog-content\") pod \"redhat-operators-r822l\" (UID: \"9336354d-6c4f-4fd7-a269-0059db00bff1\") " pod="openshift-marketplace/redhat-operators-r822l" Feb 17 16:50:03 crc kubenswrapper[4874]: I0217 16:50:03.302995 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9336354d-6c4f-4fd7-a269-0059db00bff1-utilities\") pod \"redhat-operators-r822l\" (UID: \"9336354d-6c4f-4fd7-a269-0059db00bff1\") " pod="openshift-marketplace/redhat-operators-r822l" Feb 17 16:50:03 crc kubenswrapper[4874]: I0217 16:50:03.378454 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg295\" (UniqueName: \"kubernetes.io/projected/9336354d-6c4f-4fd7-a269-0059db00bff1-kube-api-access-xg295\") pod \"redhat-operators-r822l\" (UID: \"9336354d-6c4f-4fd7-a269-0059db00bff1\") " pod="openshift-marketplace/redhat-operators-r822l" Feb 17 16:50:03 crc kubenswrapper[4874]: I0217 16:50:03.379171 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r822l" Feb 17 16:50:03 crc kubenswrapper[4874]: I0217 16:50:03.879355 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r822l"] Feb 17 16:50:04 crc kubenswrapper[4874]: I0217 16:50:04.693839 4874 generic.go:334] "Generic (PLEG): container finished" podID="9336354d-6c4f-4fd7-a269-0059db00bff1" containerID="ae623e09ffe197924cb09a574233d390e9b10f6d45610891f92a1213d65b31d4" exitCode=0 Feb 17 16:50:04 crc kubenswrapper[4874]: I0217 16:50:04.694144 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r822l" event={"ID":"9336354d-6c4f-4fd7-a269-0059db00bff1","Type":"ContainerDied","Data":"ae623e09ffe197924cb09a574233d390e9b10f6d45610891f92a1213d65b31d4"} Feb 17 16:50:04 crc kubenswrapper[4874]: I0217 16:50:04.694172 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r822l" event={"ID":"9336354d-6c4f-4fd7-a269-0059db00bff1","Type":"ContainerStarted","Data":"a83b1c310127c5d5cd45b1f47e764c874326ac1eaaadfeeca8fb70e19976fc2a"} Feb 17 16:50:05 crc kubenswrapper[4874]: E0217 16:50:05.460009 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:50:05 crc kubenswrapper[4874]: I0217 16:50:05.705954 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r822l" event={"ID":"9336354d-6c4f-4fd7-a269-0059db00bff1","Type":"ContainerStarted","Data":"bc30271082b0190a841a5982ed8045958cbd88a5e672d569ca2fd19fe25d9045"} Feb 17 16:50:11 crc kubenswrapper[4874]: I0217 16:50:11.198060 4874 generic.go:334] "Generic (PLEG): container finished" 
podID="9336354d-6c4f-4fd7-a269-0059db00bff1" containerID="bc30271082b0190a841a5982ed8045958cbd88a5e672d569ca2fd19fe25d9045" exitCode=0 Feb 17 16:50:11 crc kubenswrapper[4874]: I0217 16:50:11.198148 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r822l" event={"ID":"9336354d-6c4f-4fd7-a269-0059db00bff1","Type":"ContainerDied","Data":"bc30271082b0190a841a5982ed8045958cbd88a5e672d569ca2fd19fe25d9045"} Feb 17 16:50:12 crc kubenswrapper[4874]: I0217 16:50:12.211648 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r822l" event={"ID":"9336354d-6c4f-4fd7-a269-0059db00bff1","Type":"ContainerStarted","Data":"e84bd8b32758b7fda79cec80324ea7390afbfd748759ad51003551bd2dab2fe9"} Feb 17 16:50:12 crc kubenswrapper[4874]: I0217 16:50:12.237103 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-r822l" podStartSLOduration=3.340194344 podStartE2EDuration="10.237053292s" podCreationTimestamp="2026-02-17 16:50:02 +0000 UTC" firstStartedPulling="2026-02-17 16:50:04.696210235 +0000 UTC m=+2814.990598796" lastFinishedPulling="2026-02-17 16:50:11.593069183 +0000 UTC m=+2821.887457744" observedRunningTime="2026-02-17 16:50:12.23253813 +0000 UTC m=+2822.526926691" watchObservedRunningTime="2026-02-17 16:50:12.237053292 +0000 UTC m=+2822.531441863" Feb 17 16:50:12 crc kubenswrapper[4874]: E0217 16:50:12.460030 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:50:13 crc kubenswrapper[4874]: I0217 16:50:13.380413 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-r822l" 
Feb 17 16:50:13 crc kubenswrapper[4874]: I0217 16:50:13.380457 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r822l" Feb 17 16:50:14 crc kubenswrapper[4874]: I0217 16:50:14.435019 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r822l" podUID="9336354d-6c4f-4fd7-a269-0059db00bff1" containerName="registry-server" probeResult="failure" output=< Feb 17 16:50:14 crc kubenswrapper[4874]: timeout: failed to connect service ":50051" within 1s Feb 17 16:50:14 crc kubenswrapper[4874]: > Feb 17 16:50:18 crc kubenswrapper[4874]: E0217 16:50:18.459834 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:50:24 crc kubenswrapper[4874]: I0217 16:50:24.430840 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r822l" podUID="9336354d-6c4f-4fd7-a269-0059db00bff1" containerName="registry-server" probeResult="failure" output=< Feb 17 16:50:24 crc kubenswrapper[4874]: timeout: failed to connect service ":50051" within 1s Feb 17 16:50:24 crc kubenswrapper[4874]: > Feb 17 16:50:24 crc kubenswrapper[4874]: E0217 16:50:24.462913 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:50:27 crc kubenswrapper[4874]: I0217 16:50:27.725062 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:50:27 crc kubenswrapper[4874]: I0217 16:50:27.725716 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:50:31 crc kubenswrapper[4874]: E0217 16:50:31.460525 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:50:34 crc kubenswrapper[4874]: I0217 16:50:34.443679 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r822l" podUID="9336354d-6c4f-4fd7-a269-0059db00bff1" containerName="registry-server" probeResult="failure" output=< Feb 17 16:50:34 crc kubenswrapper[4874]: timeout: failed to connect service ":50051" within 1s Feb 17 16:50:34 crc kubenswrapper[4874]: > Feb 17 16:50:35 crc kubenswrapper[4874]: E0217 16:50:35.459929 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:50:43 crc kubenswrapper[4874]: I0217 16:50:43.437876 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-r822l" Feb 17 
16:50:43 crc kubenswrapper[4874]: I0217 16:50:43.504895 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-r822l" Feb 17 16:50:43 crc kubenswrapper[4874]: I0217 16:50:43.682409 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r822l"] Feb 17 16:50:44 crc kubenswrapper[4874]: E0217 16:50:44.460156 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:50:44 crc kubenswrapper[4874]: I0217 16:50:44.583893 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r822l" podUID="9336354d-6c4f-4fd7-a269-0059db00bff1" containerName="registry-server" containerID="cri-o://e84bd8b32758b7fda79cec80324ea7390afbfd748759ad51003551bd2dab2fe9" gracePeriod=2 Feb 17 16:50:45 crc kubenswrapper[4874]: I0217 16:50:45.300898 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r822l" Feb 17 16:50:45 crc kubenswrapper[4874]: I0217 16:50:45.368614 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9336354d-6c4f-4fd7-a269-0059db00bff1-catalog-content\") pod \"9336354d-6c4f-4fd7-a269-0059db00bff1\" (UID: \"9336354d-6c4f-4fd7-a269-0059db00bff1\") " Feb 17 16:50:45 crc kubenswrapper[4874]: I0217 16:50:45.368793 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9336354d-6c4f-4fd7-a269-0059db00bff1-utilities\") pod \"9336354d-6c4f-4fd7-a269-0059db00bff1\" (UID: \"9336354d-6c4f-4fd7-a269-0059db00bff1\") " Feb 17 16:50:45 crc kubenswrapper[4874]: I0217 16:50:45.368929 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg295\" (UniqueName: \"kubernetes.io/projected/9336354d-6c4f-4fd7-a269-0059db00bff1-kube-api-access-xg295\") pod \"9336354d-6c4f-4fd7-a269-0059db00bff1\" (UID: \"9336354d-6c4f-4fd7-a269-0059db00bff1\") " Feb 17 16:50:45 crc kubenswrapper[4874]: I0217 16:50:45.370227 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9336354d-6c4f-4fd7-a269-0059db00bff1-utilities" (OuterVolumeSpecName: "utilities") pod "9336354d-6c4f-4fd7-a269-0059db00bff1" (UID: "9336354d-6c4f-4fd7-a269-0059db00bff1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:50:45 crc kubenswrapper[4874]: I0217 16:50:45.379895 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9336354d-6c4f-4fd7-a269-0059db00bff1-kube-api-access-xg295" (OuterVolumeSpecName: "kube-api-access-xg295") pod "9336354d-6c4f-4fd7-a269-0059db00bff1" (UID: "9336354d-6c4f-4fd7-a269-0059db00bff1"). InnerVolumeSpecName "kube-api-access-xg295". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:50:45 crc kubenswrapper[4874]: I0217 16:50:45.471549 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9336354d-6c4f-4fd7-a269-0059db00bff1-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:50:45 crc kubenswrapper[4874]: I0217 16:50:45.471578 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg295\" (UniqueName: \"kubernetes.io/projected/9336354d-6c4f-4fd7-a269-0059db00bff1-kube-api-access-xg295\") on node \"crc\" DevicePath \"\"" Feb 17 16:50:45 crc kubenswrapper[4874]: I0217 16:50:45.486947 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9336354d-6c4f-4fd7-a269-0059db00bff1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9336354d-6c4f-4fd7-a269-0059db00bff1" (UID: "9336354d-6c4f-4fd7-a269-0059db00bff1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:50:45 crc kubenswrapper[4874]: I0217 16:50:45.573900 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9336354d-6c4f-4fd7-a269-0059db00bff1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:50:45 crc kubenswrapper[4874]: I0217 16:50:45.599149 4874 generic.go:334] "Generic (PLEG): container finished" podID="9336354d-6c4f-4fd7-a269-0059db00bff1" containerID="e84bd8b32758b7fda79cec80324ea7390afbfd748759ad51003551bd2dab2fe9" exitCode=0 Feb 17 16:50:45 crc kubenswrapper[4874]: I0217 16:50:45.599197 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r822l" event={"ID":"9336354d-6c4f-4fd7-a269-0059db00bff1","Type":"ContainerDied","Data":"e84bd8b32758b7fda79cec80324ea7390afbfd748759ad51003551bd2dab2fe9"} Feb 17 16:50:45 crc kubenswrapper[4874]: I0217 16:50:45.599224 4874 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r822l" Feb 17 16:50:45 crc kubenswrapper[4874]: I0217 16:50:45.599229 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r822l" event={"ID":"9336354d-6c4f-4fd7-a269-0059db00bff1","Type":"ContainerDied","Data":"a83b1c310127c5d5cd45b1f47e764c874326ac1eaaadfeeca8fb70e19976fc2a"} Feb 17 16:50:45 crc kubenswrapper[4874]: I0217 16:50:45.599243 4874 scope.go:117] "RemoveContainer" containerID="e84bd8b32758b7fda79cec80324ea7390afbfd748759ad51003551bd2dab2fe9" Feb 17 16:50:45 crc kubenswrapper[4874]: I0217 16:50:45.622068 4874 scope.go:117] "RemoveContainer" containerID="bc30271082b0190a841a5982ed8045958cbd88a5e672d569ca2fd19fe25d9045" Feb 17 16:50:45 crc kubenswrapper[4874]: I0217 16:50:45.640541 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r822l"] Feb 17 16:50:45 crc kubenswrapper[4874]: I0217 16:50:45.651533 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-r822l"] Feb 17 16:50:45 crc kubenswrapper[4874]: I0217 16:50:45.665434 4874 scope.go:117] "RemoveContainer" containerID="ae623e09ffe197924cb09a574233d390e9b10f6d45610891f92a1213d65b31d4" Feb 17 16:50:45 crc kubenswrapper[4874]: I0217 16:50:45.703490 4874 scope.go:117] "RemoveContainer" containerID="e84bd8b32758b7fda79cec80324ea7390afbfd748759ad51003551bd2dab2fe9" Feb 17 16:50:45 crc kubenswrapper[4874]: E0217 16:50:45.704070 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e84bd8b32758b7fda79cec80324ea7390afbfd748759ad51003551bd2dab2fe9\": container with ID starting with e84bd8b32758b7fda79cec80324ea7390afbfd748759ad51003551bd2dab2fe9 not found: ID does not exist" containerID="e84bd8b32758b7fda79cec80324ea7390afbfd748759ad51003551bd2dab2fe9" Feb 17 16:50:45 crc kubenswrapper[4874]: I0217 16:50:45.704170 4874 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e84bd8b32758b7fda79cec80324ea7390afbfd748759ad51003551bd2dab2fe9"} err="failed to get container status \"e84bd8b32758b7fda79cec80324ea7390afbfd748759ad51003551bd2dab2fe9\": rpc error: code = NotFound desc = could not find container \"e84bd8b32758b7fda79cec80324ea7390afbfd748759ad51003551bd2dab2fe9\": container with ID starting with e84bd8b32758b7fda79cec80324ea7390afbfd748759ad51003551bd2dab2fe9 not found: ID does not exist" Feb 17 16:50:45 crc kubenswrapper[4874]: I0217 16:50:45.704218 4874 scope.go:117] "RemoveContainer" containerID="bc30271082b0190a841a5982ed8045958cbd88a5e672d569ca2fd19fe25d9045" Feb 17 16:50:45 crc kubenswrapper[4874]: E0217 16:50:45.704774 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc30271082b0190a841a5982ed8045958cbd88a5e672d569ca2fd19fe25d9045\": container with ID starting with bc30271082b0190a841a5982ed8045958cbd88a5e672d569ca2fd19fe25d9045 not found: ID does not exist" containerID="bc30271082b0190a841a5982ed8045958cbd88a5e672d569ca2fd19fe25d9045" Feb 17 16:50:45 crc kubenswrapper[4874]: I0217 16:50:45.704825 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc30271082b0190a841a5982ed8045958cbd88a5e672d569ca2fd19fe25d9045"} err="failed to get container status \"bc30271082b0190a841a5982ed8045958cbd88a5e672d569ca2fd19fe25d9045\": rpc error: code = NotFound desc = could not find container \"bc30271082b0190a841a5982ed8045958cbd88a5e672d569ca2fd19fe25d9045\": container with ID starting with bc30271082b0190a841a5982ed8045958cbd88a5e672d569ca2fd19fe25d9045 not found: ID does not exist" Feb 17 16:50:45 crc kubenswrapper[4874]: I0217 16:50:45.704867 4874 scope.go:117] "RemoveContainer" containerID="ae623e09ffe197924cb09a574233d390e9b10f6d45610891f92a1213d65b31d4" Feb 17 16:50:45 crc kubenswrapper[4874]: E0217 
16:50:45.705233 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae623e09ffe197924cb09a574233d390e9b10f6d45610891f92a1213d65b31d4\": container with ID starting with ae623e09ffe197924cb09a574233d390e9b10f6d45610891f92a1213d65b31d4 not found: ID does not exist" containerID="ae623e09ffe197924cb09a574233d390e9b10f6d45610891f92a1213d65b31d4" Feb 17 16:50:45 crc kubenswrapper[4874]: I0217 16:50:45.705256 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae623e09ffe197924cb09a574233d390e9b10f6d45610891f92a1213d65b31d4"} err="failed to get container status \"ae623e09ffe197924cb09a574233d390e9b10f6d45610891f92a1213d65b31d4\": rpc error: code = NotFound desc = could not find container \"ae623e09ffe197924cb09a574233d390e9b10f6d45610891f92a1213d65b31d4\": container with ID starting with ae623e09ffe197924cb09a574233d390e9b10f6d45610891f92a1213d65b31d4 not found: ID does not exist" Feb 17 16:50:46 crc kubenswrapper[4874]: I0217 16:50:46.477433 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9336354d-6c4f-4fd7-a269-0059db00bff1" path="/var/lib/kubelet/pods/9336354d-6c4f-4fd7-a269-0059db00bff1/volumes" Feb 17 16:50:49 crc kubenswrapper[4874]: E0217 16:50:49.460237 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:50:56 crc kubenswrapper[4874]: E0217 16:50:56.460745 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:50:57 crc kubenswrapper[4874]: I0217 16:50:57.724700 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:50:57 crc kubenswrapper[4874]: I0217 16:50:57.725579 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:51:02 crc kubenswrapper[4874]: E0217 16:51:02.460260 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:51:09 crc kubenswrapper[4874]: E0217 16:51:09.460600 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:51:14 crc kubenswrapper[4874]: E0217 16:51:14.460181 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:51:21 crc kubenswrapper[4874]: E0217 16:51:21.461847 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:51:27 crc kubenswrapper[4874]: E0217 16:51:27.461674 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:51:27 crc kubenswrapper[4874]: I0217 16:51:27.724691 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:51:27 crc kubenswrapper[4874]: I0217 16:51:27.724761 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:51:27 crc kubenswrapper[4874]: I0217 16:51:27.724819 4874 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" Feb 17 16:51:27 crc kubenswrapper[4874]: I0217 16:51:27.725911 4874 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"14f801991ed13f0df4772429c4adbfe835f5a9746c0f47d04396777f4307053f"} pod="openshift-machine-config-operator/machine-config-daemon-cccdg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:51:27 crc kubenswrapper[4874]: I0217 16:51:27.725993 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" containerID="cri-o://14f801991ed13f0df4772429c4adbfe835f5a9746c0f47d04396777f4307053f" gracePeriod=600 Feb 17 16:51:28 crc kubenswrapper[4874]: I0217 16:51:28.034216 4874 generic.go:334] "Generic (PLEG): container finished" podID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerID="14f801991ed13f0df4772429c4adbfe835f5a9746c0f47d04396777f4307053f" exitCode=0 Feb 17 16:51:28 crc kubenswrapper[4874]: I0217 16:51:28.034311 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerDied","Data":"14f801991ed13f0df4772429c4adbfe835f5a9746c0f47d04396777f4307053f"} Feb 17 16:51:28 crc kubenswrapper[4874]: I0217 16:51:28.034743 4874 scope.go:117] "RemoveContainer" containerID="c176f5b5287d4fd95f2c95129bf20f63582440ae6fd611735dc8a4627ac1abf3" Feb 17 16:51:29 crc kubenswrapper[4874]: I0217 16:51:29.050280 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerStarted","Data":"d5c223a2be3527987f38fd01537a80f90c508c61a8c6980341fe8404012e1d41"} Feb 17 16:51:33 crc kubenswrapper[4874]: E0217 16:51:33.460162 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:51:40 crc kubenswrapper[4874]: E0217 16:51:40.472840 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:51:47 crc kubenswrapper[4874]: E0217 16:51:47.458613 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:51:53 crc kubenswrapper[4874]: E0217 16:51:53.461513 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:52:01 crc kubenswrapper[4874]: E0217 16:52:01.459504 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:52:05 crc kubenswrapper[4874]: E0217 16:52:05.460811 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:52:12 crc kubenswrapper[4874]: E0217 16:52:12.459640 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:52:19 crc kubenswrapper[4874]: E0217 16:52:19.461186 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:52:26 crc kubenswrapper[4874]: E0217 16:52:26.459864 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:52:30 crc kubenswrapper[4874]: E0217 16:52:30.477712 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:52:41 crc kubenswrapper[4874]: E0217 16:52:41.460914 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:52:45 crc kubenswrapper[4874]: E0217 16:52:45.460418 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:52:56 crc kubenswrapper[4874]: E0217 16:52:56.461649 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:52:57 crc kubenswrapper[4874]: E0217 16:52:57.459276 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:53:09 crc kubenswrapper[4874]: E0217 16:53:09.462502 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:53:11 crc kubenswrapper[4874]: E0217 16:53:11.461928 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:53:20 crc kubenswrapper[4874]: E0217 16:53:20.479129 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:53:25 crc kubenswrapper[4874]: E0217 16:53:25.460656 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:53:32 crc kubenswrapper[4874]: E0217 16:53:32.460799 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:53:33 crc kubenswrapper[4874]: I0217 16:53:33.548105 4874 generic.go:334] "Generic (PLEG): container finished" podID="22a145b3-1fbd-43be-9c83-9a04d4506430" containerID="c9b2da86d5b165c9cda4debe4feba01f9799aef900dcbe4ef12edce2ef559820" exitCode=2 Feb 17 16:53:33 crc kubenswrapper[4874]: I0217 16:53:33.548475 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ldw5c" 
event={"ID":"22a145b3-1fbd-43be-9c83-9a04d4506430","Type":"ContainerDied","Data":"c9b2da86d5b165c9cda4debe4feba01f9799aef900dcbe4ef12edce2ef559820"} Feb 17 16:53:35 crc kubenswrapper[4874]: I0217 16:53:35.069516 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ldw5c" Feb 17 16:53:35 crc kubenswrapper[4874]: I0217 16:53:35.168044 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/22a145b3-1fbd-43be-9c83-9a04d4506430-ssh-key-openstack-edpm-ipam\") pod \"22a145b3-1fbd-43be-9c83-9a04d4506430\" (UID: \"22a145b3-1fbd-43be-9c83-9a04d4506430\") " Feb 17 16:53:35 crc kubenswrapper[4874]: I0217 16:53:35.168146 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/22a145b3-1fbd-43be-9c83-9a04d4506430-inventory\") pod \"22a145b3-1fbd-43be-9c83-9a04d4506430\" (UID: \"22a145b3-1fbd-43be-9c83-9a04d4506430\") " Feb 17 16:53:35 crc kubenswrapper[4874]: I0217 16:53:35.168284 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n7x4\" (UniqueName: \"kubernetes.io/projected/22a145b3-1fbd-43be-9c83-9a04d4506430-kube-api-access-8n7x4\") pod \"22a145b3-1fbd-43be-9c83-9a04d4506430\" (UID: \"22a145b3-1fbd-43be-9c83-9a04d4506430\") " Feb 17 16:53:35 crc kubenswrapper[4874]: I0217 16:53:35.180954 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22a145b3-1fbd-43be-9c83-9a04d4506430-kube-api-access-8n7x4" (OuterVolumeSpecName: "kube-api-access-8n7x4") pod "22a145b3-1fbd-43be-9c83-9a04d4506430" (UID: "22a145b3-1fbd-43be-9c83-9a04d4506430"). InnerVolumeSpecName "kube-api-access-8n7x4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:53:35 crc kubenswrapper[4874]: I0217 16:53:35.205152 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22a145b3-1fbd-43be-9c83-9a04d4506430-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "22a145b3-1fbd-43be-9c83-9a04d4506430" (UID: "22a145b3-1fbd-43be-9c83-9a04d4506430"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:53:35 crc kubenswrapper[4874]: I0217 16:53:35.235115 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22a145b3-1fbd-43be-9c83-9a04d4506430-inventory" (OuterVolumeSpecName: "inventory") pod "22a145b3-1fbd-43be-9c83-9a04d4506430" (UID: "22a145b3-1fbd-43be-9c83-9a04d4506430"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:53:35 crc kubenswrapper[4874]: I0217 16:53:35.270878 4874 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/22a145b3-1fbd-43be-9c83-9a04d4506430-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:53:35 crc kubenswrapper[4874]: I0217 16:53:35.270920 4874 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/22a145b3-1fbd-43be-9c83-9a04d4506430-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 16:53:35 crc kubenswrapper[4874]: I0217 16:53:35.270934 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8n7x4\" (UniqueName: \"kubernetes.io/projected/22a145b3-1fbd-43be-9c83-9a04d4506430-kube-api-access-8n7x4\") on node \"crc\" DevicePath \"\"" Feb 17 16:53:35 crc kubenswrapper[4874]: I0217 16:53:35.601374 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ldw5c" 
event={"ID":"22a145b3-1fbd-43be-9c83-9a04d4506430","Type":"ContainerDied","Data":"06020d29ed795e8f7f90e846160ce3091c6a759eb3f94fb5cc9cc76d24fb4d33"} Feb 17 16:53:35 crc kubenswrapper[4874]: I0217 16:53:35.601756 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06020d29ed795e8f7f90e846160ce3091c6a759eb3f94fb5cc9cc76d24fb4d33" Feb 17 16:53:35 crc kubenswrapper[4874]: I0217 16:53:35.601438 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ldw5c" Feb 17 16:53:39 crc kubenswrapper[4874]: E0217 16:53:39.462261 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:53:45 crc kubenswrapper[4874]: E0217 16:53:45.460761 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:53:52 crc kubenswrapper[4874]: E0217 16:53:52.464630 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:53:57 crc kubenswrapper[4874]: I0217 16:53:57.724535 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:53:57 crc kubenswrapper[4874]: I0217 16:53:57.724909 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:53:58 crc kubenswrapper[4874]: E0217 16:53:58.460013 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:54:04 crc kubenswrapper[4874]: I0217 16:54:04.087246 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-f4zlg"] Feb 17 16:54:04 crc kubenswrapper[4874]: E0217 16:54:04.088300 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9336354d-6c4f-4fd7-a269-0059db00bff1" containerName="extract-content" Feb 17 16:54:04 crc kubenswrapper[4874]: I0217 16:54:04.088313 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="9336354d-6c4f-4fd7-a269-0059db00bff1" containerName="extract-content" Feb 17 16:54:04 crc kubenswrapper[4874]: E0217 16:54:04.088336 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22a145b3-1fbd-43be-9c83-9a04d4506430" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:54:04 crc kubenswrapper[4874]: I0217 16:54:04.088345 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="22a145b3-1fbd-43be-9c83-9a04d4506430" 
containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:54:04 crc kubenswrapper[4874]: E0217 16:54:04.088379 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9336354d-6c4f-4fd7-a269-0059db00bff1" containerName="extract-utilities" Feb 17 16:54:04 crc kubenswrapper[4874]: I0217 16:54:04.088385 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="9336354d-6c4f-4fd7-a269-0059db00bff1" containerName="extract-utilities" Feb 17 16:54:04 crc kubenswrapper[4874]: E0217 16:54:04.088408 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9336354d-6c4f-4fd7-a269-0059db00bff1" containerName="registry-server" Feb 17 16:54:04 crc kubenswrapper[4874]: I0217 16:54:04.088414 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="9336354d-6c4f-4fd7-a269-0059db00bff1" containerName="registry-server" Feb 17 16:54:04 crc kubenswrapper[4874]: I0217 16:54:04.088621 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="22a145b3-1fbd-43be-9c83-9a04d4506430" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:54:04 crc kubenswrapper[4874]: I0217 16:54:04.088648 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="9336354d-6c4f-4fd7-a269-0059db00bff1" containerName="registry-server" Feb 17 16:54:04 crc kubenswrapper[4874]: I0217 16:54:04.090886 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f4zlg" Feb 17 16:54:04 crc kubenswrapper[4874]: I0217 16:54:04.108542 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f4zlg"] Feb 17 16:54:04 crc kubenswrapper[4874]: I0217 16:54:04.879550 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9-utilities\") pod \"community-operators-f4zlg\" (UID: \"c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9\") " pod="openshift-marketplace/community-operators-f4zlg" Feb 17 16:54:04 crc kubenswrapper[4874]: I0217 16:54:04.879628 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9-catalog-content\") pod \"community-operators-f4zlg\" (UID: \"c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9\") " pod="openshift-marketplace/community-operators-f4zlg" Feb 17 16:54:04 crc kubenswrapper[4874]: I0217 16:54:04.879878 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb6mn\" (UniqueName: \"kubernetes.io/projected/c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9-kube-api-access-hb6mn\") pod \"community-operators-f4zlg\" (UID: \"c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9\") " pod="openshift-marketplace/community-operators-f4zlg" Feb 17 16:54:04 crc kubenswrapper[4874]: I0217 16:54:04.982387 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb6mn\" (UniqueName: \"kubernetes.io/projected/c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9-kube-api-access-hb6mn\") pod \"community-operators-f4zlg\" (UID: \"c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9\") " pod="openshift-marketplace/community-operators-f4zlg" Feb 17 16:54:04 crc kubenswrapper[4874]: I0217 16:54:04.982501 4874 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9-utilities\") pod \"community-operators-f4zlg\" (UID: \"c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9\") " pod="openshift-marketplace/community-operators-f4zlg" Feb 17 16:54:04 crc kubenswrapper[4874]: I0217 16:54:04.982566 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9-catalog-content\") pod \"community-operators-f4zlg\" (UID: \"c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9\") " pod="openshift-marketplace/community-operators-f4zlg" Feb 17 16:54:04 crc kubenswrapper[4874]: I0217 16:54:04.983116 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9-catalog-content\") pod \"community-operators-f4zlg\" (UID: \"c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9\") " pod="openshift-marketplace/community-operators-f4zlg" Feb 17 16:54:04 crc kubenswrapper[4874]: I0217 16:54:04.985798 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9-utilities\") pod \"community-operators-f4zlg\" (UID: \"c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9\") " pod="openshift-marketplace/community-operators-f4zlg" Feb 17 16:54:05 crc kubenswrapper[4874]: I0217 16:54:05.025377 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb6mn\" (UniqueName: \"kubernetes.io/projected/c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9-kube-api-access-hb6mn\") pod \"community-operators-f4zlg\" (UID: \"c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9\") " pod="openshift-marketplace/community-operators-f4zlg" Feb 17 16:54:05 crc kubenswrapper[4874]: I0217 16:54:05.032765 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f4zlg" Feb 17 16:54:05 crc kubenswrapper[4874]: W0217 16:54:05.566354 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc59ce3ed_a6b9_4b04_b5cd_dfbfafd9c4e9.slice/crio-17d01af7fa491b95e83ed4f8c084ac2e2793b305445a2d1943a3155a7fbfc901 WatchSource:0}: Error finding container 17d01af7fa491b95e83ed4f8c084ac2e2793b305445a2d1943a3155a7fbfc901: Status 404 returned error can't find the container with id 17d01af7fa491b95e83ed4f8c084ac2e2793b305445a2d1943a3155a7fbfc901 Feb 17 16:54:05 crc kubenswrapper[4874]: I0217 16:54:05.569640 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f4zlg"] Feb 17 16:54:06 crc kubenswrapper[4874]: I0217 16:54:06.007981 4874 generic.go:334] "Generic (PLEG): container finished" podID="c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9" containerID="179c374164f4178aac4b5005e753b75c5686a49cb5d614179f778a7bdf9ae2ad" exitCode=0 Feb 17 16:54:06 crc kubenswrapper[4874]: I0217 16:54:06.008063 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4zlg" event={"ID":"c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9","Type":"ContainerDied","Data":"179c374164f4178aac4b5005e753b75c5686a49cb5d614179f778a7bdf9ae2ad"} Feb 17 16:54:06 crc kubenswrapper[4874]: I0217 16:54:06.008285 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4zlg" event={"ID":"c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9","Type":"ContainerStarted","Data":"17d01af7fa491b95e83ed4f8c084ac2e2793b305445a2d1943a3155a7fbfc901"} Feb 17 16:54:07 crc kubenswrapper[4874]: I0217 16:54:07.026578 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4zlg" 
event={"ID":"c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9","Type":"ContainerStarted","Data":"8e4fe3e79b96951bf92a1147431a20947a4e8fa903c70eca5dcef20ef799e2e1"} Feb 17 16:54:07 crc kubenswrapper[4874]: E0217 16:54:07.463657 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:54:09 crc kubenswrapper[4874]: I0217 16:54:09.071056 4874 generic.go:334] "Generic (PLEG): container finished" podID="c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9" containerID="8e4fe3e79b96951bf92a1147431a20947a4e8fa903c70eca5dcef20ef799e2e1" exitCode=0 Feb 17 16:54:09 crc kubenswrapper[4874]: I0217 16:54:09.071154 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4zlg" event={"ID":"c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9","Type":"ContainerDied","Data":"8e4fe3e79b96951bf92a1147431a20947a4e8fa903c70eca5dcef20ef799e2e1"} Feb 17 16:54:10 crc kubenswrapper[4874]: I0217 16:54:10.086177 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4zlg" event={"ID":"c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9","Type":"ContainerStarted","Data":"5f2ffe51b404066bff41381c8e2b5d661d6968d9e5a7f47973674ef685d999fe"} Feb 17 16:54:10 crc kubenswrapper[4874]: I0217 16:54:10.132803 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-f4zlg" podStartSLOduration=2.633238542 podStartE2EDuration="6.132775425s" podCreationTimestamp="2026-02-17 16:54:04 +0000 UTC" firstStartedPulling="2026-02-17 16:54:06.010905275 +0000 UTC m=+3056.305293876" lastFinishedPulling="2026-02-17 16:54:09.510442158 +0000 UTC m=+3059.804830759" observedRunningTime="2026-02-17 16:54:10.121297781 +0000 UTC 
m=+3060.415686382" watchObservedRunningTime="2026-02-17 16:54:10.132775425 +0000 UTC m=+3060.427164016" Feb 17 16:54:12 crc kubenswrapper[4874]: I0217 16:54:12.038373 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4hs65"] Feb 17 16:54:12 crc kubenswrapper[4874]: I0217 16:54:12.041196 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4hs65" Feb 17 16:54:12 crc kubenswrapper[4874]: I0217 16:54:12.047933 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 16:54:12 crc kubenswrapper[4874]: I0217 16:54:12.048067 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 16:54:12 crc kubenswrapper[4874]: I0217 16:54:12.049856 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 16:54:12 crc kubenswrapper[4874]: I0217 16:54:12.051010 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dsqj4" Feb 17 16:54:12 crc kubenswrapper[4874]: I0217 16:54:12.060287 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4hs65"] Feb 17 16:54:12 crc kubenswrapper[4874]: I0217 16:54:12.089044 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4fc34eca-3b52-4650-9c09-3c17befa87d5-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4hs65\" (UID: \"4fc34eca-3b52-4650-9c09-3c17befa87d5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4hs65" Feb 17 16:54:12 crc kubenswrapper[4874]: I0217 16:54:12.089520 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nd6j\" (UniqueName: \"kubernetes.io/projected/4fc34eca-3b52-4650-9c09-3c17befa87d5-kube-api-access-2nd6j\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4hs65\" (UID: \"4fc34eca-3b52-4650-9c09-3c17befa87d5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4hs65" Feb 17 16:54:12 crc kubenswrapper[4874]: I0217 16:54:12.089842 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fc34eca-3b52-4650-9c09-3c17befa87d5-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4hs65\" (UID: \"4fc34eca-3b52-4650-9c09-3c17befa87d5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4hs65" Feb 17 16:54:12 crc kubenswrapper[4874]: I0217 16:54:12.192605 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nd6j\" (UniqueName: \"kubernetes.io/projected/4fc34eca-3b52-4650-9c09-3c17befa87d5-kube-api-access-2nd6j\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4hs65\" (UID: \"4fc34eca-3b52-4650-9c09-3c17befa87d5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4hs65" Feb 17 16:54:12 crc kubenswrapper[4874]: I0217 16:54:12.192752 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fc34eca-3b52-4650-9c09-3c17befa87d5-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4hs65\" (UID: \"4fc34eca-3b52-4650-9c09-3c17befa87d5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4hs65" Feb 17 16:54:12 crc kubenswrapper[4874]: I0217 16:54:12.192812 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/4fc34eca-3b52-4650-9c09-3c17befa87d5-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4hs65\" (UID: \"4fc34eca-3b52-4650-9c09-3c17befa87d5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4hs65" Feb 17 16:54:12 crc kubenswrapper[4874]: I0217 16:54:12.199300 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fc34eca-3b52-4650-9c09-3c17befa87d5-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4hs65\" (UID: \"4fc34eca-3b52-4650-9c09-3c17befa87d5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4hs65" Feb 17 16:54:12 crc kubenswrapper[4874]: I0217 16:54:12.199501 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4fc34eca-3b52-4650-9c09-3c17befa87d5-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4hs65\" (UID: \"4fc34eca-3b52-4650-9c09-3c17befa87d5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4hs65" Feb 17 16:54:12 crc kubenswrapper[4874]: I0217 16:54:12.209639 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nd6j\" (UniqueName: \"kubernetes.io/projected/4fc34eca-3b52-4650-9c09-3c17befa87d5-kube-api-access-2nd6j\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4hs65\" (UID: \"4fc34eca-3b52-4650-9c09-3c17befa87d5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4hs65" Feb 17 16:54:12 crc kubenswrapper[4874]: I0217 16:54:12.364047 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4hs65" Feb 17 16:54:13 crc kubenswrapper[4874]: I0217 16:54:13.012265 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4hs65"] Feb 17 16:54:13 crc kubenswrapper[4874]: I0217 16:54:13.121280 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4hs65" event={"ID":"4fc34eca-3b52-4650-9c09-3c17befa87d5","Type":"ContainerStarted","Data":"18d799a11554c8430d9483fcc116c56b832fe0bcb4938eef5bbb001321633643"} Feb 17 16:54:13 crc kubenswrapper[4874]: E0217 16:54:13.470186 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:54:14 crc kubenswrapper[4874]: I0217 16:54:14.133577 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4hs65" event={"ID":"4fc34eca-3b52-4650-9c09-3c17befa87d5","Type":"ContainerStarted","Data":"58c8e708626f34b76fdd8e51890f7cbd12b4ddf809fcb7c6c83b5a7ae57840f8"} Feb 17 16:54:14 crc kubenswrapper[4874]: I0217 16:54:14.161999 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4hs65" podStartSLOduration=1.720634806 podStartE2EDuration="2.16198155s" podCreationTimestamp="2026-02-17 16:54:12 +0000 UTC" firstStartedPulling="2026-02-17 16:54:13.030401957 +0000 UTC m=+3063.324790518" lastFinishedPulling="2026-02-17 16:54:13.471748681 +0000 UTC m=+3063.766137262" observedRunningTime="2026-02-17 16:54:14.154406502 +0000 UTC m=+3064.448795073" watchObservedRunningTime="2026-02-17 16:54:14.16198155 
+0000 UTC m=+3064.456370121" Feb 17 16:54:15 crc kubenswrapper[4874]: I0217 16:54:15.033516 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-f4zlg" Feb 17 16:54:15 crc kubenswrapper[4874]: I0217 16:54:15.033577 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-f4zlg" Feb 17 16:54:15 crc kubenswrapper[4874]: I0217 16:54:15.081183 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-f4zlg" Feb 17 16:54:15 crc kubenswrapper[4874]: I0217 16:54:15.214568 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-f4zlg" Feb 17 16:54:15 crc kubenswrapper[4874]: I0217 16:54:15.325303 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f4zlg"] Feb 17 16:54:17 crc kubenswrapper[4874]: I0217 16:54:17.167109 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-f4zlg" podUID="c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9" containerName="registry-server" containerID="cri-o://5f2ffe51b404066bff41381c8e2b5d661d6968d9e5a7f47973674ef685d999fe" gracePeriod=2 Feb 17 16:54:17 crc kubenswrapper[4874]: I0217 16:54:17.836763 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f4zlg" Feb 17 16:54:17 crc kubenswrapper[4874]: I0217 16:54:17.953702 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb6mn\" (UniqueName: \"kubernetes.io/projected/c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9-kube-api-access-hb6mn\") pod \"c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9\" (UID: \"c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9\") " Feb 17 16:54:17 crc kubenswrapper[4874]: I0217 16:54:17.954044 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9-catalog-content\") pod \"c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9\" (UID: \"c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9\") " Feb 17 16:54:17 crc kubenswrapper[4874]: I0217 16:54:17.954276 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9-utilities\") pod \"c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9\" (UID: \"c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9\") " Feb 17 16:54:17 crc kubenswrapper[4874]: I0217 16:54:17.955300 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9-utilities" (OuterVolumeSpecName: "utilities") pod "c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9" (UID: "c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:54:17 crc kubenswrapper[4874]: I0217 16:54:17.955464 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:54:17 crc kubenswrapper[4874]: I0217 16:54:17.961264 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9-kube-api-access-hb6mn" (OuterVolumeSpecName: "kube-api-access-hb6mn") pod "c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9" (UID: "c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9"). InnerVolumeSpecName "kube-api-access-hb6mn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:54:18 crc kubenswrapper[4874]: I0217 16:54:18.002883 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9" (UID: "c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:54:18 crc kubenswrapper[4874]: I0217 16:54:18.057366 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:54:18 crc kubenswrapper[4874]: I0217 16:54:18.057401 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb6mn\" (UniqueName: \"kubernetes.io/projected/c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9-kube-api-access-hb6mn\") on node \"crc\" DevicePath \"\"" Feb 17 16:54:18 crc kubenswrapper[4874]: I0217 16:54:18.184418 4874 generic.go:334] "Generic (PLEG): container finished" podID="c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9" containerID="5f2ffe51b404066bff41381c8e2b5d661d6968d9e5a7f47973674ef685d999fe" exitCode=0 Feb 17 16:54:18 crc kubenswrapper[4874]: I0217 16:54:18.184492 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4zlg" event={"ID":"c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9","Type":"ContainerDied","Data":"5f2ffe51b404066bff41381c8e2b5d661d6968d9e5a7f47973674ef685d999fe"} Feb 17 16:54:18 crc kubenswrapper[4874]: I0217 16:54:18.184575 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4zlg" event={"ID":"c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9","Type":"ContainerDied","Data":"17d01af7fa491b95e83ed4f8c084ac2e2793b305445a2d1943a3155a7fbfc901"} Feb 17 16:54:18 crc kubenswrapper[4874]: I0217 16:54:18.184608 4874 scope.go:117] "RemoveContainer" containerID="5f2ffe51b404066bff41381c8e2b5d661d6968d9e5a7f47973674ef685d999fe" Feb 17 16:54:18 crc kubenswrapper[4874]: I0217 16:54:18.185303 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f4zlg" Feb 17 16:54:18 crc kubenswrapper[4874]: I0217 16:54:18.215436 4874 scope.go:117] "RemoveContainer" containerID="8e4fe3e79b96951bf92a1147431a20947a4e8fa903c70eca5dcef20ef799e2e1" Feb 17 16:54:18 crc kubenswrapper[4874]: I0217 16:54:18.239532 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f4zlg"] Feb 17 16:54:18 crc kubenswrapper[4874]: I0217 16:54:18.250415 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-f4zlg"] Feb 17 16:54:18 crc kubenswrapper[4874]: I0217 16:54:18.273110 4874 scope.go:117] "RemoveContainer" containerID="179c374164f4178aac4b5005e753b75c5686a49cb5d614179f778a7bdf9ae2ad" Feb 17 16:54:18 crc kubenswrapper[4874]: I0217 16:54:18.324139 4874 scope.go:117] "RemoveContainer" containerID="5f2ffe51b404066bff41381c8e2b5d661d6968d9e5a7f47973674ef685d999fe" Feb 17 16:54:18 crc kubenswrapper[4874]: E0217 16:54:18.324630 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f2ffe51b404066bff41381c8e2b5d661d6968d9e5a7f47973674ef685d999fe\": container with ID starting with 5f2ffe51b404066bff41381c8e2b5d661d6968d9e5a7f47973674ef685d999fe not found: ID does not exist" containerID="5f2ffe51b404066bff41381c8e2b5d661d6968d9e5a7f47973674ef685d999fe" Feb 17 16:54:18 crc kubenswrapper[4874]: I0217 16:54:18.324683 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f2ffe51b404066bff41381c8e2b5d661d6968d9e5a7f47973674ef685d999fe"} err="failed to get container status \"5f2ffe51b404066bff41381c8e2b5d661d6968d9e5a7f47973674ef685d999fe\": rpc error: code = NotFound desc = could not find container \"5f2ffe51b404066bff41381c8e2b5d661d6968d9e5a7f47973674ef685d999fe\": container with ID starting with 5f2ffe51b404066bff41381c8e2b5d661d6968d9e5a7f47973674ef685d999fe not 
found: ID does not exist" Feb 17 16:54:18 crc kubenswrapper[4874]: I0217 16:54:18.324710 4874 scope.go:117] "RemoveContainer" containerID="8e4fe3e79b96951bf92a1147431a20947a4e8fa903c70eca5dcef20ef799e2e1" Feb 17 16:54:18 crc kubenswrapper[4874]: E0217 16:54:18.325164 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e4fe3e79b96951bf92a1147431a20947a4e8fa903c70eca5dcef20ef799e2e1\": container with ID starting with 8e4fe3e79b96951bf92a1147431a20947a4e8fa903c70eca5dcef20ef799e2e1 not found: ID does not exist" containerID="8e4fe3e79b96951bf92a1147431a20947a4e8fa903c70eca5dcef20ef799e2e1" Feb 17 16:54:18 crc kubenswrapper[4874]: I0217 16:54:18.325193 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e4fe3e79b96951bf92a1147431a20947a4e8fa903c70eca5dcef20ef799e2e1"} err="failed to get container status \"8e4fe3e79b96951bf92a1147431a20947a4e8fa903c70eca5dcef20ef799e2e1\": rpc error: code = NotFound desc = could not find container \"8e4fe3e79b96951bf92a1147431a20947a4e8fa903c70eca5dcef20ef799e2e1\": container with ID starting with 8e4fe3e79b96951bf92a1147431a20947a4e8fa903c70eca5dcef20ef799e2e1 not found: ID does not exist" Feb 17 16:54:18 crc kubenswrapper[4874]: I0217 16:54:18.325208 4874 scope.go:117] "RemoveContainer" containerID="179c374164f4178aac4b5005e753b75c5686a49cb5d614179f778a7bdf9ae2ad" Feb 17 16:54:18 crc kubenswrapper[4874]: E0217 16:54:18.325463 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"179c374164f4178aac4b5005e753b75c5686a49cb5d614179f778a7bdf9ae2ad\": container with ID starting with 179c374164f4178aac4b5005e753b75c5686a49cb5d614179f778a7bdf9ae2ad not found: ID does not exist" containerID="179c374164f4178aac4b5005e753b75c5686a49cb5d614179f778a7bdf9ae2ad" Feb 17 16:54:18 crc kubenswrapper[4874]: I0217 16:54:18.325523 4874 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"179c374164f4178aac4b5005e753b75c5686a49cb5d614179f778a7bdf9ae2ad"} err="failed to get container status \"179c374164f4178aac4b5005e753b75c5686a49cb5d614179f778a7bdf9ae2ad\": rpc error: code = NotFound desc = could not find container \"179c374164f4178aac4b5005e753b75c5686a49cb5d614179f778a7bdf9ae2ad\": container with ID starting with 179c374164f4178aac4b5005e753b75c5686a49cb5d614179f778a7bdf9ae2ad not found: ID does not exist" Feb 17 16:54:18 crc kubenswrapper[4874]: E0217 16:54:18.459789 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:54:18 crc kubenswrapper[4874]: I0217 16:54:18.481592 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9" path="/var/lib/kubelet/pods/c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9/volumes" Feb 17 16:54:23 crc kubenswrapper[4874]: I0217 16:54:23.363599 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mgmcq"] Feb 17 16:54:23 crc kubenswrapper[4874]: E0217 16:54:23.364784 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9" containerName="extract-content" Feb 17 16:54:23 crc kubenswrapper[4874]: I0217 16:54:23.364800 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9" containerName="extract-content" Feb 17 16:54:23 crc kubenswrapper[4874]: E0217 16:54:23.364812 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9" containerName="registry-server" Feb 17 16:54:23 crc kubenswrapper[4874]: 
I0217 16:54:23.364819 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9" containerName="registry-server" Feb 17 16:54:23 crc kubenswrapper[4874]: E0217 16:54:23.364845 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9" containerName="extract-utilities" Feb 17 16:54:23 crc kubenswrapper[4874]: I0217 16:54:23.364854 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9" containerName="extract-utilities" Feb 17 16:54:23 crc kubenswrapper[4874]: I0217 16:54:23.365166 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="c59ce3ed-a6b9-4b04-b5cd-dfbfafd9c4e9" containerName="registry-server" Feb 17 16:54:23 crc kubenswrapper[4874]: I0217 16:54:23.367192 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mgmcq" Feb 17 16:54:23 crc kubenswrapper[4874]: I0217 16:54:23.385027 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mgmcq"] Feb 17 16:54:23 crc kubenswrapper[4874]: I0217 16:54:23.514182 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356-catalog-content\") pod \"redhat-marketplace-mgmcq\" (UID: \"155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356\") " pod="openshift-marketplace/redhat-marketplace-mgmcq" Feb 17 16:54:23 crc kubenswrapper[4874]: I0217 16:54:23.514286 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356-utilities\") pod \"redhat-marketplace-mgmcq\" (UID: \"155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356\") " pod="openshift-marketplace/redhat-marketplace-mgmcq" Feb 17 16:54:23 crc kubenswrapper[4874]: I0217 16:54:23.514332 
4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7blt9\" (UniqueName: \"kubernetes.io/projected/155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356-kube-api-access-7blt9\") pod \"redhat-marketplace-mgmcq\" (UID: \"155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356\") " pod="openshift-marketplace/redhat-marketplace-mgmcq" Feb 17 16:54:23 crc kubenswrapper[4874]: I0217 16:54:23.616246 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356-catalog-content\") pod \"redhat-marketplace-mgmcq\" (UID: \"155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356\") " pod="openshift-marketplace/redhat-marketplace-mgmcq" Feb 17 16:54:23 crc kubenswrapper[4874]: I0217 16:54:23.616444 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356-utilities\") pod \"redhat-marketplace-mgmcq\" (UID: \"155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356\") " pod="openshift-marketplace/redhat-marketplace-mgmcq" Feb 17 16:54:23 crc kubenswrapper[4874]: I0217 16:54:23.616513 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7blt9\" (UniqueName: \"kubernetes.io/projected/155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356-kube-api-access-7blt9\") pod \"redhat-marketplace-mgmcq\" (UID: \"155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356\") " pod="openshift-marketplace/redhat-marketplace-mgmcq" Feb 17 16:54:23 crc kubenswrapper[4874]: I0217 16:54:23.616847 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356-catalog-content\") pod \"redhat-marketplace-mgmcq\" (UID: \"155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356\") " pod="openshift-marketplace/redhat-marketplace-mgmcq" Feb 17 16:54:23 crc kubenswrapper[4874]: I0217 16:54:23.618770 
4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356-utilities\") pod \"redhat-marketplace-mgmcq\" (UID: \"155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356\") " pod="openshift-marketplace/redhat-marketplace-mgmcq" Feb 17 16:54:23 crc kubenswrapper[4874]: I0217 16:54:23.653639 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7blt9\" (UniqueName: \"kubernetes.io/projected/155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356-kube-api-access-7blt9\") pod \"redhat-marketplace-mgmcq\" (UID: \"155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356\") " pod="openshift-marketplace/redhat-marketplace-mgmcq" Feb 17 16:54:23 crc kubenswrapper[4874]: I0217 16:54:23.712293 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mgmcq" Feb 17 16:54:24 crc kubenswrapper[4874]: I0217 16:54:24.281884 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mgmcq"] Feb 17 16:54:25 crc kubenswrapper[4874]: I0217 16:54:25.293416 4874 generic.go:334] "Generic (PLEG): container finished" podID="155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356" containerID="78f0d128af242e0495b63fe638b892a562c644d89abddb02ec9bd74d72363f8f" exitCode=0 Feb 17 16:54:25 crc kubenswrapper[4874]: I0217 16:54:25.293672 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mgmcq" event={"ID":"155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356","Type":"ContainerDied","Data":"78f0d128af242e0495b63fe638b892a562c644d89abddb02ec9bd74d72363f8f"} Feb 17 16:54:25 crc kubenswrapper[4874]: I0217 16:54:25.293839 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mgmcq" event={"ID":"155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356","Type":"ContainerStarted","Data":"81f8fb8d4863283b71b61d8d7b9946959c0aabb56a2b5b9caee4fdc41f931a69"} Feb 17 16:54:26 crc 
kubenswrapper[4874]: I0217 16:54:26.310438 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mgmcq" event={"ID":"155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356","Type":"ContainerStarted","Data":"ad0be811a64776b1981738c9b0b5ffc54d31089a3401891a8a6e8c3414f85a53"} Feb 17 16:54:27 crc kubenswrapper[4874]: I0217 16:54:27.332454 4874 generic.go:334] "Generic (PLEG): container finished" podID="155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356" containerID="ad0be811a64776b1981738c9b0b5ffc54d31089a3401891a8a6e8c3414f85a53" exitCode=0 Feb 17 16:54:27 crc kubenswrapper[4874]: I0217 16:54:27.332618 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mgmcq" event={"ID":"155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356","Type":"ContainerDied","Data":"ad0be811a64776b1981738c9b0b5ffc54d31089a3401891a8a6e8c3414f85a53"} Feb 17 16:54:27 crc kubenswrapper[4874]: I0217 16:54:27.337880 4874 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:54:27 crc kubenswrapper[4874]: E0217 16:54:27.461186 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:54:27 crc kubenswrapper[4874]: I0217 16:54:27.724694 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:54:27 crc kubenswrapper[4874]: I0217 16:54:27.724751 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" 
podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:54:28 crc kubenswrapper[4874]: I0217 16:54:28.348163 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mgmcq" event={"ID":"155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356","Type":"ContainerStarted","Data":"7a2e2a09a64112387022530138e3f81f1eea9139328dc9b06c7da1bd230c0f39"} Feb 17 16:54:28 crc kubenswrapper[4874]: I0217 16:54:28.380883 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mgmcq" podStartSLOduration=2.964504731 podStartE2EDuration="5.380864322s" podCreationTimestamp="2026-02-17 16:54:23 +0000 UTC" firstStartedPulling="2026-02-17 16:54:25.296837241 +0000 UTC m=+3075.591225842" lastFinishedPulling="2026-02-17 16:54:27.713196872 +0000 UTC m=+3078.007585433" observedRunningTime="2026-02-17 16:54:28.368392833 +0000 UTC m=+3078.662781394" watchObservedRunningTime="2026-02-17 16:54:28.380864322 +0000 UTC m=+3078.675252883" Feb 17 16:54:30 crc kubenswrapper[4874]: E0217 16:54:30.595709 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:54:30 crc kubenswrapper[4874]: E0217 16:54:30.595905 4874 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:54:30 crc kubenswrapper[4874]: E0217 16:54:30.596032 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n699h646hf7h59dhdch5d8h679h9dhdch5c7hd5h5bch655h5bfh674h596h5d6h64dh65bh694h67fh66dh5bdhd7h568h697h58bh5b4h59fh694h584h656q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubP
ath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6zkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(cc29c300-b515-47d8-9326-1839ed7772b4): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 17 16:54:30 crc kubenswrapper[4874]: E0217 16:54:30.597369 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:54:33 crc kubenswrapper[4874]: I0217 16:54:33.712745 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mgmcq" Feb 17 16:54:33 crc kubenswrapper[4874]: I0217 16:54:33.713864 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mgmcq" Feb 17 16:54:33 crc kubenswrapper[4874]: I0217 16:54:33.760989 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mgmcq" Feb 17 16:54:34 crc kubenswrapper[4874]: I0217 16:54:34.479899 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mgmcq" Feb 17 16:54:35 crc kubenswrapper[4874]: I0217 16:54:35.003924 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mgmcq"] Feb 17 16:54:36 crc kubenswrapper[4874]: I0217 16:54:36.439968 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mgmcq" podUID="155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356" containerName="registry-server" containerID="cri-o://7a2e2a09a64112387022530138e3f81f1eea9139328dc9b06c7da1bd230c0f39" gracePeriod=2 Feb 17 16:54:37 crc 
kubenswrapper[4874]: I0217 16:54:37.029751 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mgmcq" Feb 17 16:54:37 crc kubenswrapper[4874]: I0217 16:54:37.102276 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7blt9\" (UniqueName: \"kubernetes.io/projected/155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356-kube-api-access-7blt9\") pod \"155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356\" (UID: \"155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356\") " Feb 17 16:54:37 crc kubenswrapper[4874]: I0217 16:54:37.102658 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356-catalog-content\") pod \"155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356\" (UID: \"155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356\") " Feb 17 16:54:37 crc kubenswrapper[4874]: I0217 16:54:37.102761 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356-utilities\") pod \"155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356\" (UID: \"155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356\") " Feb 17 16:54:37 crc kubenswrapper[4874]: I0217 16:54:37.104006 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356-utilities" (OuterVolumeSpecName: "utilities") pod "155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356" (UID: "155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:54:37 crc kubenswrapper[4874]: I0217 16:54:37.116385 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356-kube-api-access-7blt9" (OuterVolumeSpecName: "kube-api-access-7blt9") pod "155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356" (UID: "155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356"). InnerVolumeSpecName "kube-api-access-7blt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:54:37 crc kubenswrapper[4874]: I0217 16:54:37.131066 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356" (UID: "155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:54:37 crc kubenswrapper[4874]: I0217 16:54:37.205220 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7blt9\" (UniqueName: \"kubernetes.io/projected/155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356-kube-api-access-7blt9\") on node \"crc\" DevicePath \"\"" Feb 17 16:54:37 crc kubenswrapper[4874]: I0217 16:54:37.205254 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:54:37 crc kubenswrapper[4874]: I0217 16:54:37.205264 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:54:37 crc kubenswrapper[4874]: I0217 16:54:37.452771 4874 generic.go:334] "Generic (PLEG): container finished" podID="155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356" 
containerID="7a2e2a09a64112387022530138e3f81f1eea9139328dc9b06c7da1bd230c0f39" exitCode=0 Feb 17 16:54:37 crc kubenswrapper[4874]: I0217 16:54:37.452821 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mgmcq" event={"ID":"155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356","Type":"ContainerDied","Data":"7a2e2a09a64112387022530138e3f81f1eea9139328dc9b06c7da1bd230c0f39"} Feb 17 16:54:37 crc kubenswrapper[4874]: I0217 16:54:37.452852 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mgmcq" event={"ID":"155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356","Type":"ContainerDied","Data":"81f8fb8d4863283b71b61d8d7b9946959c0aabb56a2b5b9caee4fdc41f931a69"} Feb 17 16:54:37 crc kubenswrapper[4874]: I0217 16:54:37.452849 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mgmcq" Feb 17 16:54:37 crc kubenswrapper[4874]: I0217 16:54:37.452872 4874 scope.go:117] "RemoveContainer" containerID="7a2e2a09a64112387022530138e3f81f1eea9139328dc9b06c7da1bd230c0f39" Feb 17 16:54:37 crc kubenswrapper[4874]: I0217 16:54:37.486342 4874 scope.go:117] "RemoveContainer" containerID="ad0be811a64776b1981738c9b0b5ffc54d31089a3401891a8a6e8c3414f85a53" Feb 17 16:54:37 crc kubenswrapper[4874]: I0217 16:54:37.493370 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mgmcq"] Feb 17 16:54:37 crc kubenswrapper[4874]: I0217 16:54:37.512748 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mgmcq"] Feb 17 16:54:37 crc kubenswrapper[4874]: I0217 16:54:37.522389 4874 scope.go:117] "RemoveContainer" containerID="78f0d128af242e0495b63fe638b892a562c644d89abddb02ec9bd74d72363f8f" Feb 17 16:54:37 crc kubenswrapper[4874]: I0217 16:54:37.577505 4874 scope.go:117] "RemoveContainer" containerID="7a2e2a09a64112387022530138e3f81f1eea9139328dc9b06c7da1bd230c0f39" Feb 17 
16:54:37 crc kubenswrapper[4874]: E0217 16:54:37.578052 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a2e2a09a64112387022530138e3f81f1eea9139328dc9b06c7da1bd230c0f39\": container with ID starting with 7a2e2a09a64112387022530138e3f81f1eea9139328dc9b06c7da1bd230c0f39 not found: ID does not exist" containerID="7a2e2a09a64112387022530138e3f81f1eea9139328dc9b06c7da1bd230c0f39" Feb 17 16:54:37 crc kubenswrapper[4874]: I0217 16:54:37.578116 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a2e2a09a64112387022530138e3f81f1eea9139328dc9b06c7da1bd230c0f39"} err="failed to get container status \"7a2e2a09a64112387022530138e3f81f1eea9139328dc9b06c7da1bd230c0f39\": rpc error: code = NotFound desc = could not find container \"7a2e2a09a64112387022530138e3f81f1eea9139328dc9b06c7da1bd230c0f39\": container with ID starting with 7a2e2a09a64112387022530138e3f81f1eea9139328dc9b06c7da1bd230c0f39 not found: ID does not exist" Feb 17 16:54:37 crc kubenswrapper[4874]: I0217 16:54:37.578140 4874 scope.go:117] "RemoveContainer" containerID="ad0be811a64776b1981738c9b0b5ffc54d31089a3401891a8a6e8c3414f85a53" Feb 17 16:54:37 crc kubenswrapper[4874]: E0217 16:54:37.578541 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad0be811a64776b1981738c9b0b5ffc54d31089a3401891a8a6e8c3414f85a53\": container with ID starting with ad0be811a64776b1981738c9b0b5ffc54d31089a3401891a8a6e8c3414f85a53 not found: ID does not exist" containerID="ad0be811a64776b1981738c9b0b5ffc54d31089a3401891a8a6e8c3414f85a53" Feb 17 16:54:37 crc kubenswrapper[4874]: I0217 16:54:37.578584 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad0be811a64776b1981738c9b0b5ffc54d31089a3401891a8a6e8c3414f85a53"} err="failed to get container status 
\"ad0be811a64776b1981738c9b0b5ffc54d31089a3401891a8a6e8c3414f85a53\": rpc error: code = NotFound desc = could not find container \"ad0be811a64776b1981738c9b0b5ffc54d31089a3401891a8a6e8c3414f85a53\": container with ID starting with ad0be811a64776b1981738c9b0b5ffc54d31089a3401891a8a6e8c3414f85a53 not found: ID does not exist" Feb 17 16:54:37 crc kubenswrapper[4874]: I0217 16:54:37.578597 4874 scope.go:117] "RemoveContainer" containerID="78f0d128af242e0495b63fe638b892a562c644d89abddb02ec9bd74d72363f8f" Feb 17 16:54:37 crc kubenswrapper[4874]: E0217 16:54:37.578910 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78f0d128af242e0495b63fe638b892a562c644d89abddb02ec9bd74d72363f8f\": container with ID starting with 78f0d128af242e0495b63fe638b892a562c644d89abddb02ec9bd74d72363f8f not found: ID does not exist" containerID="78f0d128af242e0495b63fe638b892a562c644d89abddb02ec9bd74d72363f8f" Feb 17 16:54:37 crc kubenswrapper[4874]: I0217 16:54:37.578944 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78f0d128af242e0495b63fe638b892a562c644d89abddb02ec9bd74d72363f8f"} err="failed to get container status \"78f0d128af242e0495b63fe638b892a562c644d89abddb02ec9bd74d72363f8f\": rpc error: code = NotFound desc = could not find container \"78f0d128af242e0495b63fe638b892a562c644d89abddb02ec9bd74d72363f8f\": container with ID starting with 78f0d128af242e0495b63fe638b892a562c644d89abddb02ec9bd74d72363f8f not found: ID does not exist" Feb 17 16:54:38 crc kubenswrapper[4874]: I0217 16:54:38.472923 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356" path="/var/lib/kubelet/pods/155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356/volumes" Feb 17 16:54:41 crc kubenswrapper[4874]: E0217 16:54:41.564324 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:54:41 crc kubenswrapper[4874]: E0217 16:54:41.564936 4874 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:54:41 crc kubenswrapper[4874]: E0217 16:54:41.565100 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtgnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-ddhb8_openstack(122736d5-78f5-42dc-b6ab-343724bac19d): ErrImagePull: initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:54:41 crc kubenswrapper[4874]: E0217 16:54:41.566289 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:54:44 crc kubenswrapper[4874]: E0217 16:54:44.460750 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:54:56 crc kubenswrapper[4874]: E0217 16:54:56.459660 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:54:57 crc kubenswrapper[4874]: I0217 16:54:57.724791 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:54:57 crc kubenswrapper[4874]: I0217 16:54:57.725126 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:54:57 crc kubenswrapper[4874]: I0217 16:54:57.725174 4874 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" Feb 17 16:54:57 crc kubenswrapper[4874]: I0217 16:54:57.726111 4874 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d5c223a2be3527987f38fd01537a80f90c508c61a8c6980341fe8404012e1d41"} pod="openshift-machine-config-operator/machine-config-daemon-cccdg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:54:57 crc kubenswrapper[4874]: I0217 16:54:57.726166 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" containerID="cri-o://d5c223a2be3527987f38fd01537a80f90c508c61a8c6980341fe8404012e1d41" gracePeriod=600 Feb 17 16:54:57 crc kubenswrapper[4874]: E0217 16:54:57.863747 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:54:58 crc 
kubenswrapper[4874]: E0217 16:54:58.461530 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:54:58 crc kubenswrapper[4874]: I0217 16:54:58.711818 4874 generic.go:334] "Generic (PLEG): container finished" podID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerID="d5c223a2be3527987f38fd01537a80f90c508c61a8c6980341fe8404012e1d41" exitCode=0 Feb 17 16:54:58 crc kubenswrapper[4874]: I0217 16:54:58.711878 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerDied","Data":"d5c223a2be3527987f38fd01537a80f90c508c61a8c6980341fe8404012e1d41"} Feb 17 16:54:58 crc kubenswrapper[4874]: I0217 16:54:58.711968 4874 scope.go:117] "RemoveContainer" containerID="14f801991ed13f0df4772429c4adbfe835f5a9746c0f47d04396777f4307053f" Feb 17 16:54:58 crc kubenswrapper[4874]: I0217 16:54:58.713100 4874 scope.go:117] "RemoveContainer" containerID="d5c223a2be3527987f38fd01537a80f90c508c61a8c6980341fe8404012e1d41" Feb 17 16:54:58 crc kubenswrapper[4874]: E0217 16:54:58.713472 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:55:09 crc kubenswrapper[4874]: E0217 16:55:09.460682 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:55:11 crc kubenswrapper[4874]: E0217 16:55:11.458771 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:55:12 crc kubenswrapper[4874]: I0217 16:55:12.458198 4874 scope.go:117] "RemoveContainer" containerID="d5c223a2be3527987f38fd01537a80f90c508c61a8c6980341fe8404012e1d41" Feb 17 16:55:12 crc kubenswrapper[4874]: E0217 16:55:12.458723 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:55:22 crc kubenswrapper[4874]: E0217 16:55:22.459633 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:55:22 crc kubenswrapper[4874]: E0217 16:55:22.459812 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:55:26 crc kubenswrapper[4874]: I0217 16:55:26.458248 4874 scope.go:117] "RemoveContainer" containerID="d5c223a2be3527987f38fd01537a80f90c508c61a8c6980341fe8404012e1d41" Feb 17 16:55:26 crc kubenswrapper[4874]: E0217 16:55:26.459292 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:55:33 crc kubenswrapper[4874]: E0217 16:55:33.460119 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:55:33 crc kubenswrapper[4874]: E0217 16:55:33.460217 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:55:41 crc kubenswrapper[4874]: I0217 16:55:41.458499 4874 scope.go:117] "RemoveContainer" containerID="d5c223a2be3527987f38fd01537a80f90c508c61a8c6980341fe8404012e1d41" Feb 17 16:55:41 crc kubenswrapper[4874]: E0217 16:55:41.459684 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:55:45 crc kubenswrapper[4874]: E0217 16:55:45.460713 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:55:46 crc kubenswrapper[4874]: E0217 16:55:46.459909 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:55:55 crc kubenswrapper[4874]: I0217 16:55:55.459441 4874 scope.go:117] "RemoveContainer" containerID="d5c223a2be3527987f38fd01537a80f90c508c61a8c6980341fe8404012e1d41" Feb 17 16:55:55 crc kubenswrapper[4874]: E0217 16:55:55.460559 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:55:57 crc kubenswrapper[4874]: E0217 16:55:57.462826 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" 
with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:55:59 crc kubenswrapper[4874]: E0217 16:55:59.458967 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:56:07 crc kubenswrapper[4874]: I0217 16:56:07.457814 4874 scope.go:117] "RemoveContainer" containerID="d5c223a2be3527987f38fd01537a80f90c508c61a8c6980341fe8404012e1d41" Feb 17 16:56:07 crc kubenswrapper[4874]: E0217 16:56:07.459054 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:56:10 crc kubenswrapper[4874]: E0217 16:56:10.476846 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:56:11 crc kubenswrapper[4874]: E0217 16:56:11.460881 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:56:18 crc kubenswrapper[4874]: I0217 16:56:18.458503 4874 scope.go:117] "RemoveContainer" containerID="d5c223a2be3527987f38fd01537a80f90c508c61a8c6980341fe8404012e1d41" Feb 17 16:56:18 crc kubenswrapper[4874]: E0217 16:56:18.459643 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:56:21 crc kubenswrapper[4874]: E0217 16:56:21.460843 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:56:24 crc kubenswrapper[4874]: I0217 16:56:24.066490 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zx7tx"] Feb 17 16:56:24 crc kubenswrapper[4874]: E0217 16:56:24.067979 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356" containerName="registry-server" Feb 17 16:56:24 crc kubenswrapper[4874]: I0217 16:56:24.068049 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356" containerName="registry-server" Feb 17 16:56:24 crc kubenswrapper[4874]: E0217 16:56:24.068208 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356" 
containerName="extract-utilities" Feb 17 16:56:24 crc kubenswrapper[4874]: I0217 16:56:24.068273 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356" containerName="extract-utilities" Feb 17 16:56:24 crc kubenswrapper[4874]: E0217 16:56:24.068335 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356" containerName="extract-content" Feb 17 16:56:24 crc kubenswrapper[4874]: I0217 16:56:24.068395 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356" containerName="extract-content" Feb 17 16:56:24 crc kubenswrapper[4874]: I0217 16:56:24.068713 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="155c0fb4-55f7-4d6e-a2ac-dba7b9cd3356" containerName="registry-server" Feb 17 16:56:24 crc kubenswrapper[4874]: I0217 16:56:24.070431 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zx7tx" Feb 17 16:56:24 crc kubenswrapper[4874]: I0217 16:56:24.088016 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7a95dff-118d-4545-b9b9-75a9ad91ab33-utilities\") pod \"certified-operators-zx7tx\" (UID: \"a7a95dff-118d-4545-b9b9-75a9ad91ab33\") " pod="openshift-marketplace/certified-operators-zx7tx" Feb 17 16:56:24 crc kubenswrapper[4874]: I0217 16:56:24.088121 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7a95dff-118d-4545-b9b9-75a9ad91ab33-catalog-content\") pod \"certified-operators-zx7tx\" (UID: \"a7a95dff-118d-4545-b9b9-75a9ad91ab33\") " pod="openshift-marketplace/certified-operators-zx7tx" Feb 17 16:56:24 crc kubenswrapper[4874]: I0217 16:56:24.088707 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-kst7x\" (UniqueName: \"kubernetes.io/projected/a7a95dff-118d-4545-b9b9-75a9ad91ab33-kube-api-access-kst7x\") pod \"certified-operators-zx7tx\" (UID: \"a7a95dff-118d-4545-b9b9-75a9ad91ab33\") " pod="openshift-marketplace/certified-operators-zx7tx" Feb 17 16:56:24 crc kubenswrapper[4874]: I0217 16:56:24.095399 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zx7tx"] Feb 17 16:56:24 crc kubenswrapper[4874]: I0217 16:56:24.191110 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kst7x\" (UniqueName: \"kubernetes.io/projected/a7a95dff-118d-4545-b9b9-75a9ad91ab33-kube-api-access-kst7x\") pod \"certified-operators-zx7tx\" (UID: \"a7a95dff-118d-4545-b9b9-75a9ad91ab33\") " pod="openshift-marketplace/certified-operators-zx7tx" Feb 17 16:56:24 crc kubenswrapper[4874]: I0217 16:56:24.191236 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7a95dff-118d-4545-b9b9-75a9ad91ab33-utilities\") pod \"certified-operators-zx7tx\" (UID: \"a7a95dff-118d-4545-b9b9-75a9ad91ab33\") " pod="openshift-marketplace/certified-operators-zx7tx" Feb 17 16:56:24 crc kubenswrapper[4874]: I0217 16:56:24.191305 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7a95dff-118d-4545-b9b9-75a9ad91ab33-catalog-content\") pod \"certified-operators-zx7tx\" (UID: \"a7a95dff-118d-4545-b9b9-75a9ad91ab33\") " pod="openshift-marketplace/certified-operators-zx7tx" Feb 17 16:56:24 crc kubenswrapper[4874]: I0217 16:56:24.191781 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7a95dff-118d-4545-b9b9-75a9ad91ab33-utilities\") pod \"certified-operators-zx7tx\" (UID: \"a7a95dff-118d-4545-b9b9-75a9ad91ab33\") " 
pod="openshift-marketplace/certified-operators-zx7tx" Feb 17 16:56:24 crc kubenswrapper[4874]: I0217 16:56:24.191925 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7a95dff-118d-4545-b9b9-75a9ad91ab33-catalog-content\") pod \"certified-operators-zx7tx\" (UID: \"a7a95dff-118d-4545-b9b9-75a9ad91ab33\") " pod="openshift-marketplace/certified-operators-zx7tx" Feb 17 16:56:24 crc kubenswrapper[4874]: I0217 16:56:24.229247 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kst7x\" (UniqueName: \"kubernetes.io/projected/a7a95dff-118d-4545-b9b9-75a9ad91ab33-kube-api-access-kst7x\") pod \"certified-operators-zx7tx\" (UID: \"a7a95dff-118d-4545-b9b9-75a9ad91ab33\") " pod="openshift-marketplace/certified-operators-zx7tx" Feb 17 16:56:24 crc kubenswrapper[4874]: I0217 16:56:24.392263 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zx7tx" Feb 17 16:56:24 crc kubenswrapper[4874]: I0217 16:56:24.923436 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zx7tx"] Feb 17 16:56:25 crc kubenswrapper[4874]: E0217 16:56:25.458756 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:56:25 crc kubenswrapper[4874]: I0217 16:56:25.753381 4874 generic.go:334] "Generic (PLEG): container finished" podID="a7a95dff-118d-4545-b9b9-75a9ad91ab33" containerID="bfd25ada841547375bc6e20ce2da57945940c458a82f062109103518db52dc2e" exitCode=0 Feb 17 16:56:25 crc kubenswrapper[4874]: I0217 16:56:25.753667 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-zx7tx" event={"ID":"a7a95dff-118d-4545-b9b9-75a9ad91ab33","Type":"ContainerDied","Data":"bfd25ada841547375bc6e20ce2da57945940c458a82f062109103518db52dc2e"} Feb 17 16:56:25 crc kubenswrapper[4874]: I0217 16:56:25.753775 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zx7tx" event={"ID":"a7a95dff-118d-4545-b9b9-75a9ad91ab33","Type":"ContainerStarted","Data":"5704380639b1a95dd7b01ed8eee6735b743ff1fc14e1cd78b41c3a6d0631a7ae"} Feb 17 16:56:26 crc kubenswrapper[4874]: I0217 16:56:26.771840 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zx7tx" event={"ID":"a7a95dff-118d-4545-b9b9-75a9ad91ab33","Type":"ContainerStarted","Data":"b7770955f463147a489dbeed061408c323b5e7f0fe2b6860bef7f2185610290f"} Feb 17 16:56:28 crc kubenswrapper[4874]: I0217 16:56:28.791955 4874 generic.go:334] "Generic (PLEG): container finished" podID="a7a95dff-118d-4545-b9b9-75a9ad91ab33" containerID="b7770955f463147a489dbeed061408c323b5e7f0fe2b6860bef7f2185610290f" exitCode=0 Feb 17 16:56:28 crc kubenswrapper[4874]: I0217 16:56:28.792238 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zx7tx" event={"ID":"a7a95dff-118d-4545-b9b9-75a9ad91ab33","Type":"ContainerDied","Data":"b7770955f463147a489dbeed061408c323b5e7f0fe2b6860bef7f2185610290f"} Feb 17 16:56:29 crc kubenswrapper[4874]: I0217 16:56:29.803188 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zx7tx" event={"ID":"a7a95dff-118d-4545-b9b9-75a9ad91ab33","Type":"ContainerStarted","Data":"f180878ba442667b216ca32fe02d081e9530a705395059cff4b11b3a01750130"} Feb 17 16:56:29 crc kubenswrapper[4874]: I0217 16:56:29.822571 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zx7tx" podStartSLOduration=2.308966319 
podStartE2EDuration="5.822553921s" podCreationTimestamp="2026-02-17 16:56:24 +0000 UTC" firstStartedPulling="2026-02-17 16:56:25.755756515 +0000 UTC m=+3196.050145096" lastFinishedPulling="2026-02-17 16:56:29.269344127 +0000 UTC m=+3199.563732698" observedRunningTime="2026-02-17 16:56:29.81805593 +0000 UTC m=+3200.112444501" watchObservedRunningTime="2026-02-17 16:56:29.822553921 +0000 UTC m=+3200.116942482" Feb 17 16:56:31 crc kubenswrapper[4874]: I0217 16:56:31.457917 4874 scope.go:117] "RemoveContainer" containerID="d5c223a2be3527987f38fd01537a80f90c508c61a8c6980341fe8404012e1d41" Feb 17 16:56:31 crc kubenswrapper[4874]: E0217 16:56:31.458718 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:56:34 crc kubenswrapper[4874]: I0217 16:56:34.392948 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zx7tx" Feb 17 16:56:34 crc kubenswrapper[4874]: I0217 16:56:34.394581 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zx7tx" Feb 17 16:56:34 crc kubenswrapper[4874]: I0217 16:56:34.446750 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zx7tx" Feb 17 16:56:34 crc kubenswrapper[4874]: I0217 16:56:34.917020 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zx7tx" Feb 17 16:56:34 crc kubenswrapper[4874]: I0217 16:56:34.972374 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-zx7tx"] Feb 17 16:56:36 crc kubenswrapper[4874]: E0217 16:56:36.461187 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:56:36 crc kubenswrapper[4874]: E0217 16:56:36.461292 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:56:36 crc kubenswrapper[4874]: I0217 16:56:36.894749 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zx7tx" podUID="a7a95dff-118d-4545-b9b9-75a9ad91ab33" containerName="registry-server" containerID="cri-o://f180878ba442667b216ca32fe02d081e9530a705395059cff4b11b3a01750130" gracePeriod=2 Feb 17 16:56:37 crc kubenswrapper[4874]: I0217 16:56:37.540822 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zx7tx" Feb 17 16:56:37 crc kubenswrapper[4874]: I0217 16:56:37.651274 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7a95dff-118d-4545-b9b9-75a9ad91ab33-utilities\") pod \"a7a95dff-118d-4545-b9b9-75a9ad91ab33\" (UID: \"a7a95dff-118d-4545-b9b9-75a9ad91ab33\") " Feb 17 16:56:37 crc kubenswrapper[4874]: I0217 16:56:37.651437 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kst7x\" (UniqueName: \"kubernetes.io/projected/a7a95dff-118d-4545-b9b9-75a9ad91ab33-kube-api-access-kst7x\") pod \"a7a95dff-118d-4545-b9b9-75a9ad91ab33\" (UID: \"a7a95dff-118d-4545-b9b9-75a9ad91ab33\") " Feb 17 16:56:37 crc kubenswrapper[4874]: I0217 16:56:37.651646 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7a95dff-118d-4545-b9b9-75a9ad91ab33-catalog-content\") pod \"a7a95dff-118d-4545-b9b9-75a9ad91ab33\" (UID: \"a7a95dff-118d-4545-b9b9-75a9ad91ab33\") " Feb 17 16:56:37 crc kubenswrapper[4874]: I0217 16:56:37.661294 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a95dff-118d-4545-b9b9-75a9ad91ab33-kube-api-access-kst7x" (OuterVolumeSpecName: "kube-api-access-kst7x") pod "a7a95dff-118d-4545-b9b9-75a9ad91ab33" (UID: "a7a95dff-118d-4545-b9b9-75a9ad91ab33"). InnerVolumeSpecName "kube-api-access-kst7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:56:37 crc kubenswrapper[4874]: I0217 16:56:37.666779 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a95dff-118d-4545-b9b9-75a9ad91ab33-utilities" (OuterVolumeSpecName: "utilities") pod "a7a95dff-118d-4545-b9b9-75a9ad91ab33" (UID: "a7a95dff-118d-4545-b9b9-75a9ad91ab33"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:56:37 crc kubenswrapper[4874]: I0217 16:56:37.703488 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a95dff-118d-4545-b9b9-75a9ad91ab33-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7a95dff-118d-4545-b9b9-75a9ad91ab33" (UID: "a7a95dff-118d-4545-b9b9-75a9ad91ab33"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:56:37 crc kubenswrapper[4874]: I0217 16:56:37.754755 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kst7x\" (UniqueName: \"kubernetes.io/projected/a7a95dff-118d-4545-b9b9-75a9ad91ab33-kube-api-access-kst7x\") on node \"crc\" DevicePath \"\"" Feb 17 16:56:37 crc kubenswrapper[4874]: I0217 16:56:37.754786 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7a95dff-118d-4545-b9b9-75a9ad91ab33-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:56:37 crc kubenswrapper[4874]: I0217 16:56:37.754796 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7a95dff-118d-4545-b9b9-75a9ad91ab33-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:56:37 crc kubenswrapper[4874]: I0217 16:56:37.912699 4874 generic.go:334] "Generic (PLEG): container finished" podID="a7a95dff-118d-4545-b9b9-75a9ad91ab33" containerID="f180878ba442667b216ca32fe02d081e9530a705395059cff4b11b3a01750130" exitCode=0 Feb 17 16:56:37 crc kubenswrapper[4874]: I0217 16:56:37.912757 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zx7tx" Feb 17 16:56:37 crc kubenswrapper[4874]: I0217 16:56:37.912779 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zx7tx" event={"ID":"a7a95dff-118d-4545-b9b9-75a9ad91ab33","Type":"ContainerDied","Data":"f180878ba442667b216ca32fe02d081e9530a705395059cff4b11b3a01750130"} Feb 17 16:56:37 crc kubenswrapper[4874]: I0217 16:56:37.913341 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zx7tx" event={"ID":"a7a95dff-118d-4545-b9b9-75a9ad91ab33","Type":"ContainerDied","Data":"5704380639b1a95dd7b01ed8eee6735b743ff1fc14e1cd78b41c3a6d0631a7ae"} Feb 17 16:56:37 crc kubenswrapper[4874]: I0217 16:56:37.913361 4874 scope.go:117] "RemoveContainer" containerID="f180878ba442667b216ca32fe02d081e9530a705395059cff4b11b3a01750130" Feb 17 16:56:37 crc kubenswrapper[4874]: I0217 16:56:37.951280 4874 scope.go:117] "RemoveContainer" containerID="b7770955f463147a489dbeed061408c323b5e7f0fe2b6860bef7f2185610290f" Feb 17 16:56:37 crc kubenswrapper[4874]: I0217 16:56:37.961171 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zx7tx"] Feb 17 16:56:37 crc kubenswrapper[4874]: I0217 16:56:37.972040 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zx7tx"] Feb 17 16:56:37 crc kubenswrapper[4874]: I0217 16:56:37.996179 4874 scope.go:117] "RemoveContainer" containerID="bfd25ada841547375bc6e20ce2da57945940c458a82f062109103518db52dc2e" Feb 17 16:56:38 crc kubenswrapper[4874]: I0217 16:56:38.039894 4874 scope.go:117] "RemoveContainer" containerID="f180878ba442667b216ca32fe02d081e9530a705395059cff4b11b3a01750130" Feb 17 16:56:38 crc kubenswrapper[4874]: E0217 16:56:38.040742 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f180878ba442667b216ca32fe02d081e9530a705395059cff4b11b3a01750130\": container with ID starting with f180878ba442667b216ca32fe02d081e9530a705395059cff4b11b3a01750130 not found: ID does not exist" containerID="f180878ba442667b216ca32fe02d081e9530a705395059cff4b11b3a01750130" Feb 17 16:56:38 crc kubenswrapper[4874]: I0217 16:56:38.040882 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f180878ba442667b216ca32fe02d081e9530a705395059cff4b11b3a01750130"} err="failed to get container status \"f180878ba442667b216ca32fe02d081e9530a705395059cff4b11b3a01750130\": rpc error: code = NotFound desc = could not find container \"f180878ba442667b216ca32fe02d081e9530a705395059cff4b11b3a01750130\": container with ID starting with f180878ba442667b216ca32fe02d081e9530a705395059cff4b11b3a01750130 not found: ID does not exist" Feb 17 16:56:38 crc kubenswrapper[4874]: I0217 16:56:38.040914 4874 scope.go:117] "RemoveContainer" containerID="b7770955f463147a489dbeed061408c323b5e7f0fe2b6860bef7f2185610290f" Feb 17 16:56:38 crc kubenswrapper[4874]: E0217 16:56:38.041304 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7770955f463147a489dbeed061408c323b5e7f0fe2b6860bef7f2185610290f\": container with ID starting with b7770955f463147a489dbeed061408c323b5e7f0fe2b6860bef7f2185610290f not found: ID does not exist" containerID="b7770955f463147a489dbeed061408c323b5e7f0fe2b6860bef7f2185610290f" Feb 17 16:56:38 crc kubenswrapper[4874]: I0217 16:56:38.041418 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7770955f463147a489dbeed061408c323b5e7f0fe2b6860bef7f2185610290f"} err="failed to get container status \"b7770955f463147a489dbeed061408c323b5e7f0fe2b6860bef7f2185610290f\": rpc error: code = NotFound desc = could not find container \"b7770955f463147a489dbeed061408c323b5e7f0fe2b6860bef7f2185610290f\": container with ID 
starting with b7770955f463147a489dbeed061408c323b5e7f0fe2b6860bef7f2185610290f not found: ID does not exist" Feb 17 16:56:38 crc kubenswrapper[4874]: I0217 16:56:38.041521 4874 scope.go:117] "RemoveContainer" containerID="bfd25ada841547375bc6e20ce2da57945940c458a82f062109103518db52dc2e" Feb 17 16:56:38 crc kubenswrapper[4874]: E0217 16:56:38.041856 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfd25ada841547375bc6e20ce2da57945940c458a82f062109103518db52dc2e\": container with ID starting with bfd25ada841547375bc6e20ce2da57945940c458a82f062109103518db52dc2e not found: ID does not exist" containerID="bfd25ada841547375bc6e20ce2da57945940c458a82f062109103518db52dc2e" Feb 17 16:56:38 crc kubenswrapper[4874]: I0217 16:56:38.041882 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfd25ada841547375bc6e20ce2da57945940c458a82f062109103518db52dc2e"} err="failed to get container status \"bfd25ada841547375bc6e20ce2da57945940c458a82f062109103518db52dc2e\": rpc error: code = NotFound desc = could not find container \"bfd25ada841547375bc6e20ce2da57945940c458a82f062109103518db52dc2e\": container with ID starting with bfd25ada841547375bc6e20ce2da57945940c458a82f062109103518db52dc2e not found: ID does not exist" Feb 17 16:56:38 crc kubenswrapper[4874]: I0217 16:56:38.471493 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a95dff-118d-4545-b9b9-75a9ad91ab33" path="/var/lib/kubelet/pods/a7a95dff-118d-4545-b9b9-75a9ad91ab33/volumes" Feb 17 16:56:44 crc kubenswrapper[4874]: I0217 16:56:44.457390 4874 scope.go:117] "RemoveContainer" containerID="d5c223a2be3527987f38fd01537a80f90c508c61a8c6980341fe8404012e1d41" Feb 17 16:56:44 crc kubenswrapper[4874]: E0217 16:56:44.458627 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:56:51 crc kubenswrapper[4874]: E0217 16:56:51.458508 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:56:51 crc kubenswrapper[4874]: E0217 16:56:51.460547 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:56:59 crc kubenswrapper[4874]: I0217 16:56:59.458053 4874 scope.go:117] "RemoveContainer" containerID="d5c223a2be3527987f38fd01537a80f90c508c61a8c6980341fe8404012e1d41" Feb 17 16:56:59 crc kubenswrapper[4874]: E0217 16:56:59.459259 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:57:03 crc kubenswrapper[4874]: E0217 16:57:03.460848 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:57:06 crc kubenswrapper[4874]: E0217 16:57:06.461049 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:57:11 crc kubenswrapper[4874]: I0217 16:57:11.457880 4874 scope.go:117] "RemoveContainer" containerID="d5c223a2be3527987f38fd01537a80f90c508c61a8c6980341fe8404012e1d41" Feb 17 16:57:11 crc kubenswrapper[4874]: E0217 16:57:11.459188 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:57:14 crc kubenswrapper[4874]: E0217 16:57:14.460186 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:57:18 crc kubenswrapper[4874]: E0217 16:57:18.460962 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:57:24 crc kubenswrapper[4874]: I0217 16:57:24.458238 4874 scope.go:117] "RemoveContainer" containerID="d5c223a2be3527987f38fd01537a80f90c508c61a8c6980341fe8404012e1d41" Feb 17 16:57:24 crc kubenswrapper[4874]: E0217 16:57:24.459176 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:57:26 crc kubenswrapper[4874]: E0217 16:57:26.462191 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:57:31 crc kubenswrapper[4874]: E0217 16:57:31.460170 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:57:35 crc kubenswrapper[4874]: I0217 16:57:35.456990 4874 scope.go:117] "RemoveContainer" containerID="d5c223a2be3527987f38fd01537a80f90c508c61a8c6980341fe8404012e1d41" Feb 17 16:57:35 crc kubenswrapper[4874]: E0217 16:57:35.458054 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:57:41 crc kubenswrapper[4874]: E0217 16:57:41.461570 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:57:46 crc kubenswrapper[4874]: E0217 16:57:46.461100 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:57:49 crc kubenswrapper[4874]: I0217 16:57:49.458586 4874 scope.go:117] "RemoveContainer" containerID="d5c223a2be3527987f38fd01537a80f90c508c61a8c6980341fe8404012e1d41" Feb 17 16:57:49 crc kubenswrapper[4874]: E0217 16:57:49.459447 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:57:54 crc kubenswrapper[4874]: E0217 16:57:54.463355 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:58:01 crc kubenswrapper[4874]: E0217 16:58:01.461244 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:58:03 crc kubenswrapper[4874]: I0217 16:58:03.458417 4874 scope.go:117] "RemoveContainer" containerID="d5c223a2be3527987f38fd01537a80f90c508c61a8c6980341fe8404012e1d41" Feb 17 16:58:03 crc kubenswrapper[4874]: E0217 16:58:03.459532 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:58:07 crc kubenswrapper[4874]: E0217 16:58:07.459981 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:58:16 crc kubenswrapper[4874]: I0217 16:58:16.458162 4874 scope.go:117] "RemoveContainer" containerID="d5c223a2be3527987f38fd01537a80f90c508c61a8c6980341fe8404012e1d41" Feb 17 16:58:16 crc kubenswrapper[4874]: E0217 16:58:16.459136 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:58:16 crc kubenswrapper[4874]: E0217 16:58:16.460868 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:58:22 crc kubenswrapper[4874]: E0217 16:58:22.459583 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:58:28 crc kubenswrapper[4874]: I0217 16:58:28.458831 4874 scope.go:117] "RemoveContainer" containerID="d5c223a2be3527987f38fd01537a80f90c508c61a8c6980341fe8404012e1d41" Feb 17 16:58:28 crc kubenswrapper[4874]: E0217 16:58:28.459632 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:58:31 crc kubenswrapper[4874]: E0217 16:58:31.460428 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" 
with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:58:34 crc kubenswrapper[4874]: E0217 16:58:34.460404 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:58:42 crc kubenswrapper[4874]: E0217 16:58:42.464781 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:58:43 crc kubenswrapper[4874]: I0217 16:58:43.457463 4874 scope.go:117] "RemoveContainer" containerID="d5c223a2be3527987f38fd01537a80f90c508c61a8c6980341fe8404012e1d41" Feb 17 16:58:43 crc kubenswrapper[4874]: E0217 16:58:43.457870 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:58:47 crc kubenswrapper[4874]: E0217 16:58:47.460182 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:58:54 crc kubenswrapper[4874]: E0217 16:58:54.460472 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:58:58 crc kubenswrapper[4874]: I0217 16:58:58.457451 4874 scope.go:117] "RemoveContainer" containerID="d5c223a2be3527987f38fd01537a80f90c508c61a8c6980341fe8404012e1d41" Feb 17 16:58:58 crc kubenswrapper[4874]: E0217 16:58:58.458632 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:58:59 crc kubenswrapper[4874]: E0217 16:58:59.459800 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:59:05 crc kubenswrapper[4874]: E0217 16:59:05.458984 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:59:10 crc kubenswrapper[4874]: I0217 16:59:10.470938 4874 scope.go:117] "RemoveContainer" containerID="d5c223a2be3527987f38fd01537a80f90c508c61a8c6980341fe8404012e1d41" Feb 17 16:59:10 crc kubenswrapper[4874]: E0217 16:59:10.471870 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:59:14 crc kubenswrapper[4874]: E0217 16:59:14.459558 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:59:19 crc kubenswrapper[4874]: E0217 16:59:19.460039 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:59:25 crc kubenswrapper[4874]: I0217 16:59:25.457479 4874 scope.go:117] "RemoveContainer" containerID="d5c223a2be3527987f38fd01537a80f90c508c61a8c6980341fe8404012e1d41" Feb 17 16:59:25 crc kubenswrapper[4874]: E0217 16:59:25.458275 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:59:28 crc kubenswrapper[4874]: E0217 16:59:28.459948 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:59:31 crc kubenswrapper[4874]: I0217 16:59:31.460231 4874 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:59:31 crc kubenswrapper[4874]: E0217 16:59:31.583330 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:59:31 crc kubenswrapper[4874]: E0217 16:59:31.583430 4874 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:59:31 crc kubenswrapper[4874]: E0217 16:59:31.583621 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n699h646hf7h59dhdch5d8h679h9dhdch5c7hd5h5bch655h5bfh674h596h5d6h64dh65bh694h67fh66dh5bdhd7h568h697h58bh5b4h59fh694h584h656q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6zkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(cc29c300-b515-47d8-9326-1839ed7772b4): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:59:31 crc kubenswrapper[4874]: E0217 16:59:31.584861 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:59:39 crc kubenswrapper[4874]: I0217 16:59:39.457987 4874 scope.go:117] "RemoveContainer" containerID="d5c223a2be3527987f38fd01537a80f90c508c61a8c6980341fe8404012e1d41" Feb 17 16:59:39 crc kubenswrapper[4874]: E0217 16:59:39.458835 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:59:43 crc kubenswrapper[4874]: E0217 16:59:43.579214 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:59:43 crc kubenswrapper[4874]: E0217 16:59:43.580174 4874 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:59:43 crc kubenswrapper[4874]: E0217 16:59:43.580364 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtgnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-ddhb8_openstack(122736d5-78f5-42dc-b6ab-343724bac19d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:59:43 crc kubenswrapper[4874]: E0217 16:59:43.581926 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:59:44 crc kubenswrapper[4874]: E0217 16:59:44.460949 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 16:59:51 crc kubenswrapper[4874]: I0217 16:59:51.457949 4874 scope.go:117] "RemoveContainer" containerID="d5c223a2be3527987f38fd01537a80f90c508c61a8c6980341fe8404012e1d41" Feb 17 16:59:51 crc kubenswrapper[4874]: E0217 16:59:51.458826 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 16:59:56 crc kubenswrapper[4874]: E0217 16:59:56.461255 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 16:59:58 crc kubenswrapper[4874]: E0217 16:59:58.459642 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:00:00 crc kubenswrapper[4874]: I0217 17:00:00.163487 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522460-kz579"] Feb 17 17:00:00 crc kubenswrapper[4874]: E0217 17:00:00.164282 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7a95dff-118d-4545-b9b9-75a9ad91ab33" containerName="extract-content" Feb 17 17:00:00 crc kubenswrapper[4874]: I0217 17:00:00.164297 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7a95dff-118d-4545-b9b9-75a9ad91ab33" containerName="extract-content" Feb 17 17:00:00 crc kubenswrapper[4874]: E0217 17:00:00.164310 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7a95dff-118d-4545-b9b9-75a9ad91ab33" containerName="extract-utilities" Feb 17 17:00:00 crc kubenswrapper[4874]: I0217 17:00:00.164317 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7a95dff-118d-4545-b9b9-75a9ad91ab33" containerName="extract-utilities" Feb 17 17:00:00 crc kubenswrapper[4874]: E0217 17:00:00.164345 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7a95dff-118d-4545-b9b9-75a9ad91ab33" containerName="registry-server" Feb 17 17:00:00 crc kubenswrapper[4874]: I0217 17:00:00.164352 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7a95dff-118d-4545-b9b9-75a9ad91ab33" containerName="registry-server" Feb 17 17:00:00 crc kubenswrapper[4874]: I0217 17:00:00.164617 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7a95dff-118d-4545-b9b9-75a9ad91ab33" containerName="registry-server" Feb 17 17:00:00 crc kubenswrapper[4874]: I0217 17:00:00.165462 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-kz579" Feb 17 17:00:00 crc kubenswrapper[4874]: I0217 17:00:00.168270 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 17:00:00 crc kubenswrapper[4874]: I0217 17:00:00.168370 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 17:00:00 crc kubenswrapper[4874]: I0217 17:00:00.181321 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522460-kz579"] Feb 17 17:00:00 crc kubenswrapper[4874]: I0217 17:00:00.292711 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5f65\" (UniqueName: \"kubernetes.io/projected/9e85f3b9-7596-499a-bef9-1e56369f3599-kube-api-access-t5f65\") pod \"collect-profiles-29522460-kz579\" (UID: \"9e85f3b9-7596-499a-bef9-1e56369f3599\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-kz579" Feb 17 17:00:00 crc kubenswrapper[4874]: I0217 17:00:00.292921 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9e85f3b9-7596-499a-bef9-1e56369f3599-secret-volume\") pod \"collect-profiles-29522460-kz579\" (UID: \"9e85f3b9-7596-499a-bef9-1e56369f3599\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-kz579" Feb 17 17:00:00 crc kubenswrapper[4874]: I0217 17:00:00.293227 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e85f3b9-7596-499a-bef9-1e56369f3599-config-volume\") pod \"collect-profiles-29522460-kz579\" (UID: \"9e85f3b9-7596-499a-bef9-1e56369f3599\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-kz579" Feb 17 17:00:00 crc kubenswrapper[4874]: I0217 17:00:00.396481 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5f65\" (UniqueName: \"kubernetes.io/projected/9e85f3b9-7596-499a-bef9-1e56369f3599-kube-api-access-t5f65\") pod \"collect-profiles-29522460-kz579\" (UID: \"9e85f3b9-7596-499a-bef9-1e56369f3599\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-kz579" Feb 17 17:00:00 crc kubenswrapper[4874]: I0217 17:00:00.396614 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9e85f3b9-7596-499a-bef9-1e56369f3599-secret-volume\") pod \"collect-profiles-29522460-kz579\" (UID: \"9e85f3b9-7596-499a-bef9-1e56369f3599\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-kz579" Feb 17 17:00:00 crc kubenswrapper[4874]: I0217 17:00:00.396721 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e85f3b9-7596-499a-bef9-1e56369f3599-config-volume\") pod \"collect-profiles-29522460-kz579\" (UID: \"9e85f3b9-7596-499a-bef9-1e56369f3599\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-kz579" Feb 17 17:00:00 crc kubenswrapper[4874]: I0217 17:00:00.397694 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e85f3b9-7596-499a-bef9-1e56369f3599-config-volume\") pod \"collect-profiles-29522460-kz579\" (UID: \"9e85f3b9-7596-499a-bef9-1e56369f3599\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-kz579" Feb 17 17:00:00 crc kubenswrapper[4874]: I0217 17:00:00.402866 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/9e85f3b9-7596-499a-bef9-1e56369f3599-secret-volume\") pod \"collect-profiles-29522460-kz579\" (UID: \"9e85f3b9-7596-499a-bef9-1e56369f3599\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-kz579" Feb 17 17:00:00 crc kubenswrapper[4874]: I0217 17:00:00.414937 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5f65\" (UniqueName: \"kubernetes.io/projected/9e85f3b9-7596-499a-bef9-1e56369f3599-kube-api-access-t5f65\") pod \"collect-profiles-29522460-kz579\" (UID: \"9e85f3b9-7596-499a-bef9-1e56369f3599\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-kz579" Feb 17 17:00:00 crc kubenswrapper[4874]: I0217 17:00:00.511636 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-kz579" Feb 17 17:00:01 crc kubenswrapper[4874]: I0217 17:00:01.059216 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522460-kz579"] Feb 17 17:00:01 crc kubenswrapper[4874]: I0217 17:00:01.136043 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-kz579" event={"ID":"9e85f3b9-7596-499a-bef9-1e56369f3599","Type":"ContainerStarted","Data":"fa923b9ddca77072406b2d0ec9c41d3a401a10236409370c1dda7182d26f7518"} Feb 17 17:00:02 crc kubenswrapper[4874]: I0217 17:00:02.149685 4874 generic.go:334] "Generic (PLEG): container finished" podID="9e85f3b9-7596-499a-bef9-1e56369f3599" containerID="dd0e10e59353bdf089bad4a5150a3b271200df46a69231e0518155a26a9c3bc3" exitCode=0 Feb 17 17:00:02 crc kubenswrapper[4874]: I0217 17:00:02.150225 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-kz579" 
event={"ID":"9e85f3b9-7596-499a-bef9-1e56369f3599","Type":"ContainerDied","Data":"dd0e10e59353bdf089bad4a5150a3b271200df46a69231e0518155a26a9c3bc3"} Feb 17 17:00:03 crc kubenswrapper[4874]: I0217 17:00:03.806842 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-kz579" Feb 17 17:00:03 crc kubenswrapper[4874]: I0217 17:00:03.864913 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e85f3b9-7596-499a-bef9-1e56369f3599-config-volume\") pod \"9e85f3b9-7596-499a-bef9-1e56369f3599\" (UID: \"9e85f3b9-7596-499a-bef9-1e56369f3599\") " Feb 17 17:00:03 crc kubenswrapper[4874]: I0217 17:00:03.865836 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e85f3b9-7596-499a-bef9-1e56369f3599-config-volume" (OuterVolumeSpecName: "config-volume") pod "9e85f3b9-7596-499a-bef9-1e56369f3599" (UID: "9e85f3b9-7596-499a-bef9-1e56369f3599"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:00:03 crc kubenswrapper[4874]: I0217 17:00:03.866032 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5f65\" (UniqueName: \"kubernetes.io/projected/9e85f3b9-7596-499a-bef9-1e56369f3599-kube-api-access-t5f65\") pod \"9e85f3b9-7596-499a-bef9-1e56369f3599\" (UID: \"9e85f3b9-7596-499a-bef9-1e56369f3599\") " Feb 17 17:00:03 crc kubenswrapper[4874]: I0217 17:00:03.866194 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9e85f3b9-7596-499a-bef9-1e56369f3599-secret-volume\") pod \"9e85f3b9-7596-499a-bef9-1e56369f3599\" (UID: \"9e85f3b9-7596-499a-bef9-1e56369f3599\") " Feb 17 17:00:03 crc kubenswrapper[4874]: I0217 17:00:03.866783 4874 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e85f3b9-7596-499a-bef9-1e56369f3599-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 17:00:03 crc kubenswrapper[4874]: I0217 17:00:03.872911 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e85f3b9-7596-499a-bef9-1e56369f3599-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9e85f3b9-7596-499a-bef9-1e56369f3599" (UID: "9e85f3b9-7596-499a-bef9-1e56369f3599"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:00:03 crc kubenswrapper[4874]: I0217 17:00:03.872987 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e85f3b9-7596-499a-bef9-1e56369f3599-kube-api-access-t5f65" (OuterVolumeSpecName: "kube-api-access-t5f65") pod "9e85f3b9-7596-499a-bef9-1e56369f3599" (UID: "9e85f3b9-7596-499a-bef9-1e56369f3599"). InnerVolumeSpecName "kube-api-access-t5f65". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:00:03 crc kubenswrapper[4874]: I0217 17:00:03.969972 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5f65\" (UniqueName: \"kubernetes.io/projected/9e85f3b9-7596-499a-bef9-1e56369f3599-kube-api-access-t5f65\") on node \"crc\" DevicePath \"\"" Feb 17 17:00:03 crc kubenswrapper[4874]: I0217 17:00:03.970007 4874 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9e85f3b9-7596-499a-bef9-1e56369f3599-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 17:00:04 crc kubenswrapper[4874]: I0217 17:00:04.390533 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-kz579" event={"ID":"9e85f3b9-7596-499a-bef9-1e56369f3599","Type":"ContainerDied","Data":"fa923b9ddca77072406b2d0ec9c41d3a401a10236409370c1dda7182d26f7518"} Feb 17 17:00:04 crc kubenswrapper[4874]: I0217 17:00:04.390853 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa923b9ddca77072406b2d0ec9c41d3a401a10236409370c1dda7182d26f7518" Feb 17 17:00:04 crc kubenswrapper[4874]: I0217 17:00:04.390916 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-kz579" Feb 17 17:00:04 crc kubenswrapper[4874]: I0217 17:00:04.887564 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522415-79m2d"] Feb 17 17:00:04 crc kubenswrapper[4874]: I0217 17:00:04.901095 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522415-79m2d"] Feb 17 17:00:06 crc kubenswrapper[4874]: I0217 17:00:06.458623 4874 scope.go:117] "RemoveContainer" containerID="d5c223a2be3527987f38fd01537a80f90c508c61a8c6980341fe8404012e1d41" Feb 17 17:00:06 crc kubenswrapper[4874]: I0217 17:00:06.482015 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd54bcd1-35a2-4582-adab-a0926f977ae8" path="/var/lib/kubelet/pods/bd54bcd1-35a2-4582-adab-a0926f977ae8/volumes" Feb 17 17:00:07 crc kubenswrapper[4874]: I0217 17:00:07.426331 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerStarted","Data":"4382ec70f731fe685b847e4b318346d4edba77d46bbd94f1ed3eb84b2e0ff786"} Feb 17 17:00:07 crc kubenswrapper[4874]: E0217 17:00:07.458499 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:00:09 crc kubenswrapper[4874]: I0217 17:00:09.874516 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7r9gg"] Feb 17 17:00:09 crc kubenswrapper[4874]: E0217 17:00:09.875808 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e85f3b9-7596-499a-bef9-1e56369f3599" 
containerName="collect-profiles" Feb 17 17:00:09 crc kubenswrapper[4874]: I0217 17:00:09.875827 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e85f3b9-7596-499a-bef9-1e56369f3599" containerName="collect-profiles" Feb 17 17:00:09 crc kubenswrapper[4874]: I0217 17:00:09.876493 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e85f3b9-7596-499a-bef9-1e56369f3599" containerName="collect-profiles" Feb 17 17:00:09 crc kubenswrapper[4874]: I0217 17:00:09.879259 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7r9gg" Feb 17 17:00:09 crc kubenswrapper[4874]: I0217 17:00:09.892767 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7r9gg"] Feb 17 17:00:09 crc kubenswrapper[4874]: I0217 17:00:09.912126 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/067b1563-08e1-4302-85e5-de5b52d71661-utilities\") pod \"redhat-operators-7r9gg\" (UID: \"067b1563-08e1-4302-85e5-de5b52d71661\") " pod="openshift-marketplace/redhat-operators-7r9gg" Feb 17 17:00:09 crc kubenswrapper[4874]: I0217 17:00:09.912209 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/067b1563-08e1-4302-85e5-de5b52d71661-catalog-content\") pod \"redhat-operators-7r9gg\" (UID: \"067b1563-08e1-4302-85e5-de5b52d71661\") " pod="openshift-marketplace/redhat-operators-7r9gg" Feb 17 17:00:09 crc kubenswrapper[4874]: I0217 17:00:09.912272 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbd4n\" (UniqueName: \"kubernetes.io/projected/067b1563-08e1-4302-85e5-de5b52d71661-kube-api-access-jbd4n\") pod \"redhat-operators-7r9gg\" (UID: \"067b1563-08e1-4302-85e5-de5b52d71661\") " 
pod="openshift-marketplace/redhat-operators-7r9gg" Feb 17 17:00:10 crc kubenswrapper[4874]: I0217 17:00:10.014167 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/067b1563-08e1-4302-85e5-de5b52d71661-utilities\") pod \"redhat-operators-7r9gg\" (UID: \"067b1563-08e1-4302-85e5-de5b52d71661\") " pod="openshift-marketplace/redhat-operators-7r9gg" Feb 17 17:00:10 crc kubenswrapper[4874]: I0217 17:00:10.014257 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/067b1563-08e1-4302-85e5-de5b52d71661-catalog-content\") pod \"redhat-operators-7r9gg\" (UID: \"067b1563-08e1-4302-85e5-de5b52d71661\") " pod="openshift-marketplace/redhat-operators-7r9gg" Feb 17 17:00:10 crc kubenswrapper[4874]: I0217 17:00:10.014295 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbd4n\" (UniqueName: \"kubernetes.io/projected/067b1563-08e1-4302-85e5-de5b52d71661-kube-api-access-jbd4n\") pod \"redhat-operators-7r9gg\" (UID: \"067b1563-08e1-4302-85e5-de5b52d71661\") " pod="openshift-marketplace/redhat-operators-7r9gg" Feb 17 17:00:10 crc kubenswrapper[4874]: I0217 17:00:10.015186 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/067b1563-08e1-4302-85e5-de5b52d71661-utilities\") pod \"redhat-operators-7r9gg\" (UID: \"067b1563-08e1-4302-85e5-de5b52d71661\") " pod="openshift-marketplace/redhat-operators-7r9gg" Feb 17 17:00:10 crc kubenswrapper[4874]: I0217 17:00:10.015400 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/067b1563-08e1-4302-85e5-de5b52d71661-catalog-content\") pod \"redhat-operators-7r9gg\" (UID: \"067b1563-08e1-4302-85e5-de5b52d71661\") " pod="openshift-marketplace/redhat-operators-7r9gg" Feb 17 17:00:10 crc 
kubenswrapper[4874]: I0217 17:00:10.040862 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbd4n\" (UniqueName: \"kubernetes.io/projected/067b1563-08e1-4302-85e5-de5b52d71661-kube-api-access-jbd4n\") pod \"redhat-operators-7r9gg\" (UID: \"067b1563-08e1-4302-85e5-de5b52d71661\") " pod="openshift-marketplace/redhat-operators-7r9gg" Feb 17 17:00:10 crc kubenswrapper[4874]: I0217 17:00:10.207718 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7r9gg" Feb 17 17:00:10 crc kubenswrapper[4874]: W0217 17:00:10.886522 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod067b1563_08e1_4302_85e5_de5b52d71661.slice/crio-0b12f6bdb3411743c71426b4da01698ef20155b768dd76bfed78c794f45b06f8 WatchSource:0}: Error finding container 0b12f6bdb3411743c71426b4da01698ef20155b768dd76bfed78c794f45b06f8: Status 404 returned error can't find the container with id 0b12f6bdb3411743c71426b4da01698ef20155b768dd76bfed78c794f45b06f8 Feb 17 17:00:10 crc kubenswrapper[4874]: I0217 17:00:10.892726 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7r9gg"] Feb 17 17:00:11 crc kubenswrapper[4874]: I0217 17:00:11.511332 4874 generic.go:334] "Generic (PLEG): container finished" podID="067b1563-08e1-4302-85e5-de5b52d71661" containerID="4fa76167757bc02638d9ca88c764c75026ad6166261f930245b41aa5278e7734" exitCode=0 Feb 17 17:00:11 crc kubenswrapper[4874]: I0217 17:00:11.511385 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7r9gg" event={"ID":"067b1563-08e1-4302-85e5-de5b52d71661","Type":"ContainerDied","Data":"4fa76167757bc02638d9ca88c764c75026ad6166261f930245b41aa5278e7734"} Feb 17 17:00:11 crc kubenswrapper[4874]: I0217 17:00:11.511617 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-7r9gg" event={"ID":"067b1563-08e1-4302-85e5-de5b52d71661","Type":"ContainerStarted","Data":"0b12f6bdb3411743c71426b4da01698ef20155b768dd76bfed78c794f45b06f8"} Feb 17 17:00:12 crc kubenswrapper[4874]: E0217 17:00:12.461348 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:00:12 crc kubenswrapper[4874]: I0217 17:00:12.522177 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7r9gg" event={"ID":"067b1563-08e1-4302-85e5-de5b52d71661","Type":"ContainerStarted","Data":"24a20889e74ad1228707f924583680516adb01b8271f77d0d678387e4088c1f4"} Feb 17 17:00:19 crc kubenswrapper[4874]: I0217 17:00:19.592618 4874 generic.go:334] "Generic (PLEG): container finished" podID="067b1563-08e1-4302-85e5-de5b52d71661" containerID="24a20889e74ad1228707f924583680516adb01b8271f77d0d678387e4088c1f4" exitCode=0 Feb 17 17:00:19 crc kubenswrapper[4874]: I0217 17:00:19.592787 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7r9gg" event={"ID":"067b1563-08e1-4302-85e5-de5b52d71661","Type":"ContainerDied","Data":"24a20889e74ad1228707f924583680516adb01b8271f77d0d678387e4088c1f4"} Feb 17 17:00:20 crc kubenswrapper[4874]: I0217 17:00:20.604057 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7r9gg" event={"ID":"067b1563-08e1-4302-85e5-de5b52d71661","Type":"ContainerStarted","Data":"52119f2f9f9555037f8e17e9ed7b271817a3312eb0aa16e11445902138ed8fb0"} Feb 17 17:00:20 crc kubenswrapper[4874]: I0217 17:00:20.624860 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-7r9gg" podStartSLOduration=3.132563689 podStartE2EDuration="11.624839646s" podCreationTimestamp="2026-02-17 17:00:09 +0000 UTC" firstStartedPulling="2026-02-17 17:00:11.513674227 +0000 UTC m=+3421.808062788" lastFinishedPulling="2026-02-17 17:00:20.005950184 +0000 UTC m=+3430.300338745" observedRunningTime="2026-02-17 17:00:20.623495073 +0000 UTC m=+3430.917883634" watchObservedRunningTime="2026-02-17 17:00:20.624839646 +0000 UTC m=+3430.919228207" Feb 17 17:00:22 crc kubenswrapper[4874]: E0217 17:00:22.460790 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:00:24 crc kubenswrapper[4874]: E0217 17:00:24.461965 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:00:28 crc kubenswrapper[4874]: I0217 17:00:28.689573 4874 generic.go:334] "Generic (PLEG): container finished" podID="4fc34eca-3b52-4650-9c09-3c17befa87d5" containerID="58c8e708626f34b76fdd8e51890f7cbd12b4ddf809fcb7c6c83b5a7ae57840f8" exitCode=2 Feb 17 17:00:28 crc kubenswrapper[4874]: I0217 17:00:28.691189 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4hs65" event={"ID":"4fc34eca-3b52-4650-9c09-3c17befa87d5","Type":"ContainerDied","Data":"58c8e708626f34b76fdd8e51890f7cbd12b4ddf809fcb7c6c83b5a7ae57840f8"} Feb 17 17:00:30 crc kubenswrapper[4874]: I0217 17:00:30.182423 4874 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4hs65" Feb 17 17:00:30 crc kubenswrapper[4874]: I0217 17:00:30.208966 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7r9gg" Feb 17 17:00:30 crc kubenswrapper[4874]: I0217 17:00:30.209024 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7r9gg" Feb 17 17:00:30 crc kubenswrapper[4874]: I0217 17:00:30.218040 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4fc34eca-3b52-4650-9c09-3c17befa87d5-ssh-key-openstack-edpm-ipam\") pod \"4fc34eca-3b52-4650-9c09-3c17befa87d5\" (UID: \"4fc34eca-3b52-4650-9c09-3c17befa87d5\") " Feb 17 17:00:30 crc kubenswrapper[4874]: I0217 17:00:30.218245 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fc34eca-3b52-4650-9c09-3c17befa87d5-inventory\") pod \"4fc34eca-3b52-4650-9c09-3c17befa87d5\" (UID: \"4fc34eca-3b52-4650-9c09-3c17befa87d5\") " Feb 17 17:00:30 crc kubenswrapper[4874]: I0217 17:00:30.218307 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nd6j\" (UniqueName: \"kubernetes.io/projected/4fc34eca-3b52-4650-9c09-3c17befa87d5-kube-api-access-2nd6j\") pod \"4fc34eca-3b52-4650-9c09-3c17befa87d5\" (UID: \"4fc34eca-3b52-4650-9c09-3c17befa87d5\") " Feb 17 17:00:30 crc kubenswrapper[4874]: I0217 17:00:30.233139 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fc34eca-3b52-4650-9c09-3c17befa87d5-kube-api-access-2nd6j" (OuterVolumeSpecName: "kube-api-access-2nd6j") pod "4fc34eca-3b52-4650-9c09-3c17befa87d5" (UID: "4fc34eca-3b52-4650-9c09-3c17befa87d5"). InnerVolumeSpecName "kube-api-access-2nd6j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:00:30 crc kubenswrapper[4874]: I0217 17:00:30.249403 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fc34eca-3b52-4650-9c09-3c17befa87d5-inventory" (OuterVolumeSpecName: "inventory") pod "4fc34eca-3b52-4650-9c09-3c17befa87d5" (UID: "4fc34eca-3b52-4650-9c09-3c17befa87d5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:00:30 crc kubenswrapper[4874]: I0217 17:00:30.259672 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fc34eca-3b52-4650-9c09-3c17befa87d5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4fc34eca-3b52-4650-9c09-3c17befa87d5" (UID: "4fc34eca-3b52-4650-9c09-3c17befa87d5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:00:30 crc kubenswrapper[4874]: I0217 17:00:30.321191 4874 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4fc34eca-3b52-4650-9c09-3c17befa87d5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 17:00:30 crc kubenswrapper[4874]: I0217 17:00:30.321411 4874 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fc34eca-3b52-4650-9c09-3c17befa87d5-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 17:00:30 crc kubenswrapper[4874]: I0217 17:00:30.321496 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nd6j\" (UniqueName: \"kubernetes.io/projected/4fc34eca-3b52-4650-9c09-3c17befa87d5-kube-api-access-2nd6j\") on node \"crc\" DevicePath \"\"" Feb 17 17:00:30 crc kubenswrapper[4874]: I0217 17:00:30.713972 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4hs65" 
event={"ID":"4fc34eca-3b52-4650-9c09-3c17befa87d5","Type":"ContainerDied","Data":"18d799a11554c8430d9483fcc116c56b832fe0bcb4938eef5bbb001321633643"} Feb 17 17:00:30 crc kubenswrapper[4874]: I0217 17:00:30.714016 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18d799a11554c8430d9483fcc116c56b832fe0bcb4938eef5bbb001321633643" Feb 17 17:00:30 crc kubenswrapper[4874]: I0217 17:00:30.714384 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4hs65" Feb 17 17:00:31 crc kubenswrapper[4874]: I0217 17:00:31.269485 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7r9gg" podUID="067b1563-08e1-4302-85e5-de5b52d71661" containerName="registry-server" probeResult="failure" output=< Feb 17 17:00:31 crc kubenswrapper[4874]: timeout: failed to connect service ":50051" within 1s Feb 17 17:00:31 crc kubenswrapper[4874]: > Feb 17 17:00:34 crc kubenswrapper[4874]: E0217 17:00:34.459527 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:00:39 crc kubenswrapper[4874]: E0217 17:00:39.461680 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:00:41 crc kubenswrapper[4874]: I0217 17:00:41.257532 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7r9gg" 
podUID="067b1563-08e1-4302-85e5-de5b52d71661" containerName="registry-server" probeResult="failure" output=< Feb 17 17:00:41 crc kubenswrapper[4874]: timeout: failed to connect service ":50051" within 1s Feb 17 17:00:41 crc kubenswrapper[4874]: > Feb 17 17:00:46 crc kubenswrapper[4874]: E0217 17:00:46.460956 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:00:50 crc kubenswrapper[4874]: I0217 17:00:50.735738 4874 scope.go:117] "RemoveContainer" containerID="6db37ebfe0bb9479cbd22bf667fa588d596c12ab1b724c719dde6332a3c41f74" Feb 17 17:00:51 crc kubenswrapper[4874]: I0217 17:00:51.260768 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7r9gg" podUID="067b1563-08e1-4302-85e5-de5b52d71661" containerName="registry-server" probeResult="failure" output=< Feb 17 17:00:51 crc kubenswrapper[4874]: timeout: failed to connect service ":50051" within 1s Feb 17 17:00:51 crc kubenswrapper[4874]: > Feb 17 17:00:54 crc kubenswrapper[4874]: E0217 17:00:54.460882 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:01:00 crc kubenswrapper[4874]: I0217 17:01:00.189597 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29522461-cnc94"] Feb 17 17:01:00 crc kubenswrapper[4874]: E0217 17:01:00.190723 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fc34eca-3b52-4650-9c09-3c17befa87d5" 
containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:01:00 crc kubenswrapper[4874]: I0217 17:01:00.190741 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fc34eca-3b52-4650-9c09-3c17befa87d5" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:01:00 crc kubenswrapper[4874]: I0217 17:01:00.191054 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fc34eca-3b52-4650-9c09-3c17befa87d5" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:01:00 crc kubenswrapper[4874]: I0217 17:01:00.192039 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29522461-cnc94" Feb 17 17:01:00 crc kubenswrapper[4874]: I0217 17:01:00.218056 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29522461-cnc94"] Feb 17 17:01:00 crc kubenswrapper[4874]: I0217 17:01:00.323244 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75xsx\" (UniqueName: \"kubernetes.io/projected/61ff92e4-19df-453b-a07f-d3d953b6bacd-kube-api-access-75xsx\") pod \"keystone-cron-29522461-cnc94\" (UID: \"61ff92e4-19df-453b-a07f-d3d953b6bacd\") " pod="openstack/keystone-cron-29522461-cnc94" Feb 17 17:01:00 crc kubenswrapper[4874]: I0217 17:01:00.323638 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61ff92e4-19df-453b-a07f-d3d953b6bacd-config-data\") pod \"keystone-cron-29522461-cnc94\" (UID: \"61ff92e4-19df-453b-a07f-d3d953b6bacd\") " pod="openstack/keystone-cron-29522461-cnc94" Feb 17 17:01:00 crc kubenswrapper[4874]: I0217 17:01:00.323746 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61ff92e4-19df-453b-a07f-d3d953b6bacd-combined-ca-bundle\") pod 
\"keystone-cron-29522461-cnc94\" (UID: \"61ff92e4-19df-453b-a07f-d3d953b6bacd\") " pod="openstack/keystone-cron-29522461-cnc94" Feb 17 17:01:00 crc kubenswrapper[4874]: I0217 17:01:00.324277 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/61ff92e4-19df-453b-a07f-d3d953b6bacd-fernet-keys\") pod \"keystone-cron-29522461-cnc94\" (UID: \"61ff92e4-19df-453b-a07f-d3d953b6bacd\") " pod="openstack/keystone-cron-29522461-cnc94" Feb 17 17:01:00 crc kubenswrapper[4874]: I0217 17:01:00.426423 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/61ff92e4-19df-453b-a07f-d3d953b6bacd-fernet-keys\") pod \"keystone-cron-29522461-cnc94\" (UID: \"61ff92e4-19df-453b-a07f-d3d953b6bacd\") " pod="openstack/keystone-cron-29522461-cnc94" Feb 17 17:01:00 crc kubenswrapper[4874]: I0217 17:01:00.426528 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75xsx\" (UniqueName: \"kubernetes.io/projected/61ff92e4-19df-453b-a07f-d3d953b6bacd-kube-api-access-75xsx\") pod \"keystone-cron-29522461-cnc94\" (UID: \"61ff92e4-19df-453b-a07f-d3d953b6bacd\") " pod="openstack/keystone-cron-29522461-cnc94" Feb 17 17:01:00 crc kubenswrapper[4874]: I0217 17:01:00.426593 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61ff92e4-19df-453b-a07f-d3d953b6bacd-config-data\") pod \"keystone-cron-29522461-cnc94\" (UID: \"61ff92e4-19df-453b-a07f-d3d953b6bacd\") " pod="openstack/keystone-cron-29522461-cnc94" Feb 17 17:01:00 crc kubenswrapper[4874]: I0217 17:01:00.426687 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61ff92e4-19df-453b-a07f-d3d953b6bacd-combined-ca-bundle\") pod \"keystone-cron-29522461-cnc94\" (UID: 
\"61ff92e4-19df-453b-a07f-d3d953b6bacd\") " pod="openstack/keystone-cron-29522461-cnc94" Feb 17 17:01:00 crc kubenswrapper[4874]: I0217 17:01:00.432672 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61ff92e4-19df-453b-a07f-d3d953b6bacd-combined-ca-bundle\") pod \"keystone-cron-29522461-cnc94\" (UID: \"61ff92e4-19df-453b-a07f-d3d953b6bacd\") " pod="openstack/keystone-cron-29522461-cnc94" Feb 17 17:01:00 crc kubenswrapper[4874]: I0217 17:01:00.433127 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/61ff92e4-19df-453b-a07f-d3d953b6bacd-fernet-keys\") pod \"keystone-cron-29522461-cnc94\" (UID: \"61ff92e4-19df-453b-a07f-d3d953b6bacd\") " pod="openstack/keystone-cron-29522461-cnc94" Feb 17 17:01:00 crc kubenswrapper[4874]: I0217 17:01:00.437954 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61ff92e4-19df-453b-a07f-d3d953b6bacd-config-data\") pod \"keystone-cron-29522461-cnc94\" (UID: \"61ff92e4-19df-453b-a07f-d3d953b6bacd\") " pod="openstack/keystone-cron-29522461-cnc94" Feb 17 17:01:00 crc kubenswrapper[4874]: I0217 17:01:00.442449 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75xsx\" (UniqueName: \"kubernetes.io/projected/61ff92e4-19df-453b-a07f-d3d953b6bacd-kube-api-access-75xsx\") pod \"keystone-cron-29522461-cnc94\" (UID: \"61ff92e4-19df-453b-a07f-d3d953b6bacd\") " pod="openstack/keystone-cron-29522461-cnc94" Feb 17 17:01:00 crc kubenswrapper[4874]: I0217 17:01:00.513069 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29522461-cnc94" Feb 17 17:01:01 crc kubenswrapper[4874]: I0217 17:01:01.217440 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29522461-cnc94"] Feb 17 17:01:01 crc kubenswrapper[4874]: I0217 17:01:01.269241 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7r9gg" podUID="067b1563-08e1-4302-85e5-de5b52d71661" containerName="registry-server" probeResult="failure" output=< Feb 17 17:01:01 crc kubenswrapper[4874]: timeout: failed to connect service ":50051" within 1s Feb 17 17:01:01 crc kubenswrapper[4874]: > Feb 17 17:01:01 crc kubenswrapper[4874]: E0217 17:01:01.461480 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:01:02 crc kubenswrapper[4874]: I0217 17:01:02.217892 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522461-cnc94" event={"ID":"61ff92e4-19df-453b-a07f-d3d953b6bacd","Type":"ContainerStarted","Data":"bf027bb48f331a30b81c38050a4bb7880e153662faf4b0271a096f2980f22960"} Feb 17 17:01:02 crc kubenswrapper[4874]: I0217 17:01:02.218260 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522461-cnc94" event={"ID":"61ff92e4-19df-453b-a07f-d3d953b6bacd","Type":"ContainerStarted","Data":"d736e3fba8c01ec9dfc5df213f8bad4213902d95c12f5ffddd78b1f2c3af8d99"} Feb 17 17:01:02 crc kubenswrapper[4874]: I0217 17:01:02.239705 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29522461-cnc94" podStartSLOduration=2.239685701 podStartE2EDuration="2.239685701s" podCreationTimestamp="2026-02-17 17:01:00 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:01:02.234177374 +0000 UTC m=+3472.528565945" watchObservedRunningTime="2026-02-17 17:01:02.239685701 +0000 UTC m=+3472.534074262" Feb 17 17:01:05 crc kubenswrapper[4874]: E0217 17:01:05.460129 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:01:07 crc kubenswrapper[4874]: I0217 17:01:07.274317 4874 generic.go:334] "Generic (PLEG): container finished" podID="61ff92e4-19df-453b-a07f-d3d953b6bacd" containerID="bf027bb48f331a30b81c38050a4bb7880e153662faf4b0271a096f2980f22960" exitCode=0 Feb 17 17:01:07 crc kubenswrapper[4874]: I0217 17:01:07.274388 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522461-cnc94" event={"ID":"61ff92e4-19df-453b-a07f-d3d953b6bacd","Type":"ContainerDied","Data":"bf027bb48f331a30b81c38050a4bb7880e153662faf4b0271a096f2980f22960"} Feb 17 17:01:09 crc kubenswrapper[4874]: I0217 17:01:09.011333 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29522461-cnc94" Feb 17 17:01:09 crc kubenswrapper[4874]: I0217 17:01:09.110647 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75xsx\" (UniqueName: \"kubernetes.io/projected/61ff92e4-19df-453b-a07f-d3d953b6bacd-kube-api-access-75xsx\") pod \"61ff92e4-19df-453b-a07f-d3d953b6bacd\" (UID: \"61ff92e4-19df-453b-a07f-d3d953b6bacd\") " Feb 17 17:01:09 crc kubenswrapper[4874]: I0217 17:01:09.110952 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61ff92e4-19df-453b-a07f-d3d953b6bacd-config-data\") pod \"61ff92e4-19df-453b-a07f-d3d953b6bacd\" (UID: \"61ff92e4-19df-453b-a07f-d3d953b6bacd\") " Feb 17 17:01:09 crc kubenswrapper[4874]: I0217 17:01:09.110991 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61ff92e4-19df-453b-a07f-d3d953b6bacd-combined-ca-bundle\") pod \"61ff92e4-19df-453b-a07f-d3d953b6bacd\" (UID: \"61ff92e4-19df-453b-a07f-d3d953b6bacd\") " Feb 17 17:01:09 crc kubenswrapper[4874]: I0217 17:01:09.111270 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/61ff92e4-19df-453b-a07f-d3d953b6bacd-fernet-keys\") pod \"61ff92e4-19df-453b-a07f-d3d953b6bacd\" (UID: \"61ff92e4-19df-453b-a07f-d3d953b6bacd\") " Feb 17 17:01:09 crc kubenswrapper[4874]: I0217 17:01:09.119899 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61ff92e4-19df-453b-a07f-d3d953b6bacd-kube-api-access-75xsx" (OuterVolumeSpecName: "kube-api-access-75xsx") pod "61ff92e4-19df-453b-a07f-d3d953b6bacd" (UID: "61ff92e4-19df-453b-a07f-d3d953b6bacd"). InnerVolumeSpecName "kube-api-access-75xsx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:01:09 crc kubenswrapper[4874]: I0217 17:01:09.120216 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61ff92e4-19df-453b-a07f-d3d953b6bacd-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "61ff92e4-19df-453b-a07f-d3d953b6bacd" (UID: "61ff92e4-19df-453b-a07f-d3d953b6bacd"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:01:09 crc kubenswrapper[4874]: I0217 17:01:09.171452 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61ff92e4-19df-453b-a07f-d3d953b6bacd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61ff92e4-19df-453b-a07f-d3d953b6bacd" (UID: "61ff92e4-19df-453b-a07f-d3d953b6bacd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:01:09 crc kubenswrapper[4874]: I0217 17:01:09.189880 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61ff92e4-19df-453b-a07f-d3d953b6bacd-config-data" (OuterVolumeSpecName: "config-data") pod "61ff92e4-19df-453b-a07f-d3d953b6bacd" (UID: "61ff92e4-19df-453b-a07f-d3d953b6bacd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:01:09 crc kubenswrapper[4874]: I0217 17:01:09.215037 4874 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61ff92e4-19df-453b-a07f-d3d953b6bacd-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 17:01:09 crc kubenswrapper[4874]: I0217 17:01:09.215096 4874 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61ff92e4-19df-453b-a07f-d3d953b6bacd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 17:01:09 crc kubenswrapper[4874]: I0217 17:01:09.215113 4874 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/61ff92e4-19df-453b-a07f-d3d953b6bacd-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 17 17:01:09 crc kubenswrapper[4874]: I0217 17:01:09.215125 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75xsx\" (UniqueName: \"kubernetes.io/projected/61ff92e4-19df-453b-a07f-d3d953b6bacd-kube-api-access-75xsx\") on node \"crc\" DevicePath \"\"" Feb 17 17:01:09 crc kubenswrapper[4874]: I0217 17:01:09.299362 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522461-cnc94" event={"ID":"61ff92e4-19df-453b-a07f-d3d953b6bacd","Type":"ContainerDied","Data":"d736e3fba8c01ec9dfc5df213f8bad4213902d95c12f5ffddd78b1f2c3af8d99"} Feb 17 17:01:09 crc kubenswrapper[4874]: I0217 17:01:09.299404 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d736e3fba8c01ec9dfc5df213f8bad4213902d95c12f5ffddd78b1f2c3af8d99" Feb 17 17:01:09 crc kubenswrapper[4874]: I0217 17:01:09.299485 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29522461-cnc94" Feb 17 17:01:10 crc kubenswrapper[4874]: I0217 17:01:10.279493 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7r9gg" Feb 17 17:01:10 crc kubenswrapper[4874]: I0217 17:01:10.334791 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7r9gg" Feb 17 17:01:11 crc kubenswrapper[4874]: I0217 17:01:11.093407 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7r9gg"] Feb 17 17:01:11 crc kubenswrapper[4874]: I0217 17:01:11.322591 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7r9gg" podUID="067b1563-08e1-4302-85e5-de5b52d71661" containerName="registry-server" containerID="cri-o://52119f2f9f9555037f8e17e9ed7b271817a3312eb0aa16e11445902138ed8fb0" gracePeriod=2 Feb 17 17:01:11 crc kubenswrapper[4874]: I0217 17:01:11.853366 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7r9gg" Feb 17 17:01:12 crc kubenswrapper[4874]: I0217 17:01:12.025093 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/067b1563-08e1-4302-85e5-de5b52d71661-utilities\") pod \"067b1563-08e1-4302-85e5-de5b52d71661\" (UID: \"067b1563-08e1-4302-85e5-de5b52d71661\") " Feb 17 17:01:12 crc kubenswrapper[4874]: I0217 17:01:12.025354 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbd4n\" (UniqueName: \"kubernetes.io/projected/067b1563-08e1-4302-85e5-de5b52d71661-kube-api-access-jbd4n\") pod \"067b1563-08e1-4302-85e5-de5b52d71661\" (UID: \"067b1563-08e1-4302-85e5-de5b52d71661\") " Feb 17 17:01:12 crc kubenswrapper[4874]: I0217 17:01:12.025475 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/067b1563-08e1-4302-85e5-de5b52d71661-catalog-content\") pod \"067b1563-08e1-4302-85e5-de5b52d71661\" (UID: \"067b1563-08e1-4302-85e5-de5b52d71661\") " Feb 17 17:01:12 crc kubenswrapper[4874]: I0217 17:01:12.027185 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/067b1563-08e1-4302-85e5-de5b52d71661-utilities" (OuterVolumeSpecName: "utilities") pod "067b1563-08e1-4302-85e5-de5b52d71661" (UID: "067b1563-08e1-4302-85e5-de5b52d71661"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:01:12 crc kubenswrapper[4874]: I0217 17:01:12.045130 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/067b1563-08e1-4302-85e5-de5b52d71661-kube-api-access-jbd4n" (OuterVolumeSpecName: "kube-api-access-jbd4n") pod "067b1563-08e1-4302-85e5-de5b52d71661" (UID: "067b1563-08e1-4302-85e5-de5b52d71661"). InnerVolumeSpecName "kube-api-access-jbd4n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:01:12 crc kubenswrapper[4874]: I0217 17:01:12.128549 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/067b1563-08e1-4302-85e5-de5b52d71661-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:01:12 crc kubenswrapper[4874]: I0217 17:01:12.128585 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbd4n\" (UniqueName: \"kubernetes.io/projected/067b1563-08e1-4302-85e5-de5b52d71661-kube-api-access-jbd4n\") on node \"crc\" DevicePath \"\"" Feb 17 17:01:12 crc kubenswrapper[4874]: I0217 17:01:12.269108 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/067b1563-08e1-4302-85e5-de5b52d71661-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "067b1563-08e1-4302-85e5-de5b52d71661" (UID: "067b1563-08e1-4302-85e5-de5b52d71661"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:01:12 crc kubenswrapper[4874]: I0217 17:01:12.334829 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/067b1563-08e1-4302-85e5-de5b52d71661-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:01:12 crc kubenswrapper[4874]: I0217 17:01:12.338893 4874 generic.go:334] "Generic (PLEG): container finished" podID="067b1563-08e1-4302-85e5-de5b52d71661" containerID="52119f2f9f9555037f8e17e9ed7b271817a3312eb0aa16e11445902138ed8fb0" exitCode=0 Feb 17 17:01:12 crc kubenswrapper[4874]: I0217 17:01:12.338930 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7r9gg" event={"ID":"067b1563-08e1-4302-85e5-de5b52d71661","Type":"ContainerDied","Data":"52119f2f9f9555037f8e17e9ed7b271817a3312eb0aa16e11445902138ed8fb0"} Feb 17 17:01:12 crc kubenswrapper[4874]: I0217 17:01:12.339471 4874 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-7r9gg" event={"ID":"067b1563-08e1-4302-85e5-de5b52d71661","Type":"ContainerDied","Data":"0b12f6bdb3411743c71426b4da01698ef20155b768dd76bfed78c794f45b06f8"} Feb 17 17:01:12 crc kubenswrapper[4874]: I0217 17:01:12.338988 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7r9gg" Feb 17 17:01:12 crc kubenswrapper[4874]: I0217 17:01:12.339500 4874 scope.go:117] "RemoveContainer" containerID="52119f2f9f9555037f8e17e9ed7b271817a3312eb0aa16e11445902138ed8fb0" Feb 17 17:01:12 crc kubenswrapper[4874]: I0217 17:01:12.368222 4874 scope.go:117] "RemoveContainer" containerID="24a20889e74ad1228707f924583680516adb01b8271f77d0d678387e4088c1f4" Feb 17 17:01:12 crc kubenswrapper[4874]: I0217 17:01:12.384063 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7r9gg"] Feb 17 17:01:12 crc kubenswrapper[4874]: I0217 17:01:12.392849 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7r9gg"] Feb 17 17:01:12 crc kubenswrapper[4874]: I0217 17:01:12.410923 4874 scope.go:117] "RemoveContainer" containerID="4fa76167757bc02638d9ca88c764c75026ad6166261f930245b41aa5278e7734" Feb 17 17:01:12 crc kubenswrapper[4874]: I0217 17:01:12.454263 4874 scope.go:117] "RemoveContainer" containerID="52119f2f9f9555037f8e17e9ed7b271817a3312eb0aa16e11445902138ed8fb0" Feb 17 17:01:12 crc kubenswrapper[4874]: E0217 17:01:12.454848 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52119f2f9f9555037f8e17e9ed7b271817a3312eb0aa16e11445902138ed8fb0\": container with ID starting with 52119f2f9f9555037f8e17e9ed7b271817a3312eb0aa16e11445902138ed8fb0 not found: ID does not exist" containerID="52119f2f9f9555037f8e17e9ed7b271817a3312eb0aa16e11445902138ed8fb0" Feb 17 17:01:12 crc kubenswrapper[4874]: I0217 17:01:12.454900 4874 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52119f2f9f9555037f8e17e9ed7b271817a3312eb0aa16e11445902138ed8fb0"} err="failed to get container status \"52119f2f9f9555037f8e17e9ed7b271817a3312eb0aa16e11445902138ed8fb0\": rpc error: code = NotFound desc = could not find container \"52119f2f9f9555037f8e17e9ed7b271817a3312eb0aa16e11445902138ed8fb0\": container with ID starting with 52119f2f9f9555037f8e17e9ed7b271817a3312eb0aa16e11445902138ed8fb0 not found: ID does not exist" Feb 17 17:01:12 crc kubenswrapper[4874]: I0217 17:01:12.454954 4874 scope.go:117] "RemoveContainer" containerID="24a20889e74ad1228707f924583680516adb01b8271f77d0d678387e4088c1f4" Feb 17 17:01:12 crc kubenswrapper[4874]: E0217 17:01:12.455207 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24a20889e74ad1228707f924583680516adb01b8271f77d0d678387e4088c1f4\": container with ID starting with 24a20889e74ad1228707f924583680516adb01b8271f77d0d678387e4088c1f4 not found: ID does not exist" containerID="24a20889e74ad1228707f924583680516adb01b8271f77d0d678387e4088c1f4" Feb 17 17:01:12 crc kubenswrapper[4874]: I0217 17:01:12.455256 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24a20889e74ad1228707f924583680516adb01b8271f77d0d678387e4088c1f4"} err="failed to get container status \"24a20889e74ad1228707f924583680516adb01b8271f77d0d678387e4088c1f4\": rpc error: code = NotFound desc = could not find container \"24a20889e74ad1228707f924583680516adb01b8271f77d0d678387e4088c1f4\": container with ID starting with 24a20889e74ad1228707f924583680516adb01b8271f77d0d678387e4088c1f4 not found: ID does not exist" Feb 17 17:01:12 crc kubenswrapper[4874]: I0217 17:01:12.455271 4874 scope.go:117] "RemoveContainer" containerID="4fa76167757bc02638d9ca88c764c75026ad6166261f930245b41aa5278e7734" Feb 17 17:01:12 crc kubenswrapper[4874]: E0217 
17:01:12.455675 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fa76167757bc02638d9ca88c764c75026ad6166261f930245b41aa5278e7734\": container with ID starting with 4fa76167757bc02638d9ca88c764c75026ad6166261f930245b41aa5278e7734 not found: ID does not exist" containerID="4fa76167757bc02638d9ca88c764c75026ad6166261f930245b41aa5278e7734" Feb 17 17:01:12 crc kubenswrapper[4874]: I0217 17:01:12.455725 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fa76167757bc02638d9ca88c764c75026ad6166261f930245b41aa5278e7734"} err="failed to get container status \"4fa76167757bc02638d9ca88c764c75026ad6166261f930245b41aa5278e7734\": rpc error: code = NotFound desc = could not find container \"4fa76167757bc02638d9ca88c764c75026ad6166261f930245b41aa5278e7734\": container with ID starting with 4fa76167757bc02638d9ca88c764c75026ad6166261f930245b41aa5278e7734 not found: ID does not exist" Feb 17 17:01:12 crc kubenswrapper[4874]: E0217 17:01:12.464869 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:01:12 crc kubenswrapper[4874]: I0217 17:01:12.481713 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="067b1563-08e1-4302-85e5-de5b52d71661" path="/var/lib/kubelet/pods/067b1563-08e1-4302-85e5-de5b52d71661/volumes" Feb 17 17:01:19 crc kubenswrapper[4874]: E0217 17:01:19.461400 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:01:23 crc kubenswrapper[4874]: E0217 17:01:23.459214 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:01:33 crc kubenswrapper[4874]: E0217 17:01:33.460870 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:01:34 crc kubenswrapper[4874]: E0217 17:01:34.460786 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:01:44 crc kubenswrapper[4874]: E0217 17:01:44.460210 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:01:45 crc kubenswrapper[4874]: E0217 17:01:45.459795 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:01:48 crc kubenswrapper[4874]: I0217 17:01:48.046529 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xr64p"] Feb 17 17:01:48 crc kubenswrapper[4874]: E0217 17:01:48.047446 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="067b1563-08e1-4302-85e5-de5b52d71661" containerName="registry-server" Feb 17 17:01:48 crc kubenswrapper[4874]: I0217 17:01:48.047464 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="067b1563-08e1-4302-85e5-de5b52d71661" containerName="registry-server" Feb 17 17:01:48 crc kubenswrapper[4874]: E0217 17:01:48.047638 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="067b1563-08e1-4302-85e5-de5b52d71661" containerName="extract-content" Feb 17 17:01:48 crc kubenswrapper[4874]: I0217 17:01:48.047647 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="067b1563-08e1-4302-85e5-de5b52d71661" containerName="extract-content" Feb 17 17:01:48 crc kubenswrapper[4874]: E0217 17:01:48.047664 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="067b1563-08e1-4302-85e5-de5b52d71661" containerName="extract-utilities" Feb 17 17:01:48 crc kubenswrapper[4874]: I0217 17:01:48.047673 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="067b1563-08e1-4302-85e5-de5b52d71661" containerName="extract-utilities" Feb 17 17:01:48 crc kubenswrapper[4874]: E0217 17:01:48.047684 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61ff92e4-19df-453b-a07f-d3d953b6bacd" containerName="keystone-cron" Feb 17 17:01:48 crc kubenswrapper[4874]: I0217 17:01:48.047691 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="61ff92e4-19df-453b-a07f-d3d953b6bacd" containerName="keystone-cron" Feb 17 17:01:48 crc kubenswrapper[4874]: I0217 17:01:48.047957 4874 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="067b1563-08e1-4302-85e5-de5b52d71661" containerName="registry-server" Feb 17 17:01:48 crc kubenswrapper[4874]: I0217 17:01:48.047977 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="61ff92e4-19df-453b-a07f-d3d953b6bacd" containerName="keystone-cron" Feb 17 17:01:48 crc kubenswrapper[4874]: I0217 17:01:48.048952 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xr64p" Feb 17 17:01:48 crc kubenswrapper[4874]: I0217 17:01:48.055591 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 17:01:48 crc kubenswrapper[4874]: I0217 17:01:48.058696 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 17:01:48 crc kubenswrapper[4874]: I0217 17:01:48.059193 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 17:01:48 crc kubenswrapper[4874]: I0217 17:01:48.060711 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dsqj4" Feb 17 17:01:48 crc kubenswrapper[4874]: I0217 17:01:48.087868 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkvch\" (UniqueName: \"kubernetes.io/projected/d7c983ae-0062-4104-b0b7-ee35f90aa93d-kube-api-access-nkvch\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xr64p\" (UID: \"d7c983ae-0062-4104-b0b7-ee35f90aa93d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xr64p" Feb 17 17:01:48 crc kubenswrapper[4874]: I0217 17:01:48.088044 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7c983ae-0062-4104-b0b7-ee35f90aa93d-inventory\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-xr64p\" (UID: \"d7c983ae-0062-4104-b0b7-ee35f90aa93d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xr64p" Feb 17 17:01:48 crc kubenswrapper[4874]: I0217 17:01:48.088155 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d7c983ae-0062-4104-b0b7-ee35f90aa93d-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xr64p\" (UID: \"d7c983ae-0062-4104-b0b7-ee35f90aa93d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xr64p" Feb 17 17:01:48 crc kubenswrapper[4874]: I0217 17:01:48.117096 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xr64p"] Feb 17 17:01:48 crc kubenswrapper[4874]: I0217 17:01:48.190520 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkvch\" (UniqueName: \"kubernetes.io/projected/d7c983ae-0062-4104-b0b7-ee35f90aa93d-kube-api-access-nkvch\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xr64p\" (UID: \"d7c983ae-0062-4104-b0b7-ee35f90aa93d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xr64p" Feb 17 17:01:48 crc kubenswrapper[4874]: I0217 17:01:48.190601 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7c983ae-0062-4104-b0b7-ee35f90aa93d-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xr64p\" (UID: \"d7c983ae-0062-4104-b0b7-ee35f90aa93d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xr64p" Feb 17 17:01:48 crc kubenswrapper[4874]: I0217 17:01:48.190629 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/d7c983ae-0062-4104-b0b7-ee35f90aa93d-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xr64p\" (UID: \"d7c983ae-0062-4104-b0b7-ee35f90aa93d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xr64p" Feb 17 17:01:48 crc kubenswrapper[4874]: I0217 17:01:48.199304 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d7c983ae-0062-4104-b0b7-ee35f90aa93d-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xr64p\" (UID: \"d7c983ae-0062-4104-b0b7-ee35f90aa93d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xr64p" Feb 17 17:01:48 crc kubenswrapper[4874]: I0217 17:01:48.202921 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7c983ae-0062-4104-b0b7-ee35f90aa93d-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xr64p\" (UID: \"d7c983ae-0062-4104-b0b7-ee35f90aa93d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xr64p" Feb 17 17:01:48 crc kubenswrapper[4874]: I0217 17:01:48.211419 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkvch\" (UniqueName: \"kubernetes.io/projected/d7c983ae-0062-4104-b0b7-ee35f90aa93d-kube-api-access-nkvch\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xr64p\" (UID: \"d7c983ae-0062-4104-b0b7-ee35f90aa93d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xr64p" Feb 17 17:01:48 crc kubenswrapper[4874]: I0217 17:01:48.422776 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xr64p" Feb 17 17:01:49 crc kubenswrapper[4874]: I0217 17:01:49.008236 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xr64p"] Feb 17 17:01:49 crc kubenswrapper[4874]: I0217 17:01:49.768405 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xr64p" event={"ID":"d7c983ae-0062-4104-b0b7-ee35f90aa93d","Type":"ContainerStarted","Data":"9c1b372ed59761df69b6aa8cbf7874cbb7e7e1888157197797d5efb50a3c1f7e"} Feb 17 17:01:49 crc kubenswrapper[4874]: I0217 17:01:49.768695 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xr64p" event={"ID":"d7c983ae-0062-4104-b0b7-ee35f90aa93d","Type":"ContainerStarted","Data":"9074e7a84e448946cd7d2e573a259311c5b5c71d5be45de625784495b94085e7"} Feb 17 17:01:49 crc kubenswrapper[4874]: I0217 17:01:49.794844 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xr64p" podStartSLOduration=1.284889221 podStartE2EDuration="1.794825044s" podCreationTimestamp="2026-02-17 17:01:48 +0000 UTC" firstStartedPulling="2026-02-17 17:01:49.0168333 +0000 UTC m=+3519.311221861" lastFinishedPulling="2026-02-17 17:01:49.526769123 +0000 UTC m=+3519.821157684" observedRunningTime="2026-02-17 17:01:49.78417054 +0000 UTC m=+3520.078559111" watchObservedRunningTime="2026-02-17 17:01:49.794825044 +0000 UTC m=+3520.089213605" Feb 17 17:01:57 crc kubenswrapper[4874]: E0217 17:01:57.460422 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" 
podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:01:58 crc kubenswrapper[4874]: E0217 17:01:58.459825 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:02:08 crc kubenswrapper[4874]: E0217 17:02:08.459674 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:02:13 crc kubenswrapper[4874]: E0217 17:02:13.458575 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:02:19 crc kubenswrapper[4874]: E0217 17:02:19.459665 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:02:26 crc kubenswrapper[4874]: E0217 17:02:26.462016 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:02:27 crc kubenswrapper[4874]: I0217 17:02:27.724740 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:02:27 crc kubenswrapper[4874]: I0217 17:02:27.725108 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:02:34 crc kubenswrapper[4874]: E0217 17:02:34.460242 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:02:39 crc kubenswrapper[4874]: E0217 17:02:39.459799 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:02:45 crc kubenswrapper[4874]: E0217 17:02:45.459383 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" 
podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:02:50 crc kubenswrapper[4874]: E0217 17:02:50.469189 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:02:56 crc kubenswrapper[4874]: E0217 17:02:56.462953 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:02:57 crc kubenswrapper[4874]: I0217 17:02:57.724906 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:02:57 crc kubenswrapper[4874]: I0217 17:02:57.725578 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:03:04 crc kubenswrapper[4874]: E0217 17:03:04.467236 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:03:10 crc kubenswrapper[4874]: E0217 17:03:10.469838 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:03:19 crc kubenswrapper[4874]: E0217 17:03:19.460564 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:03:25 crc kubenswrapper[4874]: E0217 17:03:25.460524 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:03:27 crc kubenswrapper[4874]: I0217 17:03:27.725299 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:03:27 crc kubenswrapper[4874]: I0217 17:03:27.725950 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 
17 17:03:27 crc kubenswrapper[4874]: I0217 17:03:27.726011 4874 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" Feb 17 17:03:27 crc kubenswrapper[4874]: I0217 17:03:27.727400 4874 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4382ec70f731fe685b847e4b318346d4edba77d46bbd94f1ed3eb84b2e0ff786"} pod="openshift-machine-config-operator/machine-config-daemon-cccdg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 17:03:27 crc kubenswrapper[4874]: I0217 17:03:27.727595 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" containerID="cri-o://4382ec70f731fe685b847e4b318346d4edba77d46bbd94f1ed3eb84b2e0ff786" gracePeriod=600 Feb 17 17:03:28 crc kubenswrapper[4874]: I0217 17:03:28.837315 4874 generic.go:334] "Generic (PLEG): container finished" podID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerID="4382ec70f731fe685b847e4b318346d4edba77d46bbd94f1ed3eb84b2e0ff786" exitCode=0 Feb 17 17:03:28 crc kubenswrapper[4874]: I0217 17:03:28.837390 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerDied","Data":"4382ec70f731fe685b847e4b318346d4edba77d46bbd94f1ed3eb84b2e0ff786"} Feb 17 17:03:28 crc kubenswrapper[4874]: I0217 17:03:28.837992 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerStarted","Data":"af26f937ef4337232056dc6a25d3e35399d36e90ff25e903f5e98c2da9cfdecc"} Feb 17 17:03:28 crc kubenswrapper[4874]: 
I0217 17:03:28.838026 4874 scope.go:117] "RemoveContainer" containerID="d5c223a2be3527987f38fd01537a80f90c508c61a8c6980341fe8404012e1d41" Feb 17 17:03:30 crc kubenswrapper[4874]: E0217 17:03:30.473566 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:03:38 crc kubenswrapper[4874]: E0217 17:03:38.461997 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:03:45 crc kubenswrapper[4874]: E0217 17:03:45.460287 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:03:53 crc kubenswrapper[4874]: E0217 17:03:53.459229 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:03:56 crc kubenswrapper[4874]: E0217 17:03:56.459611 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:04:07 crc kubenswrapper[4874]: E0217 17:04:07.460595 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:04:07 crc kubenswrapper[4874]: E0217 17:04:07.461222 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:04:18 crc kubenswrapper[4874]: E0217 17:04:18.462194 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:04:19 crc kubenswrapper[4874]: E0217 17:04:19.460792 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:04:30 crc kubenswrapper[4874]: E0217 17:04:30.480653 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:04:30 crc kubenswrapper[4874]: E0217 17:04:30.481549 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:04:41 crc kubenswrapper[4874]: I0217 17:04:41.459618 4874 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 17:04:41 crc kubenswrapper[4874]: E0217 17:04:41.588787 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:04:41 crc kubenswrapper[4874]: E0217 17:04:41.588858 4874 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:04:41 crc kubenswrapper[4874]: E0217 17:04:41.589007 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n699h646hf7h59dhdch5d8h679h9dhdch5c7hd5h5bch655h5bfh674h596h5d6h64dh65bh694h67fh66dh5bdhd7h568h697h58bh5b4h59fh694h584h656q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6zkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(cc29c300-b515-47d8-9326-1839ed7772b4): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:04:41 crc kubenswrapper[4874]: E0217 17:04:41.590852 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:04:42 crc kubenswrapper[4874]: E0217 17:04:42.460424 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:04:47 crc kubenswrapper[4874]: I0217 17:04:47.537858 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z5pxr"] Feb 17 17:04:47 crc kubenswrapper[4874]: I0217 17:04:47.541863 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z5pxr" Feb 17 17:04:47 crc kubenswrapper[4874]: I0217 17:04:47.553697 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z5pxr"] Feb 17 17:04:47 crc kubenswrapper[4874]: I0217 17:04:47.604667 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1639d8e3-16a4-4309-b5bc-004ed43db30d-utilities\") pod \"community-operators-z5pxr\" (UID: \"1639d8e3-16a4-4309-b5bc-004ed43db30d\") " pod="openshift-marketplace/community-operators-z5pxr" Feb 17 17:04:47 crc kubenswrapper[4874]: I0217 17:04:47.604911 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzcj8\" (UniqueName: \"kubernetes.io/projected/1639d8e3-16a4-4309-b5bc-004ed43db30d-kube-api-access-hzcj8\") pod \"community-operators-z5pxr\" (UID: \"1639d8e3-16a4-4309-b5bc-004ed43db30d\") " pod="openshift-marketplace/community-operators-z5pxr" Feb 17 17:04:47 crc kubenswrapper[4874]: I0217 17:04:47.605254 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1639d8e3-16a4-4309-b5bc-004ed43db30d-catalog-content\") pod \"community-operators-z5pxr\" (UID: \"1639d8e3-16a4-4309-b5bc-004ed43db30d\") " pod="openshift-marketplace/community-operators-z5pxr" Feb 17 17:04:47 crc kubenswrapper[4874]: I0217 17:04:47.706997 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1639d8e3-16a4-4309-b5bc-004ed43db30d-catalog-content\") pod \"community-operators-z5pxr\" (UID: \"1639d8e3-16a4-4309-b5bc-004ed43db30d\") " pod="openshift-marketplace/community-operators-z5pxr" Feb 17 17:04:47 crc kubenswrapper[4874]: I0217 17:04:47.707385 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1639d8e3-16a4-4309-b5bc-004ed43db30d-utilities\") pod \"community-operators-z5pxr\" (UID: \"1639d8e3-16a4-4309-b5bc-004ed43db30d\") " pod="openshift-marketplace/community-operators-z5pxr" Feb 17 17:04:47 crc kubenswrapper[4874]: I0217 17:04:47.707485 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzcj8\" (UniqueName: \"kubernetes.io/projected/1639d8e3-16a4-4309-b5bc-004ed43db30d-kube-api-access-hzcj8\") pod \"community-operators-z5pxr\" (UID: \"1639d8e3-16a4-4309-b5bc-004ed43db30d\") " pod="openshift-marketplace/community-operators-z5pxr" Feb 17 17:04:47 crc kubenswrapper[4874]: I0217 17:04:47.707553 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1639d8e3-16a4-4309-b5bc-004ed43db30d-catalog-content\") pod \"community-operators-z5pxr\" (UID: \"1639d8e3-16a4-4309-b5bc-004ed43db30d\") " pod="openshift-marketplace/community-operators-z5pxr" Feb 17 17:04:47 crc kubenswrapper[4874]: I0217 17:04:47.707840 4874 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1639d8e3-16a4-4309-b5bc-004ed43db30d-utilities\") pod \"community-operators-z5pxr\" (UID: \"1639d8e3-16a4-4309-b5bc-004ed43db30d\") " pod="openshift-marketplace/community-operators-z5pxr" Feb 17 17:04:47 crc kubenswrapper[4874]: I0217 17:04:47.734805 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzcj8\" (UniqueName: \"kubernetes.io/projected/1639d8e3-16a4-4309-b5bc-004ed43db30d-kube-api-access-hzcj8\") pod \"community-operators-z5pxr\" (UID: \"1639d8e3-16a4-4309-b5bc-004ed43db30d\") " pod="openshift-marketplace/community-operators-z5pxr" Feb 17 17:04:47 crc kubenswrapper[4874]: I0217 17:04:47.866178 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z5pxr" Feb 17 17:04:48 crc kubenswrapper[4874]: I0217 17:04:48.641108 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z5pxr"] Feb 17 17:04:48 crc kubenswrapper[4874]: I0217 17:04:48.718862 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5pxr" event={"ID":"1639d8e3-16a4-4309-b5bc-004ed43db30d","Type":"ContainerStarted","Data":"086d32a1a5bf948d91c10d52f11f2f1a894b8df455c215c499654328dac65cda"} Feb 17 17:04:49 crc kubenswrapper[4874]: I0217 17:04:49.731130 4874 generic.go:334] "Generic (PLEG): container finished" podID="1639d8e3-16a4-4309-b5bc-004ed43db30d" containerID="b4a9aded3d0cc7009ad26bfc9e5a2c8a5b9982a163319fbac3956fdf7c2e1009" exitCode=0 Feb 17 17:04:49 crc kubenswrapper[4874]: I0217 17:04:49.731179 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5pxr" event={"ID":"1639d8e3-16a4-4309-b5bc-004ed43db30d","Type":"ContainerDied","Data":"b4a9aded3d0cc7009ad26bfc9e5a2c8a5b9982a163319fbac3956fdf7c2e1009"} Feb 17 17:04:50 crc kubenswrapper[4874]: I0217 
17:04:50.743964 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5pxr" event={"ID":"1639d8e3-16a4-4309-b5bc-004ed43db30d","Type":"ContainerStarted","Data":"00e46c392aac16bd3680ed99f8f96551b6c21850ce9073cf18bf586a8609e6c5"} Feb 17 17:04:52 crc kubenswrapper[4874]: I0217 17:04:52.775773 4874 generic.go:334] "Generic (PLEG): container finished" podID="1639d8e3-16a4-4309-b5bc-004ed43db30d" containerID="00e46c392aac16bd3680ed99f8f96551b6c21850ce9073cf18bf586a8609e6c5" exitCode=0 Feb 17 17:04:52 crc kubenswrapper[4874]: I0217 17:04:52.775846 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5pxr" event={"ID":"1639d8e3-16a4-4309-b5bc-004ed43db30d","Type":"ContainerDied","Data":"00e46c392aac16bd3680ed99f8f96551b6c21850ce9073cf18bf586a8609e6c5"} Feb 17 17:04:53 crc kubenswrapper[4874]: E0217 17:04:53.589664 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 17:04:53 crc kubenswrapper[4874]: E0217 17:04:53.589909 4874 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 17:04:53 crc kubenswrapper[4874]: E0217 17:04:53.590017 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtgnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-ddhb8_openstack(122736d5-78f5-42dc-b6ab-343724bac19d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:04:53 crc kubenswrapper[4874]: E0217 17:04:53.591182 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:04:53 crc kubenswrapper[4874]: I0217 17:04:53.787999 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5pxr" event={"ID":"1639d8e3-16a4-4309-b5bc-004ed43db30d","Type":"ContainerStarted","Data":"95872daf2989fc2f904296f7d76f80ea9d81496f034f2facfa2360248139f087"} Feb 17 17:04:53 crc kubenswrapper[4874]: I0217 17:04:53.816246 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z5pxr" podStartSLOduration=3.408199302 podStartE2EDuration="6.816227586s" podCreationTimestamp="2026-02-17 17:04:47 +0000 UTC" firstStartedPulling="2026-02-17 17:04:49.733088306 +0000 UTC m=+3700.027476857" lastFinishedPulling="2026-02-17 17:04:53.14111658 +0000 UTC m=+3703.435505141" observedRunningTime="2026-02-17 17:04:53.808833052 +0000 UTC m=+3704.103221603" watchObservedRunningTime="2026-02-17 17:04:53.816227586 +0000 UTC m=+3704.110616137" Feb 17 17:04:54 crc kubenswrapper[4874]: E0217 17:04:54.460177 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:04:57 crc kubenswrapper[4874]: I0217 17:04:57.866781 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z5pxr" Feb 17 17:04:57 crc kubenswrapper[4874]: I0217 17:04:57.867381 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z5pxr" Feb 17 17:04:57 crc kubenswrapper[4874]: I0217 17:04:57.912256 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-z5pxr" Feb 17 17:04:58 crc kubenswrapper[4874]: I0217 17:04:58.884130 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z5pxr" Feb 17 17:04:58 crc kubenswrapper[4874]: I0217 17:04:58.944740 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z5pxr"] Feb 17 17:05:00 crc kubenswrapper[4874]: I0217 17:05:00.851763 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z5pxr" podUID="1639d8e3-16a4-4309-b5bc-004ed43db30d" containerName="registry-server" containerID="cri-o://95872daf2989fc2f904296f7d76f80ea9d81496f034f2facfa2360248139f087" gracePeriod=2 Feb 17 17:05:01 crc kubenswrapper[4874]: I0217 17:05:01.330239 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z5pxr" Feb 17 17:05:01 crc kubenswrapper[4874]: I0217 17:05:01.427521 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzcj8\" (UniqueName: \"kubernetes.io/projected/1639d8e3-16a4-4309-b5bc-004ed43db30d-kube-api-access-hzcj8\") pod \"1639d8e3-16a4-4309-b5bc-004ed43db30d\" (UID: \"1639d8e3-16a4-4309-b5bc-004ed43db30d\") " Feb 17 17:05:01 crc kubenswrapper[4874]: I0217 17:05:01.427604 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1639d8e3-16a4-4309-b5bc-004ed43db30d-catalog-content\") pod \"1639d8e3-16a4-4309-b5bc-004ed43db30d\" (UID: \"1639d8e3-16a4-4309-b5bc-004ed43db30d\") " Feb 17 17:05:01 crc kubenswrapper[4874]: I0217 17:05:01.427649 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1639d8e3-16a4-4309-b5bc-004ed43db30d-utilities\") pod 
\"1639d8e3-16a4-4309-b5bc-004ed43db30d\" (UID: \"1639d8e3-16a4-4309-b5bc-004ed43db30d\") " Feb 17 17:05:01 crc kubenswrapper[4874]: I0217 17:05:01.428696 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1639d8e3-16a4-4309-b5bc-004ed43db30d-utilities" (OuterVolumeSpecName: "utilities") pod "1639d8e3-16a4-4309-b5bc-004ed43db30d" (UID: "1639d8e3-16a4-4309-b5bc-004ed43db30d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:05:01 crc kubenswrapper[4874]: I0217 17:05:01.433203 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1639d8e3-16a4-4309-b5bc-004ed43db30d-kube-api-access-hzcj8" (OuterVolumeSpecName: "kube-api-access-hzcj8") pod "1639d8e3-16a4-4309-b5bc-004ed43db30d" (UID: "1639d8e3-16a4-4309-b5bc-004ed43db30d"). InnerVolumeSpecName "kube-api-access-hzcj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:05:01 crc kubenswrapper[4874]: I0217 17:05:01.531316 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzcj8\" (UniqueName: \"kubernetes.io/projected/1639d8e3-16a4-4309-b5bc-004ed43db30d-kube-api-access-hzcj8\") on node \"crc\" DevicePath \"\"" Feb 17 17:05:01 crc kubenswrapper[4874]: I0217 17:05:01.531362 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1639d8e3-16a4-4309-b5bc-004ed43db30d-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:05:01 crc kubenswrapper[4874]: I0217 17:05:01.765399 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1639d8e3-16a4-4309-b5bc-004ed43db30d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1639d8e3-16a4-4309-b5bc-004ed43db30d" (UID: "1639d8e3-16a4-4309-b5bc-004ed43db30d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:05:01 crc kubenswrapper[4874]: I0217 17:05:01.838770 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1639d8e3-16a4-4309-b5bc-004ed43db30d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:05:01 crc kubenswrapper[4874]: I0217 17:05:01.869795 4874 generic.go:334] "Generic (PLEG): container finished" podID="1639d8e3-16a4-4309-b5bc-004ed43db30d" containerID="95872daf2989fc2f904296f7d76f80ea9d81496f034f2facfa2360248139f087" exitCode=0 Feb 17 17:05:01 crc kubenswrapper[4874]: I0217 17:05:01.869853 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5pxr" event={"ID":"1639d8e3-16a4-4309-b5bc-004ed43db30d","Type":"ContainerDied","Data":"95872daf2989fc2f904296f7d76f80ea9d81496f034f2facfa2360248139f087"} Feb 17 17:05:01 crc kubenswrapper[4874]: I0217 17:05:01.869889 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5pxr" event={"ID":"1639d8e3-16a4-4309-b5bc-004ed43db30d","Type":"ContainerDied","Data":"086d32a1a5bf948d91c10d52f11f2f1a894b8df455c215c499654328dac65cda"} Feb 17 17:05:01 crc kubenswrapper[4874]: I0217 17:05:01.869911 4874 scope.go:117] "RemoveContainer" containerID="95872daf2989fc2f904296f7d76f80ea9d81496f034f2facfa2360248139f087" Feb 17 17:05:01 crc kubenswrapper[4874]: I0217 17:05:01.870199 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z5pxr" Feb 17 17:05:01 crc kubenswrapper[4874]: I0217 17:05:01.901038 4874 scope.go:117] "RemoveContainer" containerID="00e46c392aac16bd3680ed99f8f96551b6c21850ce9073cf18bf586a8609e6c5" Feb 17 17:05:01 crc kubenswrapper[4874]: I0217 17:05:01.921694 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z5pxr"] Feb 17 17:05:01 crc kubenswrapper[4874]: I0217 17:05:01.933110 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z5pxr"] Feb 17 17:05:01 crc kubenswrapper[4874]: I0217 17:05:01.950827 4874 scope.go:117] "RemoveContainer" containerID="b4a9aded3d0cc7009ad26bfc9e5a2c8a5b9982a163319fbac3956fdf7c2e1009" Feb 17 17:05:01 crc kubenswrapper[4874]: I0217 17:05:01.996780 4874 scope.go:117] "RemoveContainer" containerID="95872daf2989fc2f904296f7d76f80ea9d81496f034f2facfa2360248139f087" Feb 17 17:05:01 crc kubenswrapper[4874]: E0217 17:05:01.997447 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95872daf2989fc2f904296f7d76f80ea9d81496f034f2facfa2360248139f087\": container with ID starting with 95872daf2989fc2f904296f7d76f80ea9d81496f034f2facfa2360248139f087 not found: ID does not exist" containerID="95872daf2989fc2f904296f7d76f80ea9d81496f034f2facfa2360248139f087" Feb 17 17:05:01 crc kubenswrapper[4874]: I0217 17:05:01.997487 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95872daf2989fc2f904296f7d76f80ea9d81496f034f2facfa2360248139f087"} err="failed to get container status \"95872daf2989fc2f904296f7d76f80ea9d81496f034f2facfa2360248139f087\": rpc error: code = NotFound desc = could not find container \"95872daf2989fc2f904296f7d76f80ea9d81496f034f2facfa2360248139f087\": container with ID starting with 95872daf2989fc2f904296f7d76f80ea9d81496f034f2facfa2360248139f087 not 
found: ID does not exist" Feb 17 17:05:01 crc kubenswrapper[4874]: I0217 17:05:01.997510 4874 scope.go:117] "RemoveContainer" containerID="00e46c392aac16bd3680ed99f8f96551b6c21850ce9073cf18bf586a8609e6c5" Feb 17 17:05:01 crc kubenswrapper[4874]: E0217 17:05:01.997808 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00e46c392aac16bd3680ed99f8f96551b6c21850ce9073cf18bf586a8609e6c5\": container with ID starting with 00e46c392aac16bd3680ed99f8f96551b6c21850ce9073cf18bf586a8609e6c5 not found: ID does not exist" containerID="00e46c392aac16bd3680ed99f8f96551b6c21850ce9073cf18bf586a8609e6c5" Feb 17 17:05:01 crc kubenswrapper[4874]: I0217 17:05:01.997837 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00e46c392aac16bd3680ed99f8f96551b6c21850ce9073cf18bf586a8609e6c5"} err="failed to get container status \"00e46c392aac16bd3680ed99f8f96551b6c21850ce9073cf18bf586a8609e6c5\": rpc error: code = NotFound desc = could not find container \"00e46c392aac16bd3680ed99f8f96551b6c21850ce9073cf18bf586a8609e6c5\": container with ID starting with 00e46c392aac16bd3680ed99f8f96551b6c21850ce9073cf18bf586a8609e6c5 not found: ID does not exist" Feb 17 17:05:01 crc kubenswrapper[4874]: I0217 17:05:01.997854 4874 scope.go:117] "RemoveContainer" containerID="b4a9aded3d0cc7009ad26bfc9e5a2c8a5b9982a163319fbac3956fdf7c2e1009" Feb 17 17:05:01 crc kubenswrapper[4874]: E0217 17:05:01.998090 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4a9aded3d0cc7009ad26bfc9e5a2c8a5b9982a163319fbac3956fdf7c2e1009\": container with ID starting with b4a9aded3d0cc7009ad26bfc9e5a2c8a5b9982a163319fbac3956fdf7c2e1009 not found: ID does not exist" containerID="b4a9aded3d0cc7009ad26bfc9e5a2c8a5b9982a163319fbac3956fdf7c2e1009" Feb 17 17:05:01 crc kubenswrapper[4874]: I0217 17:05:01.998115 4874 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4a9aded3d0cc7009ad26bfc9e5a2c8a5b9982a163319fbac3956fdf7c2e1009"} err="failed to get container status \"b4a9aded3d0cc7009ad26bfc9e5a2c8a5b9982a163319fbac3956fdf7c2e1009\": rpc error: code = NotFound desc = could not find container \"b4a9aded3d0cc7009ad26bfc9e5a2c8a5b9982a163319fbac3956fdf7c2e1009\": container with ID starting with b4a9aded3d0cc7009ad26bfc9e5a2c8a5b9982a163319fbac3956fdf7c2e1009 not found: ID does not exist" Feb 17 17:05:02 crc kubenswrapper[4874]: I0217 17:05:02.474266 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1639d8e3-16a4-4309-b5bc-004ed43db30d" path="/var/lib/kubelet/pods/1639d8e3-16a4-4309-b5bc-004ed43db30d/volumes" Feb 17 17:05:04 crc kubenswrapper[4874]: E0217 17:05:04.460213 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:05:08 crc kubenswrapper[4874]: E0217 17:05:08.460235 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:05:15 crc kubenswrapper[4874]: E0217 17:05:15.462732 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:05:22 crc 
kubenswrapper[4874]: E0217 17:05:22.460186 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:05:29 crc kubenswrapper[4874]: E0217 17:05:29.458956 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:05:34 crc kubenswrapper[4874]: E0217 17:05:34.459489 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:05:41 crc kubenswrapper[4874]: E0217 17:05:41.459791 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:05:46 crc kubenswrapper[4874]: E0217 17:05:46.460355 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" 
Feb 17 17:05:49 crc kubenswrapper[4874]: I0217 17:05:49.501850 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8nvl2"] Feb 17 17:05:49 crc kubenswrapper[4874]: E0217 17:05:49.502892 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1639d8e3-16a4-4309-b5bc-004ed43db30d" containerName="registry-server" Feb 17 17:05:49 crc kubenswrapper[4874]: I0217 17:05:49.502908 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="1639d8e3-16a4-4309-b5bc-004ed43db30d" containerName="registry-server" Feb 17 17:05:49 crc kubenswrapper[4874]: E0217 17:05:49.502940 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1639d8e3-16a4-4309-b5bc-004ed43db30d" containerName="extract-content" Feb 17 17:05:49 crc kubenswrapper[4874]: I0217 17:05:49.502948 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="1639d8e3-16a4-4309-b5bc-004ed43db30d" containerName="extract-content" Feb 17 17:05:49 crc kubenswrapper[4874]: E0217 17:05:49.502982 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1639d8e3-16a4-4309-b5bc-004ed43db30d" containerName="extract-utilities" Feb 17 17:05:49 crc kubenswrapper[4874]: I0217 17:05:49.502990 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="1639d8e3-16a4-4309-b5bc-004ed43db30d" containerName="extract-utilities" Feb 17 17:05:49 crc kubenswrapper[4874]: I0217 17:05:49.503469 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="1639d8e3-16a4-4309-b5bc-004ed43db30d" containerName="registry-server" Feb 17 17:05:49 crc kubenswrapper[4874]: I0217 17:05:49.505554 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8nvl2" Feb 17 17:05:49 crc kubenswrapper[4874]: I0217 17:05:49.519176 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8nvl2"] Feb 17 17:05:49 crc kubenswrapper[4874]: I0217 17:05:49.641523 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j848b\" (UniqueName: \"kubernetes.io/projected/f00a5f42-ecf1-4199-9b25-a34c170abaf4-kube-api-access-j848b\") pod \"redhat-marketplace-8nvl2\" (UID: \"f00a5f42-ecf1-4199-9b25-a34c170abaf4\") " pod="openshift-marketplace/redhat-marketplace-8nvl2" Feb 17 17:05:49 crc kubenswrapper[4874]: I0217 17:05:49.641856 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f00a5f42-ecf1-4199-9b25-a34c170abaf4-utilities\") pod \"redhat-marketplace-8nvl2\" (UID: \"f00a5f42-ecf1-4199-9b25-a34c170abaf4\") " pod="openshift-marketplace/redhat-marketplace-8nvl2" Feb 17 17:05:49 crc kubenswrapper[4874]: I0217 17:05:49.642002 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f00a5f42-ecf1-4199-9b25-a34c170abaf4-catalog-content\") pod \"redhat-marketplace-8nvl2\" (UID: \"f00a5f42-ecf1-4199-9b25-a34c170abaf4\") " pod="openshift-marketplace/redhat-marketplace-8nvl2" Feb 17 17:05:49 crc kubenswrapper[4874]: I0217 17:05:49.743977 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j848b\" (UniqueName: \"kubernetes.io/projected/f00a5f42-ecf1-4199-9b25-a34c170abaf4-kube-api-access-j848b\") pod \"redhat-marketplace-8nvl2\" (UID: \"f00a5f42-ecf1-4199-9b25-a34c170abaf4\") " pod="openshift-marketplace/redhat-marketplace-8nvl2" Feb 17 17:05:49 crc kubenswrapper[4874]: I0217 17:05:49.744044 4874 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f00a5f42-ecf1-4199-9b25-a34c170abaf4-utilities\") pod \"redhat-marketplace-8nvl2\" (UID: \"f00a5f42-ecf1-4199-9b25-a34c170abaf4\") " pod="openshift-marketplace/redhat-marketplace-8nvl2" Feb 17 17:05:49 crc kubenswrapper[4874]: I0217 17:05:49.744171 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f00a5f42-ecf1-4199-9b25-a34c170abaf4-catalog-content\") pod \"redhat-marketplace-8nvl2\" (UID: \"f00a5f42-ecf1-4199-9b25-a34c170abaf4\") " pod="openshift-marketplace/redhat-marketplace-8nvl2" Feb 17 17:05:49 crc kubenswrapper[4874]: I0217 17:05:49.744812 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f00a5f42-ecf1-4199-9b25-a34c170abaf4-utilities\") pod \"redhat-marketplace-8nvl2\" (UID: \"f00a5f42-ecf1-4199-9b25-a34c170abaf4\") " pod="openshift-marketplace/redhat-marketplace-8nvl2" Feb 17 17:05:49 crc kubenswrapper[4874]: I0217 17:05:49.744844 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f00a5f42-ecf1-4199-9b25-a34c170abaf4-catalog-content\") pod \"redhat-marketplace-8nvl2\" (UID: \"f00a5f42-ecf1-4199-9b25-a34c170abaf4\") " pod="openshift-marketplace/redhat-marketplace-8nvl2" Feb 17 17:05:49 crc kubenswrapper[4874]: I0217 17:05:49.765024 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j848b\" (UniqueName: \"kubernetes.io/projected/f00a5f42-ecf1-4199-9b25-a34c170abaf4-kube-api-access-j848b\") pod \"redhat-marketplace-8nvl2\" (UID: \"f00a5f42-ecf1-4199-9b25-a34c170abaf4\") " pod="openshift-marketplace/redhat-marketplace-8nvl2" Feb 17 17:05:49 crc kubenswrapper[4874]: I0217 17:05:49.838450 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8nvl2" Feb 17 17:05:50 crc kubenswrapper[4874]: I0217 17:05:50.367569 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8nvl2"] Feb 17 17:05:50 crc kubenswrapper[4874]: I0217 17:05:50.387238 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8nvl2" event={"ID":"f00a5f42-ecf1-4199-9b25-a34c170abaf4","Type":"ContainerStarted","Data":"063426c7101b307fc92a923a334857b3e6f13a0d4444f43e63c6c8c3303083ff"} Feb 17 17:05:51 crc kubenswrapper[4874]: I0217 17:05:51.403430 4874 generic.go:334] "Generic (PLEG): container finished" podID="f00a5f42-ecf1-4199-9b25-a34c170abaf4" containerID="3220736a1b881150436d07ac849c9e86d3886610aa5d821f4a87c8f094a9f19c" exitCode=0 Feb 17 17:05:51 crc kubenswrapper[4874]: I0217 17:05:51.403745 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8nvl2" event={"ID":"f00a5f42-ecf1-4199-9b25-a34c170abaf4","Type":"ContainerDied","Data":"3220736a1b881150436d07ac849c9e86d3886610aa5d821f4a87c8f094a9f19c"} Feb 17 17:05:52 crc kubenswrapper[4874]: I0217 17:05:52.415188 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8nvl2" event={"ID":"f00a5f42-ecf1-4199-9b25-a34c170abaf4","Type":"ContainerStarted","Data":"90cdc458f0a558e7c00c34d1f1982f9dfd34096ae2e41ddc667f4de5fc715f54"} Feb 17 17:05:53 crc kubenswrapper[4874]: I0217 17:05:53.428023 4874 generic.go:334] "Generic (PLEG): container finished" podID="f00a5f42-ecf1-4199-9b25-a34c170abaf4" containerID="90cdc458f0a558e7c00c34d1f1982f9dfd34096ae2e41ddc667f4de5fc715f54" exitCode=0 Feb 17 17:05:53 crc kubenswrapper[4874]: I0217 17:05:53.428097 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8nvl2" 
event={"ID":"f00a5f42-ecf1-4199-9b25-a34c170abaf4","Type":"ContainerDied","Data":"90cdc458f0a558e7c00c34d1f1982f9dfd34096ae2e41ddc667f4de5fc715f54"} Feb 17 17:05:54 crc kubenswrapper[4874]: I0217 17:05:54.439993 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8nvl2" event={"ID":"f00a5f42-ecf1-4199-9b25-a34c170abaf4","Type":"ContainerStarted","Data":"59e09bbdfdf466ba9dc7b8c4eded49a4a413116627b2486917b82c048c34580f"} Feb 17 17:05:55 crc kubenswrapper[4874]: E0217 17:05:55.459044 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:05:57 crc kubenswrapper[4874]: I0217 17:05:57.724872 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:05:57 crc kubenswrapper[4874]: I0217 17:05:57.725217 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:05:59 crc kubenswrapper[4874]: I0217 17:05:59.839443 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8nvl2" Feb 17 17:05:59 crc kubenswrapper[4874]: I0217 17:05:59.839775 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8nvl2" Feb 17 
17:05:59 crc kubenswrapper[4874]: I0217 17:05:59.895222 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8nvl2" Feb 17 17:05:59 crc kubenswrapper[4874]: I0217 17:05:59.924603 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8nvl2" podStartSLOduration=8.472724039 podStartE2EDuration="10.924580925s" podCreationTimestamp="2026-02-17 17:05:49 +0000 UTC" firstStartedPulling="2026-02-17 17:05:51.406722857 +0000 UTC m=+3761.701111418" lastFinishedPulling="2026-02-17 17:05:53.858579743 +0000 UTC m=+3764.152968304" observedRunningTime="2026-02-17 17:05:54.466599704 +0000 UTC m=+3764.760988265" watchObservedRunningTime="2026-02-17 17:05:59.924580925 +0000 UTC m=+3770.218969486" Feb 17 17:06:00 crc kubenswrapper[4874]: I0217 17:06:00.558995 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8nvl2" Feb 17 17:06:00 crc kubenswrapper[4874]: I0217 17:06:00.620829 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8nvl2"] Feb 17 17:06:01 crc kubenswrapper[4874]: E0217 17:06:01.463558 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:06:02 crc kubenswrapper[4874]: I0217 17:06:02.521887 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8nvl2" podUID="f00a5f42-ecf1-4199-9b25-a34c170abaf4" containerName="registry-server" containerID="cri-o://59e09bbdfdf466ba9dc7b8c4eded49a4a413116627b2486917b82c048c34580f" gracePeriod=2 Feb 17 17:06:03 crc kubenswrapper[4874]: 
I0217 17:06:03.042377 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8nvl2" Feb 17 17:06:03 crc kubenswrapper[4874]: I0217 17:06:03.230772 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f00a5f42-ecf1-4199-9b25-a34c170abaf4-utilities\") pod \"f00a5f42-ecf1-4199-9b25-a34c170abaf4\" (UID: \"f00a5f42-ecf1-4199-9b25-a34c170abaf4\") " Feb 17 17:06:03 crc kubenswrapper[4874]: I0217 17:06:03.230895 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j848b\" (UniqueName: \"kubernetes.io/projected/f00a5f42-ecf1-4199-9b25-a34c170abaf4-kube-api-access-j848b\") pod \"f00a5f42-ecf1-4199-9b25-a34c170abaf4\" (UID: \"f00a5f42-ecf1-4199-9b25-a34c170abaf4\") " Feb 17 17:06:03 crc kubenswrapper[4874]: I0217 17:06:03.231008 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f00a5f42-ecf1-4199-9b25-a34c170abaf4-catalog-content\") pod \"f00a5f42-ecf1-4199-9b25-a34c170abaf4\" (UID: \"f00a5f42-ecf1-4199-9b25-a34c170abaf4\") " Feb 17 17:06:03 crc kubenswrapper[4874]: I0217 17:06:03.233024 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f00a5f42-ecf1-4199-9b25-a34c170abaf4-utilities" (OuterVolumeSpecName: "utilities") pod "f00a5f42-ecf1-4199-9b25-a34c170abaf4" (UID: "f00a5f42-ecf1-4199-9b25-a34c170abaf4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:06:03 crc kubenswrapper[4874]: I0217 17:06:03.237842 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f00a5f42-ecf1-4199-9b25-a34c170abaf4-kube-api-access-j848b" (OuterVolumeSpecName: "kube-api-access-j848b") pod "f00a5f42-ecf1-4199-9b25-a34c170abaf4" (UID: "f00a5f42-ecf1-4199-9b25-a34c170abaf4"). InnerVolumeSpecName "kube-api-access-j848b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:06:03 crc kubenswrapper[4874]: I0217 17:06:03.270813 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f00a5f42-ecf1-4199-9b25-a34c170abaf4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f00a5f42-ecf1-4199-9b25-a34c170abaf4" (UID: "f00a5f42-ecf1-4199-9b25-a34c170abaf4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:06:03 crc kubenswrapper[4874]: I0217 17:06:03.334113 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f00a5f42-ecf1-4199-9b25-a34c170abaf4-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:06:03 crc kubenswrapper[4874]: I0217 17:06:03.334187 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j848b\" (UniqueName: \"kubernetes.io/projected/f00a5f42-ecf1-4199-9b25-a34c170abaf4-kube-api-access-j848b\") on node \"crc\" DevicePath \"\"" Feb 17 17:06:03 crc kubenswrapper[4874]: I0217 17:06:03.334201 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f00a5f42-ecf1-4199-9b25-a34c170abaf4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:06:03 crc kubenswrapper[4874]: I0217 17:06:03.539068 4874 generic.go:334] "Generic (PLEG): container finished" podID="f00a5f42-ecf1-4199-9b25-a34c170abaf4" 
containerID="59e09bbdfdf466ba9dc7b8c4eded49a4a413116627b2486917b82c048c34580f" exitCode=0 Feb 17 17:06:03 crc kubenswrapper[4874]: I0217 17:06:03.539233 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8nvl2" event={"ID":"f00a5f42-ecf1-4199-9b25-a34c170abaf4","Type":"ContainerDied","Data":"59e09bbdfdf466ba9dc7b8c4eded49a4a413116627b2486917b82c048c34580f"} Feb 17 17:06:03 crc kubenswrapper[4874]: I0217 17:06:03.539305 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8nvl2" Feb 17 17:06:03 crc kubenswrapper[4874]: I0217 17:06:03.539681 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8nvl2" event={"ID":"f00a5f42-ecf1-4199-9b25-a34c170abaf4","Type":"ContainerDied","Data":"063426c7101b307fc92a923a334857b3e6f13a0d4444f43e63c6c8c3303083ff"} Feb 17 17:06:03 crc kubenswrapper[4874]: I0217 17:06:03.539728 4874 scope.go:117] "RemoveContainer" containerID="59e09bbdfdf466ba9dc7b8c4eded49a4a413116627b2486917b82c048c34580f" Feb 17 17:06:03 crc kubenswrapper[4874]: I0217 17:06:03.599930 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8nvl2"] Feb 17 17:06:03 crc kubenswrapper[4874]: I0217 17:06:03.609461 4874 scope.go:117] "RemoveContainer" containerID="90cdc458f0a558e7c00c34d1f1982f9dfd34096ae2e41ddc667f4de5fc715f54" Feb 17 17:06:03 crc kubenswrapper[4874]: I0217 17:06:03.615874 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8nvl2"] Feb 17 17:06:03 crc kubenswrapper[4874]: I0217 17:06:03.635211 4874 scope.go:117] "RemoveContainer" containerID="3220736a1b881150436d07ac849c9e86d3886610aa5d821f4a87c8f094a9f19c" Feb 17 17:06:03 crc kubenswrapper[4874]: I0217 17:06:03.704439 4874 scope.go:117] "RemoveContainer" containerID="59e09bbdfdf466ba9dc7b8c4eded49a4a413116627b2486917b82c048c34580f" Feb 17 
17:06:03 crc kubenswrapper[4874]: E0217 17:06:03.705951 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59e09bbdfdf466ba9dc7b8c4eded49a4a413116627b2486917b82c048c34580f\": container with ID starting with 59e09bbdfdf466ba9dc7b8c4eded49a4a413116627b2486917b82c048c34580f not found: ID does not exist" containerID="59e09bbdfdf466ba9dc7b8c4eded49a4a413116627b2486917b82c048c34580f" Feb 17 17:06:03 crc kubenswrapper[4874]: I0217 17:06:03.705994 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59e09bbdfdf466ba9dc7b8c4eded49a4a413116627b2486917b82c048c34580f"} err="failed to get container status \"59e09bbdfdf466ba9dc7b8c4eded49a4a413116627b2486917b82c048c34580f\": rpc error: code = NotFound desc = could not find container \"59e09bbdfdf466ba9dc7b8c4eded49a4a413116627b2486917b82c048c34580f\": container with ID starting with 59e09bbdfdf466ba9dc7b8c4eded49a4a413116627b2486917b82c048c34580f not found: ID does not exist" Feb 17 17:06:03 crc kubenswrapper[4874]: I0217 17:06:03.706026 4874 scope.go:117] "RemoveContainer" containerID="90cdc458f0a558e7c00c34d1f1982f9dfd34096ae2e41ddc667f4de5fc715f54" Feb 17 17:06:03 crc kubenswrapper[4874]: E0217 17:06:03.706627 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90cdc458f0a558e7c00c34d1f1982f9dfd34096ae2e41ddc667f4de5fc715f54\": container with ID starting with 90cdc458f0a558e7c00c34d1f1982f9dfd34096ae2e41ddc667f4de5fc715f54 not found: ID does not exist" containerID="90cdc458f0a558e7c00c34d1f1982f9dfd34096ae2e41ddc667f4de5fc715f54" Feb 17 17:06:03 crc kubenswrapper[4874]: I0217 17:06:03.706709 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90cdc458f0a558e7c00c34d1f1982f9dfd34096ae2e41ddc667f4de5fc715f54"} err="failed to get container status 
\"90cdc458f0a558e7c00c34d1f1982f9dfd34096ae2e41ddc667f4de5fc715f54\": rpc error: code = NotFound desc = could not find container \"90cdc458f0a558e7c00c34d1f1982f9dfd34096ae2e41ddc667f4de5fc715f54\": container with ID starting with 90cdc458f0a558e7c00c34d1f1982f9dfd34096ae2e41ddc667f4de5fc715f54 not found: ID does not exist" Feb 17 17:06:03 crc kubenswrapper[4874]: I0217 17:06:03.706764 4874 scope.go:117] "RemoveContainer" containerID="3220736a1b881150436d07ac849c9e86d3886610aa5d821f4a87c8f094a9f19c" Feb 17 17:06:03 crc kubenswrapper[4874]: E0217 17:06:03.707764 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3220736a1b881150436d07ac849c9e86d3886610aa5d821f4a87c8f094a9f19c\": container with ID starting with 3220736a1b881150436d07ac849c9e86d3886610aa5d821f4a87c8f094a9f19c not found: ID does not exist" containerID="3220736a1b881150436d07ac849c9e86d3886610aa5d821f4a87c8f094a9f19c" Feb 17 17:06:03 crc kubenswrapper[4874]: I0217 17:06:03.707804 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3220736a1b881150436d07ac849c9e86d3886610aa5d821f4a87c8f094a9f19c"} err="failed to get container status \"3220736a1b881150436d07ac849c9e86d3886610aa5d821f4a87c8f094a9f19c\": rpc error: code = NotFound desc = could not find container \"3220736a1b881150436d07ac849c9e86d3886610aa5d821f4a87c8f094a9f19c\": container with ID starting with 3220736a1b881150436d07ac849c9e86d3886610aa5d821f4a87c8f094a9f19c not found: ID does not exist" Feb 17 17:06:04 crc kubenswrapper[4874]: I0217 17:06:04.476005 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f00a5f42-ecf1-4199-9b25-a34c170abaf4" path="/var/lib/kubelet/pods/f00a5f42-ecf1-4199-9b25-a34c170abaf4/volumes" Feb 17 17:06:10 crc kubenswrapper[4874]: E0217 17:06:10.471357 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:06:14 crc kubenswrapper[4874]: E0217 17:06:14.462529 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:06:22 crc kubenswrapper[4874]: E0217 17:06:22.461183 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:06:25 crc kubenswrapper[4874]: E0217 17:06:25.460280 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:06:27 crc kubenswrapper[4874]: I0217 17:06:27.724692 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:06:27 crc kubenswrapper[4874]: I0217 17:06:27.725050 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" 
podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:06:36 crc kubenswrapper[4874]: E0217 17:06:36.459524 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:06:39 crc kubenswrapper[4874]: E0217 17:06:39.460967 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:06:51 crc kubenswrapper[4874]: E0217 17:06:51.458594 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:06:54 crc kubenswrapper[4874]: E0217 17:06:54.459760 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:06:57 crc kubenswrapper[4874]: I0217 17:06:57.724655 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:06:57 crc kubenswrapper[4874]: I0217 17:06:57.725450 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:06:57 crc kubenswrapper[4874]: I0217 17:06:57.725527 4874 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" Feb 17 17:06:57 crc kubenswrapper[4874]: I0217 17:06:57.726422 4874 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"af26f937ef4337232056dc6a25d3e35399d36e90ff25e903f5e98c2da9cfdecc"} pod="openshift-machine-config-operator/machine-config-daemon-cccdg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 17:06:57 crc kubenswrapper[4874]: I0217 17:06:57.726466 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" containerID="cri-o://af26f937ef4337232056dc6a25d3e35399d36e90ff25e903f5e98c2da9cfdecc" gracePeriod=600 Feb 17 17:06:57 crc kubenswrapper[4874]: E0217 17:06:57.852227 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:06:58 crc kubenswrapper[4874]: I0217 17:06:58.190682 4874 generic.go:334] "Generic (PLEG): container finished" podID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerID="af26f937ef4337232056dc6a25d3e35399d36e90ff25e903f5e98c2da9cfdecc" exitCode=0 Feb 17 17:06:58 crc kubenswrapper[4874]: I0217 17:06:58.190779 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerDied","Data":"af26f937ef4337232056dc6a25d3e35399d36e90ff25e903f5e98c2da9cfdecc"} Feb 17 17:06:58 crc kubenswrapper[4874]: I0217 17:06:58.190863 4874 scope.go:117] "RemoveContainer" containerID="4382ec70f731fe685b847e4b318346d4edba77d46bbd94f1ed3eb84b2e0ff786" Feb 17 17:06:58 crc kubenswrapper[4874]: I0217 17:06:58.191661 4874 scope.go:117] "RemoveContainer" containerID="af26f937ef4337232056dc6a25d3e35399d36e90ff25e903f5e98c2da9cfdecc" Feb 17 17:06:58 crc kubenswrapper[4874]: E0217 17:06:58.192306 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:07:04 crc kubenswrapper[4874]: E0217 17:07:04.459143 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:07:06 crc kubenswrapper[4874]: E0217 
17:07:06.461235 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:07:11 crc kubenswrapper[4874]: I0217 17:07:11.457319 4874 scope.go:117] "RemoveContainer" containerID="af26f937ef4337232056dc6a25d3e35399d36e90ff25e903f5e98c2da9cfdecc" Feb 17 17:07:11 crc kubenswrapper[4874]: E0217 17:07:11.457848 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:07:17 crc kubenswrapper[4874]: E0217 17:07:17.459392 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:07:20 crc kubenswrapper[4874]: E0217 17:07:20.465725 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:07:26 crc kubenswrapper[4874]: I0217 17:07:26.458773 4874 scope.go:117] "RemoveContainer" 
containerID="af26f937ef4337232056dc6a25d3e35399d36e90ff25e903f5e98c2da9cfdecc" Feb 17 17:07:26 crc kubenswrapper[4874]: E0217 17:07:26.459606 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:07:31 crc kubenswrapper[4874]: E0217 17:07:31.460633 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:07:31 crc kubenswrapper[4874]: E0217 17:07:31.460818 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:07:37 crc kubenswrapper[4874]: I0217 17:07:37.458208 4874 scope.go:117] "RemoveContainer" containerID="af26f937ef4337232056dc6a25d3e35399d36e90ff25e903f5e98c2da9cfdecc" Feb 17 17:07:37 crc kubenswrapper[4874]: E0217 17:07:37.459066 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:07:42 crc kubenswrapper[4874]: E0217 17:07:42.462442 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:07:44 crc kubenswrapper[4874]: E0217 17:07:44.461374 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:07:51 crc kubenswrapper[4874]: I0217 17:07:51.457183 4874 scope.go:117] "RemoveContainer" containerID="af26f937ef4337232056dc6a25d3e35399d36e90ff25e903f5e98c2da9cfdecc" Feb 17 17:07:51 crc kubenswrapper[4874]: E0217 17:07:51.457955 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:07:54 crc kubenswrapper[4874]: E0217 17:07:54.463624 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" 
Feb 17 17:07:58 crc kubenswrapper[4874]: E0217 17:07:58.461251 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:08:00 crc kubenswrapper[4874]: I0217 17:08:00.929164 4874 generic.go:334] "Generic (PLEG): container finished" podID="d7c983ae-0062-4104-b0b7-ee35f90aa93d" containerID="9c1b372ed59761df69b6aa8cbf7874cbb7e7e1888157197797d5efb50a3c1f7e" exitCode=2 Feb 17 17:08:00 crc kubenswrapper[4874]: I0217 17:08:00.929223 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xr64p" event={"ID":"d7c983ae-0062-4104-b0b7-ee35f90aa93d","Type":"ContainerDied","Data":"9c1b372ed59761df69b6aa8cbf7874cbb7e7e1888157197797d5efb50a3c1f7e"} Feb 17 17:08:02 crc kubenswrapper[4874]: I0217 17:08:02.649115 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xr64p" Feb 17 17:08:02 crc kubenswrapper[4874]: I0217 17:08:02.837840 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7c983ae-0062-4104-b0b7-ee35f90aa93d-inventory\") pod \"d7c983ae-0062-4104-b0b7-ee35f90aa93d\" (UID: \"d7c983ae-0062-4104-b0b7-ee35f90aa93d\") " Feb 17 17:08:02 crc kubenswrapper[4874]: I0217 17:08:02.837912 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkvch\" (UniqueName: \"kubernetes.io/projected/d7c983ae-0062-4104-b0b7-ee35f90aa93d-kube-api-access-nkvch\") pod \"d7c983ae-0062-4104-b0b7-ee35f90aa93d\" (UID: \"d7c983ae-0062-4104-b0b7-ee35f90aa93d\") " Feb 17 17:08:02 crc kubenswrapper[4874]: I0217 17:08:02.838036 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d7c983ae-0062-4104-b0b7-ee35f90aa93d-ssh-key-openstack-edpm-ipam\") pod \"d7c983ae-0062-4104-b0b7-ee35f90aa93d\" (UID: \"d7c983ae-0062-4104-b0b7-ee35f90aa93d\") " Feb 17 17:08:02 crc kubenswrapper[4874]: I0217 17:08:02.844889 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7c983ae-0062-4104-b0b7-ee35f90aa93d-kube-api-access-nkvch" (OuterVolumeSpecName: "kube-api-access-nkvch") pod "d7c983ae-0062-4104-b0b7-ee35f90aa93d" (UID: "d7c983ae-0062-4104-b0b7-ee35f90aa93d"). InnerVolumeSpecName "kube-api-access-nkvch". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:08:02 crc kubenswrapper[4874]: I0217 17:08:02.876665 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7c983ae-0062-4104-b0b7-ee35f90aa93d-inventory" (OuterVolumeSpecName: "inventory") pod "d7c983ae-0062-4104-b0b7-ee35f90aa93d" (UID: "d7c983ae-0062-4104-b0b7-ee35f90aa93d"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:08:02 crc kubenswrapper[4874]: I0217 17:08:02.877682 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7c983ae-0062-4104-b0b7-ee35f90aa93d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d7c983ae-0062-4104-b0b7-ee35f90aa93d" (UID: "d7c983ae-0062-4104-b0b7-ee35f90aa93d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:08:02 crc kubenswrapper[4874]: I0217 17:08:02.942684 4874 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7c983ae-0062-4104-b0b7-ee35f90aa93d-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 17:08:02 crc kubenswrapper[4874]: I0217 17:08:02.942728 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkvch\" (UniqueName: \"kubernetes.io/projected/d7c983ae-0062-4104-b0b7-ee35f90aa93d-kube-api-access-nkvch\") on node \"crc\" DevicePath \"\"" Feb 17 17:08:02 crc kubenswrapper[4874]: I0217 17:08:02.942739 4874 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d7c983ae-0062-4104-b0b7-ee35f90aa93d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 17:08:02 crc kubenswrapper[4874]: I0217 17:08:02.950315 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xr64p" event={"ID":"d7c983ae-0062-4104-b0b7-ee35f90aa93d","Type":"ContainerDied","Data":"9074e7a84e448946cd7d2e573a259311c5b5c71d5be45de625784495b94085e7"} Feb 17 17:08:02 crc kubenswrapper[4874]: I0217 17:08:02.950380 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9074e7a84e448946cd7d2e573a259311c5b5c71d5be45de625784495b94085e7" Feb 17 17:08:02 crc kubenswrapper[4874]: I0217 
17:08:02.950344 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xr64p" Feb 17 17:08:05 crc kubenswrapper[4874]: I0217 17:08:05.457870 4874 scope.go:117] "RemoveContainer" containerID="af26f937ef4337232056dc6a25d3e35399d36e90ff25e903f5e98c2da9cfdecc" Feb 17 17:08:05 crc kubenswrapper[4874]: E0217 17:08:05.458679 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:08:06 crc kubenswrapper[4874]: E0217 17:08:06.460227 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:08:09 crc kubenswrapper[4874]: E0217 17:08:09.459664 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:08:20 crc kubenswrapper[4874]: I0217 17:08:20.466576 4874 scope.go:117] "RemoveContainer" containerID="af26f937ef4337232056dc6a25d3e35399d36e90ff25e903f5e98c2da9cfdecc" Feb 17 17:08:20 crc kubenswrapper[4874]: E0217 17:08:20.467680 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:08:20 crc kubenswrapper[4874]: E0217 17:08:20.470124 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:08:24 crc kubenswrapper[4874]: E0217 17:08:24.459783 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:08:33 crc kubenswrapper[4874]: E0217 17:08:33.461706 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:08:34 crc kubenswrapper[4874]: I0217 17:08:34.457270 4874 scope.go:117] "RemoveContainer" containerID="af26f937ef4337232056dc6a25d3e35399d36e90ff25e903f5e98c2da9cfdecc" Feb 17 17:08:34 crc kubenswrapper[4874]: E0217 17:08:34.457918 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:08:39 crc kubenswrapper[4874]: E0217 17:08:39.459770 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:08:44 crc kubenswrapper[4874]: E0217 17:08:44.459304 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:08:47 crc kubenswrapper[4874]: I0217 17:08:47.458438 4874 scope.go:117] "RemoveContainer" containerID="af26f937ef4337232056dc6a25d3e35399d36e90ff25e903f5e98c2da9cfdecc" Feb 17 17:08:47 crc kubenswrapper[4874]: E0217 17:08:47.458938 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:08:51 crc kubenswrapper[4874]: E0217 17:08:51.460252 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:08:59 crc kubenswrapper[4874]: I0217 17:08:59.457408 4874 scope.go:117] "RemoveContainer" containerID="af26f937ef4337232056dc6a25d3e35399d36e90ff25e903f5e98c2da9cfdecc" Feb 17 17:08:59 crc kubenswrapper[4874]: E0217 17:08:59.458297 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:08:59 crc kubenswrapper[4874]: E0217 17:08:59.459156 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:09:02 crc kubenswrapper[4874]: E0217 17:09:02.459847 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:09:14 crc kubenswrapper[4874]: I0217 17:09:14.458969 4874 scope.go:117] "RemoveContainer" containerID="af26f937ef4337232056dc6a25d3e35399d36e90ff25e903f5e98c2da9cfdecc" Feb 17 17:09:14 crc kubenswrapper[4874]: E0217 17:09:14.459606 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:09:14 crc kubenswrapper[4874]: E0217 17:09:14.459683 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:09:17 crc kubenswrapper[4874]: E0217 17:09:17.459576 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:09:26 crc kubenswrapper[4874]: I0217 17:09:26.459548 4874 scope.go:117] "RemoveContainer" containerID="af26f937ef4337232056dc6a25d3e35399d36e90ff25e903f5e98c2da9cfdecc" Feb 17 17:09:26 crc kubenswrapper[4874]: E0217 17:09:26.460866 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:09:26 crc kubenswrapper[4874]: E0217 17:09:26.461362 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" 
with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:09:31 crc kubenswrapper[4874]: E0217 17:09:31.459615 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:09:37 crc kubenswrapper[4874]: I0217 17:09:37.113814 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pdzsk"] Feb 17 17:09:37 crc kubenswrapper[4874]: E0217 17:09:37.115285 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f00a5f42-ecf1-4199-9b25-a34c170abaf4" containerName="extract-utilities" Feb 17 17:09:37 crc kubenswrapper[4874]: I0217 17:09:37.115308 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="f00a5f42-ecf1-4199-9b25-a34c170abaf4" containerName="extract-utilities" Feb 17 17:09:37 crc kubenswrapper[4874]: E0217 17:09:37.115328 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f00a5f42-ecf1-4199-9b25-a34c170abaf4" containerName="registry-server" Feb 17 17:09:37 crc kubenswrapper[4874]: I0217 17:09:37.115340 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="f00a5f42-ecf1-4199-9b25-a34c170abaf4" containerName="registry-server" Feb 17 17:09:37 crc kubenswrapper[4874]: E0217 17:09:37.115398 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7c983ae-0062-4104-b0b7-ee35f90aa93d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:09:37 crc kubenswrapper[4874]: I0217 17:09:37.115416 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7c983ae-0062-4104-b0b7-ee35f90aa93d" 
containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:09:37 crc kubenswrapper[4874]: E0217 17:09:37.115436 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f00a5f42-ecf1-4199-9b25-a34c170abaf4" containerName="extract-content" Feb 17 17:09:37 crc kubenswrapper[4874]: I0217 17:09:37.115447 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="f00a5f42-ecf1-4199-9b25-a34c170abaf4" containerName="extract-content" Feb 17 17:09:37 crc kubenswrapper[4874]: I0217 17:09:37.115872 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7c983ae-0062-4104-b0b7-ee35f90aa93d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:09:37 crc kubenswrapper[4874]: I0217 17:09:37.115907 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="f00a5f42-ecf1-4199-9b25-a34c170abaf4" containerName="registry-server" Feb 17 17:09:37 crc kubenswrapper[4874]: I0217 17:09:37.118999 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pdzsk" Feb 17 17:09:37 crc kubenswrapper[4874]: I0217 17:09:37.153409 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pdzsk"] Feb 17 17:09:37 crc kubenswrapper[4874]: I0217 17:09:37.182148 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42ec4c46-de35-4de3-b979-95fa51be2062-utilities\") pod \"certified-operators-pdzsk\" (UID: \"42ec4c46-de35-4de3-b979-95fa51be2062\") " pod="openshift-marketplace/certified-operators-pdzsk" Feb 17 17:09:37 crc kubenswrapper[4874]: I0217 17:09:37.182287 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42ec4c46-de35-4de3-b979-95fa51be2062-catalog-content\") pod \"certified-operators-pdzsk\" (UID: \"42ec4c46-de35-4de3-b979-95fa51be2062\") " pod="openshift-marketplace/certified-operators-pdzsk" Feb 17 17:09:37 crc kubenswrapper[4874]: I0217 17:09:37.182334 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk5sl\" (UniqueName: \"kubernetes.io/projected/42ec4c46-de35-4de3-b979-95fa51be2062-kube-api-access-gk5sl\") pod \"certified-operators-pdzsk\" (UID: \"42ec4c46-de35-4de3-b979-95fa51be2062\") " pod="openshift-marketplace/certified-operators-pdzsk" Feb 17 17:09:37 crc kubenswrapper[4874]: I0217 17:09:37.284504 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42ec4c46-de35-4de3-b979-95fa51be2062-catalog-content\") pod \"certified-operators-pdzsk\" (UID: \"42ec4c46-de35-4de3-b979-95fa51be2062\") " pod="openshift-marketplace/certified-operators-pdzsk" Feb 17 17:09:37 crc kubenswrapper[4874]: I0217 17:09:37.284573 4874 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gk5sl\" (UniqueName: \"kubernetes.io/projected/42ec4c46-de35-4de3-b979-95fa51be2062-kube-api-access-gk5sl\") pod \"certified-operators-pdzsk\" (UID: \"42ec4c46-de35-4de3-b979-95fa51be2062\") " pod="openshift-marketplace/certified-operators-pdzsk" Feb 17 17:09:37 crc kubenswrapper[4874]: I0217 17:09:37.284743 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42ec4c46-de35-4de3-b979-95fa51be2062-utilities\") pod \"certified-operators-pdzsk\" (UID: \"42ec4c46-de35-4de3-b979-95fa51be2062\") " pod="openshift-marketplace/certified-operators-pdzsk" Feb 17 17:09:37 crc kubenswrapper[4874]: I0217 17:09:37.285119 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42ec4c46-de35-4de3-b979-95fa51be2062-catalog-content\") pod \"certified-operators-pdzsk\" (UID: \"42ec4c46-de35-4de3-b979-95fa51be2062\") " pod="openshift-marketplace/certified-operators-pdzsk" Feb 17 17:09:37 crc kubenswrapper[4874]: I0217 17:09:37.285226 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42ec4c46-de35-4de3-b979-95fa51be2062-utilities\") pod \"certified-operators-pdzsk\" (UID: \"42ec4c46-de35-4de3-b979-95fa51be2062\") " pod="openshift-marketplace/certified-operators-pdzsk" Feb 17 17:09:37 crc kubenswrapper[4874]: I0217 17:09:37.308312 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk5sl\" (UniqueName: \"kubernetes.io/projected/42ec4c46-de35-4de3-b979-95fa51be2062-kube-api-access-gk5sl\") pod \"certified-operators-pdzsk\" (UID: \"42ec4c46-de35-4de3-b979-95fa51be2062\") " pod="openshift-marketplace/certified-operators-pdzsk" Feb 17 17:09:37 crc kubenswrapper[4874]: I0217 17:09:37.441867 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pdzsk" Feb 17 17:09:38 crc kubenswrapper[4874]: I0217 17:09:38.008605 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pdzsk"] Feb 17 17:09:38 crc kubenswrapper[4874]: I0217 17:09:38.465102 4874 scope.go:117] "RemoveContainer" containerID="af26f937ef4337232056dc6a25d3e35399d36e90ff25e903f5e98c2da9cfdecc" Feb 17 17:09:38 crc kubenswrapper[4874]: E0217 17:09:38.465720 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:09:38 crc kubenswrapper[4874]: I0217 17:09:38.972898 4874 generic.go:334] "Generic (PLEG): container finished" podID="42ec4c46-de35-4de3-b979-95fa51be2062" containerID="96f55cd80a8d2c2ae0ade72e42fccc20b678966172bae5b9d1288e4f15b43b17" exitCode=0 Feb 17 17:09:38 crc kubenswrapper[4874]: I0217 17:09:38.972944 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdzsk" event={"ID":"42ec4c46-de35-4de3-b979-95fa51be2062","Type":"ContainerDied","Data":"96f55cd80a8d2c2ae0ade72e42fccc20b678966172bae5b9d1288e4f15b43b17"} Feb 17 17:09:38 crc kubenswrapper[4874]: I0217 17:09:38.972973 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdzsk" event={"ID":"42ec4c46-de35-4de3-b979-95fa51be2062","Type":"ContainerStarted","Data":"17c4535e195fd8a356eadb2cca9029dc9005d09d45dc25ab76a6137eca8841b6"} Feb 17 17:09:39 crc kubenswrapper[4874]: E0217 17:09:39.458847 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:09:39 crc kubenswrapper[4874]: I0217 17:09:39.983616 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdzsk" event={"ID":"42ec4c46-de35-4de3-b979-95fa51be2062","Type":"ContainerStarted","Data":"7ba0f9d1978d7131f6f8c4101c8b1f547968547e8bd668f473d0d32eb7ab56a9"} Feb 17 17:09:42 crc kubenswrapper[4874]: I0217 17:09:42.004207 4874 generic.go:334] "Generic (PLEG): container finished" podID="42ec4c46-de35-4de3-b979-95fa51be2062" containerID="7ba0f9d1978d7131f6f8c4101c8b1f547968547e8bd668f473d0d32eb7ab56a9" exitCode=0 Feb 17 17:09:42 crc kubenswrapper[4874]: I0217 17:09:42.004270 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdzsk" event={"ID":"42ec4c46-de35-4de3-b979-95fa51be2062","Type":"ContainerDied","Data":"7ba0f9d1978d7131f6f8c4101c8b1f547968547e8bd668f473d0d32eb7ab56a9"} Feb 17 17:09:42 crc kubenswrapper[4874]: I0217 17:09:42.007888 4874 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 17:09:44 crc kubenswrapper[4874]: I0217 17:09:44.030458 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdzsk" event={"ID":"42ec4c46-de35-4de3-b979-95fa51be2062","Type":"ContainerStarted","Data":"1e776a6b2479a83db921657fb3fcd42f6b6d24385c5597ba8802b70192a305a3"} Feb 17 17:09:44 crc kubenswrapper[4874]: I0217 17:09:44.065836 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pdzsk" podStartSLOduration=3.5527428739999998 podStartE2EDuration="7.065809123s" podCreationTimestamp="2026-02-17 17:09:37 +0000 UTC" firstStartedPulling="2026-02-17 
17:09:38.975383059 +0000 UTC m=+3989.269771620" lastFinishedPulling="2026-02-17 17:09:42.488449308 +0000 UTC m=+3992.782837869" observedRunningTime="2026-02-17 17:09:44.054235906 +0000 UTC m=+3994.348624457" watchObservedRunningTime="2026-02-17 17:09:44.065809123 +0000 UTC m=+3994.360197694" Feb 17 17:09:45 crc kubenswrapper[4874]: E0217 17:09:45.461072 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:09:47 crc kubenswrapper[4874]: I0217 17:09:47.442154 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pdzsk" Feb 17 17:09:47 crc kubenswrapper[4874]: I0217 17:09:47.444904 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pdzsk" Feb 17 17:09:47 crc kubenswrapper[4874]: I0217 17:09:47.491015 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pdzsk" Feb 17 17:09:48 crc kubenswrapper[4874]: I0217 17:09:48.208569 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pdzsk" Feb 17 17:09:49 crc kubenswrapper[4874]: I0217 17:09:49.301770 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pdzsk"] Feb 17 17:09:51 crc kubenswrapper[4874]: I0217 17:09:51.095752 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pdzsk" podUID="42ec4c46-de35-4de3-b979-95fa51be2062" containerName="registry-server" containerID="cri-o://1e776a6b2479a83db921657fb3fcd42f6b6d24385c5597ba8802b70192a305a3" gracePeriod=2 Feb 
17 17:09:52 crc kubenswrapper[4874]: I0217 17:09:52.108013 4874 generic.go:334] "Generic (PLEG): container finished" podID="42ec4c46-de35-4de3-b979-95fa51be2062" containerID="1e776a6b2479a83db921657fb3fcd42f6b6d24385c5597ba8802b70192a305a3" exitCode=0 Feb 17 17:09:52 crc kubenswrapper[4874]: I0217 17:09:52.108101 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdzsk" event={"ID":"42ec4c46-de35-4de3-b979-95fa51be2062","Type":"ContainerDied","Data":"1e776a6b2479a83db921657fb3fcd42f6b6d24385c5597ba8802b70192a305a3"} Feb 17 17:09:52 crc kubenswrapper[4874]: I0217 17:09:52.108466 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdzsk" event={"ID":"42ec4c46-de35-4de3-b979-95fa51be2062","Type":"ContainerDied","Data":"17c4535e195fd8a356eadb2cca9029dc9005d09d45dc25ab76a6137eca8841b6"} Feb 17 17:09:52 crc kubenswrapper[4874]: I0217 17:09:52.108478 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17c4535e195fd8a356eadb2cca9029dc9005d09d45dc25ab76a6137eca8841b6" Feb 17 17:09:52 crc kubenswrapper[4874]: I0217 17:09:52.520894 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pdzsk" Feb 17 17:09:52 crc kubenswrapper[4874]: E0217 17:09:52.600272 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:09:52 crc kubenswrapper[4874]: E0217 17:09:52.600346 4874 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:09:52 crc kubenswrapper[4874]: E0217 17:09:52.600846 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n699h646hf7h59dhdch5d8h679h9dhdch5c7hd5h5bch655h5bfh674h596h5d6h64dh65bh694h67fh66dh5bdhd7h568h697h58bh5b4h59fh694h584h656q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubP
ath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6zkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(cc29c300-b515-47d8-9326-1839ed7772b4): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 17 17:09:52 crc kubenswrapper[4874]: E0217 17:09:52.602114 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:09:52 crc kubenswrapper[4874]: I0217 17:09:52.669842 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42ec4c46-de35-4de3-b979-95fa51be2062-catalog-content\") pod \"42ec4c46-de35-4de3-b979-95fa51be2062\" (UID: \"42ec4c46-de35-4de3-b979-95fa51be2062\") " Feb 17 17:09:52 crc kubenswrapper[4874]: I0217 17:09:52.669992 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42ec4c46-de35-4de3-b979-95fa51be2062-utilities\") pod \"42ec4c46-de35-4de3-b979-95fa51be2062\" (UID: \"42ec4c46-de35-4de3-b979-95fa51be2062\") " Feb 17 17:09:52 crc kubenswrapper[4874]: I0217 17:09:52.670036 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gk5sl\" (UniqueName: \"kubernetes.io/projected/42ec4c46-de35-4de3-b979-95fa51be2062-kube-api-access-gk5sl\") pod \"42ec4c46-de35-4de3-b979-95fa51be2062\" (UID: \"42ec4c46-de35-4de3-b979-95fa51be2062\") " Feb 17 17:09:52 crc kubenswrapper[4874]: I0217 17:09:52.671382 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42ec4c46-de35-4de3-b979-95fa51be2062-utilities" (OuterVolumeSpecName: "utilities") pod 
"42ec4c46-de35-4de3-b979-95fa51be2062" (UID: "42ec4c46-de35-4de3-b979-95fa51be2062"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:09:52 crc kubenswrapper[4874]: I0217 17:09:52.680005 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42ec4c46-de35-4de3-b979-95fa51be2062-kube-api-access-gk5sl" (OuterVolumeSpecName: "kube-api-access-gk5sl") pod "42ec4c46-de35-4de3-b979-95fa51be2062" (UID: "42ec4c46-de35-4de3-b979-95fa51be2062"). InnerVolumeSpecName "kube-api-access-gk5sl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:09:52 crc kubenswrapper[4874]: I0217 17:09:52.719034 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42ec4c46-de35-4de3-b979-95fa51be2062-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42ec4c46-de35-4de3-b979-95fa51be2062" (UID: "42ec4c46-de35-4de3-b979-95fa51be2062"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:09:52 crc kubenswrapper[4874]: I0217 17:09:52.772592 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42ec4c46-de35-4de3-b979-95fa51be2062-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:09:52 crc kubenswrapper[4874]: I0217 17:09:52.772811 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42ec4c46-de35-4de3-b979-95fa51be2062-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:09:52 crc kubenswrapper[4874]: I0217 17:09:52.772890 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gk5sl\" (UniqueName: \"kubernetes.io/projected/42ec4c46-de35-4de3-b979-95fa51be2062-kube-api-access-gk5sl\") on node \"crc\" DevicePath \"\"" Feb 17 17:09:53 crc kubenswrapper[4874]: I0217 17:09:53.117780 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pdzsk" Feb 17 17:09:53 crc kubenswrapper[4874]: I0217 17:09:53.152471 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pdzsk"] Feb 17 17:09:53 crc kubenswrapper[4874]: I0217 17:09:53.161459 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pdzsk"] Feb 17 17:09:53 crc kubenswrapper[4874]: I0217 17:09:53.457559 4874 scope.go:117] "RemoveContainer" containerID="af26f937ef4337232056dc6a25d3e35399d36e90ff25e903f5e98c2da9cfdecc" Feb 17 17:09:53 crc kubenswrapper[4874]: E0217 17:09:53.457953 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:09:54 crc kubenswrapper[4874]: I0217 17:09:54.470455 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42ec4c46-de35-4de3-b979-95fa51be2062" path="/var/lib/kubelet/pods/42ec4c46-de35-4de3-b979-95fa51be2062/volumes" Feb 17 17:09:58 crc kubenswrapper[4874]: E0217 17:09:58.599045 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 17:09:58 crc kubenswrapper[4874]: E0217 17:09:58.599668 4874 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 17:09:58 crc kubenswrapper[4874]: E0217 17:09:58.599815 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtgnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-ddhb8_openstack(122736d5-78f5-42dc-b6ab-343724bac19d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:09:58 crc kubenswrapper[4874]: E0217 17:09:58.601140 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:10:04 crc kubenswrapper[4874]: E0217 17:10:04.459489 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:10:05 crc kubenswrapper[4874]: I0217 17:10:05.457558 4874 scope.go:117] "RemoveContainer" containerID="af26f937ef4337232056dc6a25d3e35399d36e90ff25e903f5e98c2da9cfdecc" Feb 17 17:10:05 crc kubenswrapper[4874]: E0217 17:10:05.458383 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:10:13 crc kubenswrapper[4874]: E0217 17:10:13.459154 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:10:15 crc kubenswrapper[4874]: E0217 17:10:15.460531 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:10:20 crc kubenswrapper[4874]: I0217 17:10:20.473313 4874 scope.go:117] "RemoveContainer" containerID="af26f937ef4337232056dc6a25d3e35399d36e90ff25e903f5e98c2da9cfdecc" Feb 17 17:10:20 crc kubenswrapper[4874]: E0217 17:10:20.474035 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:10:25 crc kubenswrapper[4874]: E0217 17:10:25.459665 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:10:30 crc kubenswrapper[4874]: E0217 17:10:30.462781 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:10:35 crc kubenswrapper[4874]: I0217 17:10:35.458273 4874 scope.go:117] "RemoveContainer" containerID="af26f937ef4337232056dc6a25d3e35399d36e90ff25e903f5e98c2da9cfdecc" Feb 17 17:10:35 crc kubenswrapper[4874]: E0217 17:10:35.459183 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:10:40 crc kubenswrapper[4874]: I0217 17:10:40.033506 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bhxk9"] Feb 17 17:10:40 crc kubenswrapper[4874]: E0217 17:10:40.036065 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42ec4c46-de35-4de3-b979-95fa51be2062" containerName="extract-utilities" Feb 17 17:10:40 crc kubenswrapper[4874]: I0217 17:10:40.036236 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="42ec4c46-de35-4de3-b979-95fa51be2062" containerName="extract-utilities" Feb 17 17:10:40 crc kubenswrapper[4874]: E0217 17:10:40.036829 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42ec4c46-de35-4de3-b979-95fa51be2062" containerName="extract-content" Feb 17 17:10:40 crc kubenswrapper[4874]: I0217 17:10:40.036916 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="42ec4c46-de35-4de3-b979-95fa51be2062" containerName="extract-content" Feb 17 17:10:40 crc kubenswrapper[4874]: E0217 17:10:40.037038 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42ec4c46-de35-4de3-b979-95fa51be2062" containerName="registry-server" Feb 17 17:10:40 crc kubenswrapper[4874]: I0217 17:10:40.037193 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="42ec4c46-de35-4de3-b979-95fa51be2062" containerName="registry-server" Feb 17 17:10:40 crc kubenswrapper[4874]: I0217 17:10:40.037661 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="42ec4c46-de35-4de3-b979-95fa51be2062" containerName="registry-server" Feb 17 17:10:40 crc kubenswrapper[4874]: I0217 17:10:40.038771 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bhxk9" Feb 17 17:10:40 crc kubenswrapper[4874]: I0217 17:10:40.041458 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 17:10:40 crc kubenswrapper[4874]: I0217 17:10:40.041507 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dsqj4" Feb 17 17:10:40 crc kubenswrapper[4874]: I0217 17:10:40.042209 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 17:10:40 crc kubenswrapper[4874]: I0217 17:10:40.042608 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 17:10:40 crc kubenswrapper[4874]: I0217 17:10:40.052482 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bhxk9"] Feb 17 17:10:40 crc kubenswrapper[4874]: I0217 17:10:40.125723 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7489caf0-d625-4d40-829f-34558a80ad7a-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bhxk9\" (UID: \"7489caf0-d625-4d40-829f-34558a80ad7a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bhxk9" Feb 17 17:10:40 crc kubenswrapper[4874]: I0217 17:10:40.125958 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnrm2\" (UniqueName: \"kubernetes.io/projected/7489caf0-d625-4d40-829f-34558a80ad7a-kube-api-access-cnrm2\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bhxk9\" (UID: \"7489caf0-d625-4d40-829f-34558a80ad7a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bhxk9" Feb 17 17:10:40 crc 
kubenswrapper[4874]: I0217 17:10:40.126051 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7489caf0-d625-4d40-829f-34558a80ad7a-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bhxk9\" (UID: \"7489caf0-d625-4d40-829f-34558a80ad7a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bhxk9" Feb 17 17:10:40 crc kubenswrapper[4874]: I0217 17:10:40.227853 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnrm2\" (UniqueName: \"kubernetes.io/projected/7489caf0-d625-4d40-829f-34558a80ad7a-kube-api-access-cnrm2\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bhxk9\" (UID: \"7489caf0-d625-4d40-829f-34558a80ad7a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bhxk9" Feb 17 17:10:40 crc kubenswrapper[4874]: I0217 17:10:40.227961 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7489caf0-d625-4d40-829f-34558a80ad7a-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bhxk9\" (UID: \"7489caf0-d625-4d40-829f-34558a80ad7a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bhxk9" Feb 17 17:10:40 crc kubenswrapper[4874]: I0217 17:10:40.228055 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7489caf0-d625-4d40-829f-34558a80ad7a-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bhxk9\" (UID: \"7489caf0-d625-4d40-829f-34558a80ad7a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bhxk9" Feb 17 17:10:40 crc kubenswrapper[4874]: I0217 17:10:40.235906 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/7489caf0-d625-4d40-829f-34558a80ad7a-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bhxk9\" (UID: \"7489caf0-d625-4d40-829f-34558a80ad7a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bhxk9" Feb 17 17:10:40 crc kubenswrapper[4874]: I0217 17:10:40.244111 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7489caf0-d625-4d40-829f-34558a80ad7a-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bhxk9\" (UID: \"7489caf0-d625-4d40-829f-34558a80ad7a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bhxk9" Feb 17 17:10:40 crc kubenswrapper[4874]: I0217 17:10:40.269724 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnrm2\" (UniqueName: \"kubernetes.io/projected/7489caf0-d625-4d40-829f-34558a80ad7a-kube-api-access-cnrm2\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bhxk9\" (UID: \"7489caf0-d625-4d40-829f-34558a80ad7a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bhxk9" Feb 17 17:10:40 crc kubenswrapper[4874]: I0217 17:10:40.374938 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bhxk9" Feb 17 17:10:40 crc kubenswrapper[4874]: E0217 17:10:40.475273 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:10:41 crc kubenswrapper[4874]: I0217 17:10:41.106401 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bhxk9"] Feb 17 17:10:41 crc kubenswrapper[4874]: W0217 17:10:41.109220 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7489caf0_d625_4d40_829f_34558a80ad7a.slice/crio-2c8e05e87ef67166dace8185c31392a2fdadc34cefc669adfb7a97bd6dde388c WatchSource:0}: Error finding container 2c8e05e87ef67166dace8185c31392a2fdadc34cefc669adfb7a97bd6dde388c: Status 404 returned error can't find the container with id 2c8e05e87ef67166dace8185c31392a2fdadc34cefc669adfb7a97bd6dde388c Feb 17 17:10:41 crc kubenswrapper[4874]: I0217 17:10:41.149509 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bhxk9" event={"ID":"7489caf0-d625-4d40-829f-34558a80ad7a","Type":"ContainerStarted","Data":"2c8e05e87ef67166dace8185c31392a2fdadc34cefc669adfb7a97bd6dde388c"} Feb 17 17:10:41 crc kubenswrapper[4874]: E0217 17:10:41.458857 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:10:42 crc 
kubenswrapper[4874]: I0217 17:10:42.159777 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bhxk9" event={"ID":"7489caf0-d625-4d40-829f-34558a80ad7a","Type":"ContainerStarted","Data":"17d661093107fd53a1fe08533af7dc3229a52d2c22a31ed23c3233772a0e0422"} Feb 17 17:10:42 crc kubenswrapper[4874]: I0217 17:10:42.179362 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bhxk9" podStartSLOduration=1.774581737 podStartE2EDuration="2.179345957s" podCreationTimestamp="2026-02-17 17:10:40 +0000 UTC" firstStartedPulling="2026-02-17 17:10:41.111277885 +0000 UTC m=+4051.405666446" lastFinishedPulling="2026-02-17 17:10:41.516042105 +0000 UTC m=+4051.810430666" observedRunningTime="2026-02-17 17:10:42.174370443 +0000 UTC m=+4052.468759004" watchObservedRunningTime="2026-02-17 17:10:42.179345957 +0000 UTC m=+4052.473734518" Feb 17 17:10:49 crc kubenswrapper[4874]: I0217 17:10:49.457052 4874 scope.go:117] "RemoveContainer" containerID="af26f937ef4337232056dc6a25d3e35399d36e90ff25e903f5e98c2da9cfdecc" Feb 17 17:10:49 crc kubenswrapper[4874]: E0217 17:10:49.458172 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:10:53 crc kubenswrapper[4874]: E0217 17:10:53.461281 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" 
podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:10:54 crc kubenswrapper[4874]: E0217 17:10:54.459768 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:11:04 crc kubenswrapper[4874]: I0217 17:11:04.457237 4874 scope.go:117] "RemoveContainer" containerID="af26f937ef4337232056dc6a25d3e35399d36e90ff25e903f5e98c2da9cfdecc" Feb 17 17:11:04 crc kubenswrapper[4874]: E0217 17:11:04.457989 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:11:05 crc kubenswrapper[4874]: E0217 17:11:05.459396 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:11:08 crc kubenswrapper[4874]: E0217 17:11:08.459599 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:11:16 crc kubenswrapper[4874]: I0217 17:11:16.458371 4874 
scope.go:117] "RemoveContainer" containerID="af26f937ef4337232056dc6a25d3e35399d36e90ff25e903f5e98c2da9cfdecc" Feb 17 17:11:16 crc kubenswrapper[4874]: E0217 17:11:16.459381 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:11:20 crc kubenswrapper[4874]: E0217 17:11:20.466877 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:11:23 crc kubenswrapper[4874]: E0217 17:11:23.459493 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:11:28 crc kubenswrapper[4874]: I0217 17:11:28.458284 4874 scope.go:117] "RemoveContainer" containerID="af26f937ef4337232056dc6a25d3e35399d36e90ff25e903f5e98c2da9cfdecc" Feb 17 17:11:28 crc kubenswrapper[4874]: E0217 17:11:28.460201 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:11:33 crc kubenswrapper[4874]: E0217 17:11:33.460196 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:11:38 crc kubenswrapper[4874]: E0217 17:11:38.459944 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:11:42 crc kubenswrapper[4874]: I0217 17:11:42.457749 4874 scope.go:117] "RemoveContainer" containerID="af26f937ef4337232056dc6a25d3e35399d36e90ff25e903f5e98c2da9cfdecc" Feb 17 17:11:42 crc kubenswrapper[4874]: E0217 17:11:42.459611 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:11:48 crc kubenswrapper[4874]: E0217 17:11:48.460031 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" 
Feb 17 17:11:52 crc kubenswrapper[4874]: E0217 17:11:52.460305 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:11:57 crc kubenswrapper[4874]: I0217 17:11:57.457919 4874 scope.go:117] "RemoveContainer" containerID="af26f937ef4337232056dc6a25d3e35399d36e90ff25e903f5e98c2da9cfdecc" Feb 17 17:11:57 crc kubenswrapper[4874]: E0217 17:11:57.458842 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:12:02 crc kubenswrapper[4874]: E0217 17:12:02.459150 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:12:05 crc kubenswrapper[4874]: E0217 17:12:05.461439 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:12:12 crc kubenswrapper[4874]: I0217 17:12:12.458630 4874 scope.go:117] "RemoveContainer" 
containerID="af26f937ef4337232056dc6a25d3e35399d36e90ff25e903f5e98c2da9cfdecc" Feb 17 17:12:13 crc kubenswrapper[4874]: I0217 17:12:13.128055 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerStarted","Data":"320e1274d23336488261677302e04e3092a84dcf44ec4b9fe376a0a58ba24575"} Feb 17 17:12:15 crc kubenswrapper[4874]: E0217 17:12:15.460256 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:12:16 crc kubenswrapper[4874]: E0217 17:12:16.462069 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:12:26 crc kubenswrapper[4874]: I0217 17:12:26.069100 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dlwml"] Feb 17 17:12:26 crc kubenswrapper[4874]: I0217 17:12:26.071848 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dlwml" Feb 17 17:12:26 crc kubenswrapper[4874]: I0217 17:12:26.097331 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dlwml"] Feb 17 17:12:26 crc kubenswrapper[4874]: I0217 17:12:26.204714 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8xtz\" (UniqueName: \"kubernetes.io/projected/f51b0964-7399-4167-843a-fc0e07a6eebb-kube-api-access-p8xtz\") pod \"redhat-operators-dlwml\" (UID: \"f51b0964-7399-4167-843a-fc0e07a6eebb\") " pod="openshift-marketplace/redhat-operators-dlwml" Feb 17 17:12:26 crc kubenswrapper[4874]: I0217 17:12:26.204895 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f51b0964-7399-4167-843a-fc0e07a6eebb-catalog-content\") pod \"redhat-operators-dlwml\" (UID: \"f51b0964-7399-4167-843a-fc0e07a6eebb\") " pod="openshift-marketplace/redhat-operators-dlwml" Feb 17 17:12:26 crc kubenswrapper[4874]: I0217 17:12:26.205094 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f51b0964-7399-4167-843a-fc0e07a6eebb-utilities\") pod \"redhat-operators-dlwml\" (UID: \"f51b0964-7399-4167-843a-fc0e07a6eebb\") " pod="openshift-marketplace/redhat-operators-dlwml" Feb 17 17:12:26 crc kubenswrapper[4874]: I0217 17:12:26.306840 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8xtz\" (UniqueName: \"kubernetes.io/projected/f51b0964-7399-4167-843a-fc0e07a6eebb-kube-api-access-p8xtz\") pod \"redhat-operators-dlwml\" (UID: \"f51b0964-7399-4167-843a-fc0e07a6eebb\") " pod="openshift-marketplace/redhat-operators-dlwml" Feb 17 17:12:26 crc kubenswrapper[4874]: I0217 17:12:26.306936 4874 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f51b0964-7399-4167-843a-fc0e07a6eebb-catalog-content\") pod \"redhat-operators-dlwml\" (UID: \"f51b0964-7399-4167-843a-fc0e07a6eebb\") " pod="openshift-marketplace/redhat-operators-dlwml" Feb 17 17:12:26 crc kubenswrapper[4874]: I0217 17:12:26.307021 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f51b0964-7399-4167-843a-fc0e07a6eebb-utilities\") pod \"redhat-operators-dlwml\" (UID: \"f51b0964-7399-4167-843a-fc0e07a6eebb\") " pod="openshift-marketplace/redhat-operators-dlwml" Feb 17 17:12:26 crc kubenswrapper[4874]: I0217 17:12:26.307603 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f51b0964-7399-4167-843a-fc0e07a6eebb-utilities\") pod \"redhat-operators-dlwml\" (UID: \"f51b0964-7399-4167-843a-fc0e07a6eebb\") " pod="openshift-marketplace/redhat-operators-dlwml" Feb 17 17:12:26 crc kubenswrapper[4874]: I0217 17:12:26.307825 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f51b0964-7399-4167-843a-fc0e07a6eebb-catalog-content\") pod \"redhat-operators-dlwml\" (UID: \"f51b0964-7399-4167-843a-fc0e07a6eebb\") " pod="openshift-marketplace/redhat-operators-dlwml" Feb 17 17:12:26 crc kubenswrapper[4874]: I0217 17:12:26.438357 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8xtz\" (UniqueName: \"kubernetes.io/projected/f51b0964-7399-4167-843a-fc0e07a6eebb-kube-api-access-p8xtz\") pod \"redhat-operators-dlwml\" (UID: \"f51b0964-7399-4167-843a-fc0e07a6eebb\") " pod="openshift-marketplace/redhat-operators-dlwml" Feb 17 17:12:26 crc kubenswrapper[4874]: I0217 17:12:26.460112 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dlwml" Feb 17 17:12:27 crc kubenswrapper[4874]: I0217 17:12:27.021911 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dlwml"] Feb 17 17:12:27 crc kubenswrapper[4874]: I0217 17:12:27.300028 4874 generic.go:334] "Generic (PLEG): container finished" podID="f51b0964-7399-4167-843a-fc0e07a6eebb" containerID="96383fffee780278f2b5f4b00b554ae35a8efcf7f45ee4dc64b9e9d1bbba0e21" exitCode=0 Feb 17 17:12:27 crc kubenswrapper[4874]: I0217 17:12:27.300134 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dlwml" event={"ID":"f51b0964-7399-4167-843a-fc0e07a6eebb","Type":"ContainerDied","Data":"96383fffee780278f2b5f4b00b554ae35a8efcf7f45ee4dc64b9e9d1bbba0e21"} Feb 17 17:12:27 crc kubenswrapper[4874]: I0217 17:12:27.300335 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dlwml" event={"ID":"f51b0964-7399-4167-843a-fc0e07a6eebb","Type":"ContainerStarted","Data":"fd9f28c1f37c5d6e9546518ed4f8ad5bfaffa029d9bde41f7a412f26fe8b1603"} Feb 17 17:12:27 crc kubenswrapper[4874]: E0217 17:12:27.458711 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:12:29 crc kubenswrapper[4874]: I0217 17:12:29.327112 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dlwml" event={"ID":"f51b0964-7399-4167-843a-fc0e07a6eebb","Type":"ContainerStarted","Data":"dd815a6a82de7480ccb98695838582076ea42f312b03e5355047675f7138e169"} Feb 17 17:12:29 crc kubenswrapper[4874]: E0217 17:12:29.459364 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:12:34 crc kubenswrapper[4874]: I0217 17:12:34.387212 4874 generic.go:334] "Generic (PLEG): container finished" podID="f51b0964-7399-4167-843a-fc0e07a6eebb" containerID="dd815a6a82de7480ccb98695838582076ea42f312b03e5355047675f7138e169" exitCode=0 Feb 17 17:12:34 crc kubenswrapper[4874]: I0217 17:12:34.387310 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dlwml" event={"ID":"f51b0964-7399-4167-843a-fc0e07a6eebb","Type":"ContainerDied","Data":"dd815a6a82de7480ccb98695838582076ea42f312b03e5355047675f7138e169"} Feb 17 17:12:35 crc kubenswrapper[4874]: I0217 17:12:35.400036 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dlwml" event={"ID":"f51b0964-7399-4167-843a-fc0e07a6eebb","Type":"ContainerStarted","Data":"03e00f5119acadef6b9b8e90698156c5ff054d6961ab664904ba1f919052cca7"} Feb 17 17:12:36 crc kubenswrapper[4874]: I0217 17:12:36.490208 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dlwml" Feb 17 17:12:36 crc kubenswrapper[4874]: I0217 17:12:36.490553 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dlwml" Feb 17 17:12:37 crc kubenswrapper[4874]: I0217 17:12:37.558067 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dlwml" podUID="f51b0964-7399-4167-843a-fc0e07a6eebb" containerName="registry-server" probeResult="failure" output=< Feb 17 17:12:37 crc kubenswrapper[4874]: timeout: failed to connect service ":50051" within 1s Feb 17 17:12:37 crc kubenswrapper[4874]: > Feb 17 17:12:41 crc kubenswrapper[4874]: 
E0217 17:12:41.460493 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:12:41 crc kubenswrapper[4874]: E0217 17:12:41.460506 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:12:46 crc kubenswrapper[4874]: I0217 17:12:46.510948 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dlwml" Feb 17 17:12:46 crc kubenswrapper[4874]: I0217 17:12:46.542778 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dlwml" podStartSLOduration=13.039085807 podStartE2EDuration="20.542759839s" podCreationTimestamp="2026-02-17 17:12:26 +0000 UTC" firstStartedPulling="2026-02-17 17:12:27.304005193 +0000 UTC m=+4157.598393754" lastFinishedPulling="2026-02-17 17:12:34.807679225 +0000 UTC m=+4165.102067786" observedRunningTime="2026-02-17 17:12:35.425871989 +0000 UTC m=+4165.720260540" watchObservedRunningTime="2026-02-17 17:12:46.542759839 +0000 UTC m=+4176.837148400" Feb 17 17:12:46 crc kubenswrapper[4874]: I0217 17:12:46.575472 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dlwml" Feb 17 17:12:46 crc kubenswrapper[4874]: I0217 17:12:46.760162 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dlwml"] Feb 17 17:12:48 crc kubenswrapper[4874]: I0217 
17:12:48.541577 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dlwml" podUID="f51b0964-7399-4167-843a-fc0e07a6eebb" containerName="registry-server" containerID="cri-o://03e00f5119acadef6b9b8e90698156c5ff054d6961ab664904ba1f919052cca7" gracePeriod=2 Feb 17 17:12:49 crc kubenswrapper[4874]: I0217 17:12:49.014911 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dlwml" Feb 17 17:12:49 crc kubenswrapper[4874]: I0217 17:12:49.191086 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f51b0964-7399-4167-843a-fc0e07a6eebb-catalog-content\") pod \"f51b0964-7399-4167-843a-fc0e07a6eebb\" (UID: \"f51b0964-7399-4167-843a-fc0e07a6eebb\") " Feb 17 17:12:49 crc kubenswrapper[4874]: I0217 17:12:49.191155 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f51b0964-7399-4167-843a-fc0e07a6eebb-utilities\") pod \"f51b0964-7399-4167-843a-fc0e07a6eebb\" (UID: \"f51b0964-7399-4167-843a-fc0e07a6eebb\") " Feb 17 17:12:49 crc kubenswrapper[4874]: I0217 17:12:49.191401 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8xtz\" (UniqueName: \"kubernetes.io/projected/f51b0964-7399-4167-843a-fc0e07a6eebb-kube-api-access-p8xtz\") pod \"f51b0964-7399-4167-843a-fc0e07a6eebb\" (UID: \"f51b0964-7399-4167-843a-fc0e07a6eebb\") " Feb 17 17:12:49 crc kubenswrapper[4874]: I0217 17:12:49.192290 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f51b0964-7399-4167-843a-fc0e07a6eebb-utilities" (OuterVolumeSpecName: "utilities") pod "f51b0964-7399-4167-843a-fc0e07a6eebb" (UID: "f51b0964-7399-4167-843a-fc0e07a6eebb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:12:49 crc kubenswrapper[4874]: I0217 17:12:49.199226 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f51b0964-7399-4167-843a-fc0e07a6eebb-kube-api-access-p8xtz" (OuterVolumeSpecName: "kube-api-access-p8xtz") pod "f51b0964-7399-4167-843a-fc0e07a6eebb" (UID: "f51b0964-7399-4167-843a-fc0e07a6eebb"). InnerVolumeSpecName "kube-api-access-p8xtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:12:49 crc kubenswrapper[4874]: I0217 17:12:49.295188 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8xtz\" (UniqueName: \"kubernetes.io/projected/f51b0964-7399-4167-843a-fc0e07a6eebb-kube-api-access-p8xtz\") on node \"crc\" DevicePath \"\"" Feb 17 17:12:49 crc kubenswrapper[4874]: I0217 17:12:49.295229 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f51b0964-7399-4167-843a-fc0e07a6eebb-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:12:49 crc kubenswrapper[4874]: I0217 17:12:49.311443 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f51b0964-7399-4167-843a-fc0e07a6eebb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f51b0964-7399-4167-843a-fc0e07a6eebb" (UID: "f51b0964-7399-4167-843a-fc0e07a6eebb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:12:49 crc kubenswrapper[4874]: I0217 17:12:49.399207 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f51b0964-7399-4167-843a-fc0e07a6eebb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:12:49 crc kubenswrapper[4874]: I0217 17:12:49.553563 4874 generic.go:334] "Generic (PLEG): container finished" podID="f51b0964-7399-4167-843a-fc0e07a6eebb" containerID="03e00f5119acadef6b9b8e90698156c5ff054d6961ab664904ba1f919052cca7" exitCode=0 Feb 17 17:12:49 crc kubenswrapper[4874]: I0217 17:12:49.553614 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dlwml" event={"ID":"f51b0964-7399-4167-843a-fc0e07a6eebb","Type":"ContainerDied","Data":"03e00f5119acadef6b9b8e90698156c5ff054d6961ab664904ba1f919052cca7"} Feb 17 17:12:49 crc kubenswrapper[4874]: I0217 17:12:49.553643 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dlwml" event={"ID":"f51b0964-7399-4167-843a-fc0e07a6eebb","Type":"ContainerDied","Data":"fd9f28c1f37c5d6e9546518ed4f8ad5bfaffa029d9bde41f7a412f26fe8b1603"} Feb 17 17:12:49 crc kubenswrapper[4874]: I0217 17:12:49.553660 4874 scope.go:117] "RemoveContainer" containerID="03e00f5119acadef6b9b8e90698156c5ff054d6961ab664904ba1f919052cca7" Feb 17 17:12:49 crc kubenswrapper[4874]: I0217 17:12:49.553792 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dlwml" Feb 17 17:12:49 crc kubenswrapper[4874]: I0217 17:12:49.583548 4874 scope.go:117] "RemoveContainer" containerID="dd815a6a82de7480ccb98695838582076ea42f312b03e5355047675f7138e169" Feb 17 17:12:49 crc kubenswrapper[4874]: I0217 17:12:49.596677 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dlwml"] Feb 17 17:12:49 crc kubenswrapper[4874]: I0217 17:12:49.606536 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dlwml"] Feb 17 17:12:49 crc kubenswrapper[4874]: I0217 17:12:49.621146 4874 scope.go:117] "RemoveContainer" containerID="96383fffee780278f2b5f4b00b554ae35a8efcf7f45ee4dc64b9e9d1bbba0e21" Feb 17 17:12:49 crc kubenswrapper[4874]: I0217 17:12:49.662796 4874 scope.go:117] "RemoveContainer" containerID="03e00f5119acadef6b9b8e90698156c5ff054d6961ab664904ba1f919052cca7" Feb 17 17:12:49 crc kubenswrapper[4874]: E0217 17:12:49.663391 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03e00f5119acadef6b9b8e90698156c5ff054d6961ab664904ba1f919052cca7\": container with ID starting with 03e00f5119acadef6b9b8e90698156c5ff054d6961ab664904ba1f919052cca7 not found: ID does not exist" containerID="03e00f5119acadef6b9b8e90698156c5ff054d6961ab664904ba1f919052cca7" Feb 17 17:12:49 crc kubenswrapper[4874]: I0217 17:12:49.663427 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03e00f5119acadef6b9b8e90698156c5ff054d6961ab664904ba1f919052cca7"} err="failed to get container status \"03e00f5119acadef6b9b8e90698156c5ff054d6961ab664904ba1f919052cca7\": rpc error: code = NotFound desc = could not find container \"03e00f5119acadef6b9b8e90698156c5ff054d6961ab664904ba1f919052cca7\": container with ID starting with 03e00f5119acadef6b9b8e90698156c5ff054d6961ab664904ba1f919052cca7 not found: ID does 
not exist" Feb 17 17:12:49 crc kubenswrapper[4874]: I0217 17:12:49.663456 4874 scope.go:117] "RemoveContainer" containerID="dd815a6a82de7480ccb98695838582076ea42f312b03e5355047675f7138e169" Feb 17 17:12:49 crc kubenswrapper[4874]: E0217 17:12:49.663821 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd815a6a82de7480ccb98695838582076ea42f312b03e5355047675f7138e169\": container with ID starting with dd815a6a82de7480ccb98695838582076ea42f312b03e5355047675f7138e169 not found: ID does not exist" containerID="dd815a6a82de7480ccb98695838582076ea42f312b03e5355047675f7138e169" Feb 17 17:12:49 crc kubenswrapper[4874]: I0217 17:12:49.663850 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd815a6a82de7480ccb98695838582076ea42f312b03e5355047675f7138e169"} err="failed to get container status \"dd815a6a82de7480ccb98695838582076ea42f312b03e5355047675f7138e169\": rpc error: code = NotFound desc = could not find container \"dd815a6a82de7480ccb98695838582076ea42f312b03e5355047675f7138e169\": container with ID starting with dd815a6a82de7480ccb98695838582076ea42f312b03e5355047675f7138e169 not found: ID does not exist" Feb 17 17:12:49 crc kubenswrapper[4874]: I0217 17:12:49.663866 4874 scope.go:117] "RemoveContainer" containerID="96383fffee780278f2b5f4b00b554ae35a8efcf7f45ee4dc64b9e9d1bbba0e21" Feb 17 17:12:49 crc kubenswrapper[4874]: E0217 17:12:49.664327 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96383fffee780278f2b5f4b00b554ae35a8efcf7f45ee4dc64b9e9d1bbba0e21\": container with ID starting with 96383fffee780278f2b5f4b00b554ae35a8efcf7f45ee4dc64b9e9d1bbba0e21 not found: ID does not exist" containerID="96383fffee780278f2b5f4b00b554ae35a8efcf7f45ee4dc64b9e9d1bbba0e21" Feb 17 17:12:49 crc kubenswrapper[4874]: I0217 17:12:49.664351 4874 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96383fffee780278f2b5f4b00b554ae35a8efcf7f45ee4dc64b9e9d1bbba0e21"} err="failed to get container status \"96383fffee780278f2b5f4b00b554ae35a8efcf7f45ee4dc64b9e9d1bbba0e21\": rpc error: code = NotFound desc = could not find container \"96383fffee780278f2b5f4b00b554ae35a8efcf7f45ee4dc64b9e9d1bbba0e21\": container with ID starting with 96383fffee780278f2b5f4b00b554ae35a8efcf7f45ee4dc64b9e9d1bbba0e21 not found: ID does not exist" Feb 17 17:12:50 crc kubenswrapper[4874]: I0217 17:12:50.474852 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f51b0964-7399-4167-843a-fc0e07a6eebb" path="/var/lib/kubelet/pods/f51b0964-7399-4167-843a-fc0e07a6eebb/volumes" Feb 17 17:12:53 crc kubenswrapper[4874]: E0217 17:12:53.467556 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:12:54 crc kubenswrapper[4874]: E0217 17:12:54.459409 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:13:07 crc kubenswrapper[4874]: E0217 17:13:07.459510 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:13:08 crc 
kubenswrapper[4874]: E0217 17:13:08.461057 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:13:19 crc kubenswrapper[4874]: E0217 17:13:19.459365 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:13:19 crc kubenswrapper[4874]: E0217 17:13:19.459501 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:13:33 crc kubenswrapper[4874]: E0217 17:13:33.460300 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:13:33 crc kubenswrapper[4874]: E0217 17:13:33.460411 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 
17:13:46 crc kubenswrapper[4874]: E0217 17:13:46.460208 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:13:47 crc kubenswrapper[4874]: E0217 17:13:47.459400 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:14:00 crc kubenswrapper[4874]: E0217 17:14:00.470350 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:14:01 crc kubenswrapper[4874]: E0217 17:14:01.458842 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:14:12 crc kubenswrapper[4874]: E0217 17:14:12.461253 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:14:16 crc kubenswrapper[4874]: E0217 17:14:16.463263 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:14:26 crc kubenswrapper[4874]: E0217 17:14:26.459714 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:14:27 crc kubenswrapper[4874]: I0217 17:14:27.724772 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:14:27 crc kubenswrapper[4874]: I0217 17:14:27.725123 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:14:28 crc kubenswrapper[4874]: E0217 17:14:28.458693 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 
17 17:14:40 crc kubenswrapper[4874]: E0217 17:14:40.471749 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:14:43 crc kubenswrapper[4874]: E0217 17:14:43.460331 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:14:47 crc kubenswrapper[4874]: I0217 17:14:47.757758 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wns88"] Feb 17 17:14:47 crc kubenswrapper[4874]: E0217 17:14:47.758987 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f51b0964-7399-4167-843a-fc0e07a6eebb" containerName="extract-utilities" Feb 17 17:14:47 crc kubenswrapper[4874]: I0217 17:14:47.759005 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="f51b0964-7399-4167-843a-fc0e07a6eebb" containerName="extract-utilities" Feb 17 17:14:47 crc kubenswrapper[4874]: E0217 17:14:47.759084 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f51b0964-7399-4167-843a-fc0e07a6eebb" containerName="registry-server" Feb 17 17:14:47 crc kubenswrapper[4874]: I0217 17:14:47.759094 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="f51b0964-7399-4167-843a-fc0e07a6eebb" containerName="registry-server" Feb 17 17:14:47 crc kubenswrapper[4874]: E0217 17:14:47.759132 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f51b0964-7399-4167-843a-fc0e07a6eebb" containerName="extract-content" Feb 17 17:14:47 crc 
kubenswrapper[4874]: I0217 17:14:47.759141 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="f51b0964-7399-4167-843a-fc0e07a6eebb" containerName="extract-content" Feb 17 17:14:47 crc kubenswrapper[4874]: I0217 17:14:47.759505 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="f51b0964-7399-4167-843a-fc0e07a6eebb" containerName="registry-server" Feb 17 17:14:47 crc kubenswrapper[4874]: I0217 17:14:47.761751 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wns88" Feb 17 17:14:47 crc kubenswrapper[4874]: I0217 17:14:47.774757 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wns88"] Feb 17 17:14:47 crc kubenswrapper[4874]: I0217 17:14:47.797623 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5fdz\" (UniqueName: \"kubernetes.io/projected/69005551-84e0-4922-aa8f-6dc530cc4160-kube-api-access-r5fdz\") pod \"community-operators-wns88\" (UID: \"69005551-84e0-4922-aa8f-6dc530cc4160\") " pod="openshift-marketplace/community-operators-wns88" Feb 17 17:14:47 crc kubenswrapper[4874]: I0217 17:14:47.797757 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69005551-84e0-4922-aa8f-6dc530cc4160-catalog-content\") pod \"community-operators-wns88\" (UID: \"69005551-84e0-4922-aa8f-6dc530cc4160\") " pod="openshift-marketplace/community-operators-wns88" Feb 17 17:14:47 crc kubenswrapper[4874]: I0217 17:14:47.797788 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69005551-84e0-4922-aa8f-6dc530cc4160-utilities\") pod \"community-operators-wns88\" (UID: \"69005551-84e0-4922-aa8f-6dc530cc4160\") " pod="openshift-marketplace/community-operators-wns88" Feb 17 
17:14:47 crc kubenswrapper[4874]: I0217 17:14:47.900201 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5fdz\" (UniqueName: \"kubernetes.io/projected/69005551-84e0-4922-aa8f-6dc530cc4160-kube-api-access-r5fdz\") pod \"community-operators-wns88\" (UID: \"69005551-84e0-4922-aa8f-6dc530cc4160\") " pod="openshift-marketplace/community-operators-wns88" Feb 17 17:14:47 crc kubenswrapper[4874]: I0217 17:14:47.900560 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69005551-84e0-4922-aa8f-6dc530cc4160-catalog-content\") pod \"community-operators-wns88\" (UID: \"69005551-84e0-4922-aa8f-6dc530cc4160\") " pod="openshift-marketplace/community-operators-wns88" Feb 17 17:14:47 crc kubenswrapper[4874]: I0217 17:14:47.900675 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69005551-84e0-4922-aa8f-6dc530cc4160-utilities\") pod \"community-operators-wns88\" (UID: \"69005551-84e0-4922-aa8f-6dc530cc4160\") " pod="openshift-marketplace/community-operators-wns88" Feb 17 17:14:47 crc kubenswrapper[4874]: I0217 17:14:47.901018 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69005551-84e0-4922-aa8f-6dc530cc4160-catalog-content\") pod \"community-operators-wns88\" (UID: \"69005551-84e0-4922-aa8f-6dc530cc4160\") " pod="openshift-marketplace/community-operators-wns88" Feb 17 17:14:47 crc kubenswrapper[4874]: I0217 17:14:47.901328 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69005551-84e0-4922-aa8f-6dc530cc4160-utilities\") pod \"community-operators-wns88\" (UID: \"69005551-84e0-4922-aa8f-6dc530cc4160\") " pod="openshift-marketplace/community-operators-wns88" Feb 17 17:14:47 crc kubenswrapper[4874]: I0217 
17:14:47.929672 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5fdz\" (UniqueName: \"kubernetes.io/projected/69005551-84e0-4922-aa8f-6dc530cc4160-kube-api-access-r5fdz\") pod \"community-operators-wns88\" (UID: \"69005551-84e0-4922-aa8f-6dc530cc4160\") " pod="openshift-marketplace/community-operators-wns88" Feb 17 17:14:48 crc kubenswrapper[4874]: I0217 17:14:48.087260 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wns88" Feb 17 17:14:48 crc kubenswrapper[4874]: I0217 17:14:48.705676 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wns88"] Feb 17 17:14:48 crc kubenswrapper[4874]: I0217 17:14:48.855381 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wns88" event={"ID":"69005551-84e0-4922-aa8f-6dc530cc4160","Type":"ContainerStarted","Data":"aaac66b440d694c7a49a5421e839cbc0be4cc7cfa2f465fab3faefc4da95aaa1"} Feb 17 17:14:49 crc kubenswrapper[4874]: I0217 17:14:49.869630 4874 generic.go:334] "Generic (PLEG): container finished" podID="69005551-84e0-4922-aa8f-6dc530cc4160" containerID="2a0b19256abc7429ed9d710e17476271276ea75a645779b8429c1e166218e782" exitCode=0 Feb 17 17:14:49 crc kubenswrapper[4874]: I0217 17:14:49.869905 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wns88" event={"ID":"69005551-84e0-4922-aa8f-6dc530cc4160","Type":"ContainerDied","Data":"2a0b19256abc7429ed9d710e17476271276ea75a645779b8429c1e166218e782"} Feb 17 17:14:49 crc kubenswrapper[4874]: I0217 17:14:49.872613 4874 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 17:14:50 crc kubenswrapper[4874]: I0217 17:14:50.881584 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wns88" 
event={"ID":"69005551-84e0-4922-aa8f-6dc530cc4160","Type":"ContainerStarted","Data":"5a9d384e957a1b86cd7051427f853552725d62ef4fb9db1e29bf8957ed072ee6"} Feb 17 17:14:51 crc kubenswrapper[4874]: I0217 17:14:51.897670 4874 generic.go:334] "Generic (PLEG): container finished" podID="69005551-84e0-4922-aa8f-6dc530cc4160" containerID="5a9d384e957a1b86cd7051427f853552725d62ef4fb9db1e29bf8957ed072ee6" exitCode=0 Feb 17 17:14:51 crc kubenswrapper[4874]: I0217 17:14:51.897772 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wns88" event={"ID":"69005551-84e0-4922-aa8f-6dc530cc4160","Type":"ContainerDied","Data":"5a9d384e957a1b86cd7051427f853552725d62ef4fb9db1e29bf8957ed072ee6"} Feb 17 17:14:52 crc kubenswrapper[4874]: I0217 17:14:52.911284 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wns88" event={"ID":"69005551-84e0-4922-aa8f-6dc530cc4160","Type":"ContainerStarted","Data":"cb09d1ee45509c0f5592bf52ee7e559e1658ee0ed66f9450a48a4a83c72b7763"} Feb 17 17:14:52 crc kubenswrapper[4874]: I0217 17:14:52.941531 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wns88" podStartSLOduration=3.512833926 podStartE2EDuration="5.941509471s" podCreationTimestamp="2026-02-17 17:14:47 +0000 UTC" firstStartedPulling="2026-02-17 17:14:49.872359351 +0000 UTC m=+4300.166747912" lastFinishedPulling="2026-02-17 17:14:52.301034896 +0000 UTC m=+4302.595423457" observedRunningTime="2026-02-17 17:14:52.931337719 +0000 UTC m=+4303.225726280" watchObservedRunningTime="2026-02-17 17:14:52.941509471 +0000 UTC m=+4303.235898032" Feb 17 17:14:54 crc kubenswrapper[4874]: E0217 17:14:54.595006 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in 
quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:14:54 crc kubenswrapper[4874]: E0217 17:14:54.595368 4874 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:14:54 crc kubenswrapper[4874]: E0217 17:14:54.595504 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n699h646hf7h59dhdch5d8h679h9dhdch5c7hd5h5bch655h5bfh674h596h5d6h64dh65bh694h67fh66dh5bdhd7h568h697h58bh5b4h59fh694h584h656q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6zkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(cc29c300-b515-47d8-9326-1839ed7772b4): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:14:54 crc kubenswrapper[4874]: E0217 17:14:54.596690 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:14:55 crc kubenswrapper[4874]: E0217 17:14:55.458641 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:14:57 crc kubenswrapper[4874]: I0217 17:14:57.725026 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:14:57 crc kubenswrapper[4874]: I0217 17:14:57.725436 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:14:58 crc kubenswrapper[4874]: I0217 17:14:58.088422 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wns88" Feb 17 17:14:58 crc kubenswrapper[4874]: I0217 17:14:58.088488 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wns88" Feb 17 17:14:58 crc kubenswrapper[4874]: I0217 17:14:58.143011 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wns88" Feb 17 17:14:59 crc kubenswrapper[4874]: I0217 17:14:59.012652 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-wns88" Feb 17 17:14:59 crc kubenswrapper[4874]: I0217 17:14:59.069236 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wns88"] Feb 17 17:15:00 crc kubenswrapper[4874]: I0217 17:15:00.168057 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522475-t5dpn"] Feb 17 17:15:00 crc kubenswrapper[4874]: I0217 17:15:00.171214 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-t5dpn" Feb 17 17:15:00 crc kubenswrapper[4874]: I0217 17:15:00.174013 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 17:15:00 crc kubenswrapper[4874]: I0217 17:15:00.174300 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 17:15:00 crc kubenswrapper[4874]: I0217 17:15:00.191903 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522475-t5dpn"] Feb 17 17:15:00 crc kubenswrapper[4874]: I0217 17:15:00.221236 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gzhj\" (UniqueName: \"kubernetes.io/projected/d7a900a3-712b-4cdf-b224-7a605ce0053b-kube-api-access-5gzhj\") pod \"collect-profiles-29522475-t5dpn\" (UID: \"d7a900a3-712b-4cdf-b224-7a605ce0053b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-t5dpn" Feb 17 17:15:00 crc kubenswrapper[4874]: I0217 17:15:00.221496 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7a900a3-712b-4cdf-b224-7a605ce0053b-config-volume\") pod \"collect-profiles-29522475-t5dpn\" 
(UID: \"d7a900a3-712b-4cdf-b224-7a605ce0053b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-t5dpn" Feb 17 17:15:00 crc kubenswrapper[4874]: I0217 17:15:00.222400 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7a900a3-712b-4cdf-b224-7a605ce0053b-secret-volume\") pod \"collect-profiles-29522475-t5dpn\" (UID: \"d7a900a3-712b-4cdf-b224-7a605ce0053b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-t5dpn" Feb 17 17:15:00 crc kubenswrapper[4874]: I0217 17:15:00.325890 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7a900a3-712b-4cdf-b224-7a605ce0053b-secret-volume\") pod \"collect-profiles-29522475-t5dpn\" (UID: \"d7a900a3-712b-4cdf-b224-7a605ce0053b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-t5dpn" Feb 17 17:15:00 crc kubenswrapper[4874]: I0217 17:15:00.326361 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gzhj\" (UniqueName: \"kubernetes.io/projected/d7a900a3-712b-4cdf-b224-7a605ce0053b-kube-api-access-5gzhj\") pod \"collect-profiles-29522475-t5dpn\" (UID: \"d7a900a3-712b-4cdf-b224-7a605ce0053b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-t5dpn" Feb 17 17:15:00 crc kubenswrapper[4874]: I0217 17:15:00.326588 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7a900a3-712b-4cdf-b224-7a605ce0053b-config-volume\") pod \"collect-profiles-29522475-t5dpn\" (UID: \"d7a900a3-712b-4cdf-b224-7a605ce0053b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-t5dpn" Feb 17 17:15:00 crc kubenswrapper[4874]: I0217 17:15:00.327740 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/d7a900a3-712b-4cdf-b224-7a605ce0053b-config-volume\") pod \"collect-profiles-29522475-t5dpn\" (UID: \"d7a900a3-712b-4cdf-b224-7a605ce0053b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-t5dpn" Feb 17 17:15:00 crc kubenswrapper[4874]: I0217 17:15:00.331526 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7a900a3-712b-4cdf-b224-7a605ce0053b-secret-volume\") pod \"collect-profiles-29522475-t5dpn\" (UID: \"d7a900a3-712b-4cdf-b224-7a605ce0053b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-t5dpn" Feb 17 17:15:00 crc kubenswrapper[4874]: I0217 17:15:00.344470 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gzhj\" (UniqueName: \"kubernetes.io/projected/d7a900a3-712b-4cdf-b224-7a605ce0053b-kube-api-access-5gzhj\") pod \"collect-profiles-29522475-t5dpn\" (UID: \"d7a900a3-712b-4cdf-b224-7a605ce0053b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-t5dpn" Feb 17 17:15:00 crc kubenswrapper[4874]: I0217 17:15:00.495682 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-t5dpn" Feb 17 17:15:00 crc kubenswrapper[4874]: I0217 17:15:00.983191 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wns88" podUID="69005551-84e0-4922-aa8f-6dc530cc4160" containerName="registry-server" containerID="cri-o://cb09d1ee45509c0f5592bf52ee7e559e1658ee0ed66f9450a48a4a83c72b7763" gracePeriod=2 Feb 17 17:15:01 crc kubenswrapper[4874]: I0217 17:15:01.058559 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522475-t5dpn"] Feb 17 17:15:01 crc kubenswrapper[4874]: I0217 17:15:01.468394 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wns88" Feb 17 17:15:01 crc kubenswrapper[4874]: I0217 17:15:01.557960 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69005551-84e0-4922-aa8f-6dc530cc4160-catalog-content\") pod \"69005551-84e0-4922-aa8f-6dc530cc4160\" (UID: \"69005551-84e0-4922-aa8f-6dc530cc4160\") " Feb 17 17:15:01 crc kubenswrapper[4874]: I0217 17:15:01.558045 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69005551-84e0-4922-aa8f-6dc530cc4160-utilities\") pod \"69005551-84e0-4922-aa8f-6dc530cc4160\" (UID: \"69005551-84e0-4922-aa8f-6dc530cc4160\") " Feb 17 17:15:01 crc kubenswrapper[4874]: I0217 17:15:01.559034 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69005551-84e0-4922-aa8f-6dc530cc4160-utilities" (OuterVolumeSpecName: "utilities") pod "69005551-84e0-4922-aa8f-6dc530cc4160" (UID: "69005551-84e0-4922-aa8f-6dc530cc4160"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:15:01 crc kubenswrapper[4874]: I0217 17:15:01.559262 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5fdz\" (UniqueName: \"kubernetes.io/projected/69005551-84e0-4922-aa8f-6dc530cc4160-kube-api-access-r5fdz\") pod \"69005551-84e0-4922-aa8f-6dc530cc4160\" (UID: \"69005551-84e0-4922-aa8f-6dc530cc4160\") " Feb 17 17:15:01 crc kubenswrapper[4874]: I0217 17:15:01.560832 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69005551-84e0-4922-aa8f-6dc530cc4160-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:15:01 crc kubenswrapper[4874]: I0217 17:15:01.565567 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69005551-84e0-4922-aa8f-6dc530cc4160-kube-api-access-r5fdz" (OuterVolumeSpecName: "kube-api-access-r5fdz") pod "69005551-84e0-4922-aa8f-6dc530cc4160" (UID: "69005551-84e0-4922-aa8f-6dc530cc4160"). InnerVolumeSpecName "kube-api-access-r5fdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:15:01 crc kubenswrapper[4874]: I0217 17:15:01.628808 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69005551-84e0-4922-aa8f-6dc530cc4160-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69005551-84e0-4922-aa8f-6dc530cc4160" (UID: "69005551-84e0-4922-aa8f-6dc530cc4160"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:15:01 crc kubenswrapper[4874]: I0217 17:15:01.662802 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5fdz\" (UniqueName: \"kubernetes.io/projected/69005551-84e0-4922-aa8f-6dc530cc4160-kube-api-access-r5fdz\") on node \"crc\" DevicePath \"\"" Feb 17 17:15:01 crc kubenswrapper[4874]: I0217 17:15:01.662840 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69005551-84e0-4922-aa8f-6dc530cc4160-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:15:01 crc kubenswrapper[4874]: I0217 17:15:01.994767 4874 generic.go:334] "Generic (PLEG): container finished" podID="d7a900a3-712b-4cdf-b224-7a605ce0053b" containerID="0d3ff20e384a52f6ad6f810d0b4acfb316b253eee2865998707bd1292146acee" exitCode=0 Feb 17 17:15:01 crc kubenswrapper[4874]: I0217 17:15:01.994918 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-t5dpn" event={"ID":"d7a900a3-712b-4cdf-b224-7a605ce0053b","Type":"ContainerDied","Data":"0d3ff20e384a52f6ad6f810d0b4acfb316b253eee2865998707bd1292146acee"} Feb 17 17:15:01 crc kubenswrapper[4874]: I0217 17:15:01.995132 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-t5dpn" event={"ID":"d7a900a3-712b-4cdf-b224-7a605ce0053b","Type":"ContainerStarted","Data":"1acd30037a3a00e142a678a99f92892c7f79fd28c34df4336b8f94db1ea85bcf"} Feb 17 17:15:01 crc kubenswrapper[4874]: I0217 17:15:01.998232 4874 generic.go:334] "Generic (PLEG): container finished" podID="69005551-84e0-4922-aa8f-6dc530cc4160" containerID="cb09d1ee45509c0f5592bf52ee7e559e1658ee0ed66f9450a48a4a83c72b7763" exitCode=0 Feb 17 17:15:01 crc kubenswrapper[4874]: I0217 17:15:01.998301 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wns88" Feb 17 17:15:01 crc kubenswrapper[4874]: I0217 17:15:01.998309 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wns88" event={"ID":"69005551-84e0-4922-aa8f-6dc530cc4160","Type":"ContainerDied","Data":"cb09d1ee45509c0f5592bf52ee7e559e1658ee0ed66f9450a48a4a83c72b7763"} Feb 17 17:15:01 crc kubenswrapper[4874]: I0217 17:15:01.998373 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wns88" event={"ID":"69005551-84e0-4922-aa8f-6dc530cc4160","Type":"ContainerDied","Data":"aaac66b440d694c7a49a5421e839cbc0be4cc7cfa2f465fab3faefc4da95aaa1"} Feb 17 17:15:01 crc kubenswrapper[4874]: I0217 17:15:01.998397 4874 scope.go:117] "RemoveContainer" containerID="cb09d1ee45509c0f5592bf52ee7e559e1658ee0ed66f9450a48a4a83c72b7763" Feb 17 17:15:02 crc kubenswrapper[4874]: I0217 17:15:02.022183 4874 scope.go:117] "RemoveContainer" containerID="5a9d384e957a1b86cd7051427f853552725d62ef4fb9db1e29bf8957ed072ee6" Feb 17 17:15:02 crc kubenswrapper[4874]: I0217 17:15:02.039594 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wns88"] Feb 17 17:15:02 crc kubenswrapper[4874]: I0217 17:15:02.051900 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wns88"] Feb 17 17:15:02 crc kubenswrapper[4874]: I0217 17:15:02.074237 4874 scope.go:117] "RemoveContainer" containerID="2a0b19256abc7429ed9d710e17476271276ea75a645779b8429c1e166218e782" Feb 17 17:15:02 crc kubenswrapper[4874]: I0217 17:15:02.112722 4874 scope.go:117] "RemoveContainer" containerID="cb09d1ee45509c0f5592bf52ee7e559e1658ee0ed66f9450a48a4a83c72b7763" Feb 17 17:15:02 crc kubenswrapper[4874]: E0217 17:15:02.113103 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"cb09d1ee45509c0f5592bf52ee7e559e1658ee0ed66f9450a48a4a83c72b7763\": container with ID starting with cb09d1ee45509c0f5592bf52ee7e559e1658ee0ed66f9450a48a4a83c72b7763 not found: ID does not exist" containerID="cb09d1ee45509c0f5592bf52ee7e559e1658ee0ed66f9450a48a4a83c72b7763" Feb 17 17:15:02 crc kubenswrapper[4874]: I0217 17:15:02.113211 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb09d1ee45509c0f5592bf52ee7e559e1658ee0ed66f9450a48a4a83c72b7763"} err="failed to get container status \"cb09d1ee45509c0f5592bf52ee7e559e1658ee0ed66f9450a48a4a83c72b7763\": rpc error: code = NotFound desc = could not find container \"cb09d1ee45509c0f5592bf52ee7e559e1658ee0ed66f9450a48a4a83c72b7763\": container with ID starting with cb09d1ee45509c0f5592bf52ee7e559e1658ee0ed66f9450a48a4a83c72b7763 not found: ID does not exist" Feb 17 17:15:02 crc kubenswrapper[4874]: I0217 17:15:02.113297 4874 scope.go:117] "RemoveContainer" containerID="5a9d384e957a1b86cd7051427f853552725d62ef4fb9db1e29bf8957ed072ee6" Feb 17 17:15:02 crc kubenswrapper[4874]: E0217 17:15:02.113675 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a9d384e957a1b86cd7051427f853552725d62ef4fb9db1e29bf8957ed072ee6\": container with ID starting with 5a9d384e957a1b86cd7051427f853552725d62ef4fb9db1e29bf8957ed072ee6 not found: ID does not exist" containerID="5a9d384e957a1b86cd7051427f853552725d62ef4fb9db1e29bf8957ed072ee6" Feb 17 17:15:02 crc kubenswrapper[4874]: I0217 17:15:02.113722 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a9d384e957a1b86cd7051427f853552725d62ef4fb9db1e29bf8957ed072ee6"} err="failed to get container status \"5a9d384e957a1b86cd7051427f853552725d62ef4fb9db1e29bf8957ed072ee6\": rpc error: code = NotFound desc = could not find container \"5a9d384e957a1b86cd7051427f853552725d62ef4fb9db1e29bf8957ed072ee6\": container with ID 
starting with 5a9d384e957a1b86cd7051427f853552725d62ef4fb9db1e29bf8957ed072ee6 not found: ID does not exist" Feb 17 17:15:02 crc kubenswrapper[4874]: I0217 17:15:02.113752 4874 scope.go:117] "RemoveContainer" containerID="2a0b19256abc7429ed9d710e17476271276ea75a645779b8429c1e166218e782" Feb 17 17:15:02 crc kubenswrapper[4874]: E0217 17:15:02.114062 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a0b19256abc7429ed9d710e17476271276ea75a645779b8429c1e166218e782\": container with ID starting with 2a0b19256abc7429ed9d710e17476271276ea75a645779b8429c1e166218e782 not found: ID does not exist" containerID="2a0b19256abc7429ed9d710e17476271276ea75a645779b8429c1e166218e782" Feb 17 17:15:02 crc kubenswrapper[4874]: I0217 17:15:02.114118 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a0b19256abc7429ed9d710e17476271276ea75a645779b8429c1e166218e782"} err="failed to get container status \"2a0b19256abc7429ed9d710e17476271276ea75a645779b8429c1e166218e782\": rpc error: code = NotFound desc = could not find container \"2a0b19256abc7429ed9d710e17476271276ea75a645779b8429c1e166218e782\": container with ID starting with 2a0b19256abc7429ed9d710e17476271276ea75a645779b8429c1e166218e782 not found: ID does not exist" Feb 17 17:15:02 crc kubenswrapper[4874]: I0217 17:15:02.470634 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69005551-84e0-4922-aa8f-6dc530cc4160" path="/var/lib/kubelet/pods/69005551-84e0-4922-aa8f-6dc530cc4160/volumes" Feb 17 17:15:03 crc kubenswrapper[4874]: I0217 17:15:03.483368 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-t5dpn" Feb 17 17:15:03 crc kubenswrapper[4874]: I0217 17:15:03.507967 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7a900a3-712b-4cdf-b224-7a605ce0053b-config-volume\") pod \"d7a900a3-712b-4cdf-b224-7a605ce0053b\" (UID: \"d7a900a3-712b-4cdf-b224-7a605ce0053b\") " Feb 17 17:15:03 crc kubenswrapper[4874]: I0217 17:15:03.508177 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gzhj\" (UniqueName: \"kubernetes.io/projected/d7a900a3-712b-4cdf-b224-7a605ce0053b-kube-api-access-5gzhj\") pod \"d7a900a3-712b-4cdf-b224-7a605ce0053b\" (UID: \"d7a900a3-712b-4cdf-b224-7a605ce0053b\") " Feb 17 17:15:03 crc kubenswrapper[4874]: I0217 17:15:03.508416 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7a900a3-712b-4cdf-b224-7a605ce0053b-secret-volume\") pod \"d7a900a3-712b-4cdf-b224-7a605ce0053b\" (UID: \"d7a900a3-712b-4cdf-b224-7a605ce0053b\") " Feb 17 17:15:03 crc kubenswrapper[4874]: I0217 17:15:03.509717 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7a900a3-712b-4cdf-b224-7a605ce0053b-config-volume" (OuterVolumeSpecName: "config-volume") pod "d7a900a3-712b-4cdf-b224-7a605ce0053b" (UID: "d7a900a3-712b-4cdf-b224-7a605ce0053b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:15:03 crc kubenswrapper[4874]: I0217 17:15:03.520689 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7a900a3-712b-4cdf-b224-7a605ce0053b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d7a900a3-712b-4cdf-b224-7a605ce0053b" (UID: "d7a900a3-712b-4cdf-b224-7a605ce0053b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:15:03 crc kubenswrapper[4874]: I0217 17:15:03.526363 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7a900a3-712b-4cdf-b224-7a605ce0053b-kube-api-access-5gzhj" (OuterVolumeSpecName: "kube-api-access-5gzhj") pod "d7a900a3-712b-4cdf-b224-7a605ce0053b" (UID: "d7a900a3-712b-4cdf-b224-7a605ce0053b"). InnerVolumeSpecName "kube-api-access-5gzhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:15:03 crc kubenswrapper[4874]: I0217 17:15:03.612162 4874 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7a900a3-712b-4cdf-b224-7a605ce0053b-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 17:15:03 crc kubenswrapper[4874]: I0217 17:15:03.612204 4874 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7a900a3-712b-4cdf-b224-7a605ce0053b-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 17:15:03 crc kubenswrapper[4874]: I0217 17:15:03.612215 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gzhj\" (UniqueName: \"kubernetes.io/projected/d7a900a3-712b-4cdf-b224-7a605ce0053b-kube-api-access-5gzhj\") on node \"crc\" DevicePath \"\"" Feb 17 17:15:04 crc kubenswrapper[4874]: I0217 17:15:04.021636 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-t5dpn" event={"ID":"d7a900a3-712b-4cdf-b224-7a605ce0053b","Type":"ContainerDied","Data":"1acd30037a3a00e142a678a99f92892c7f79fd28c34df4336b8f94db1ea85bcf"} Feb 17 17:15:04 crc kubenswrapper[4874]: I0217 17:15:04.021678 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1acd30037a3a00e142a678a99f92892c7f79fd28c34df4336b8f94db1ea85bcf" Feb 17 17:15:04 crc kubenswrapper[4874]: I0217 17:15:04.021731 4874 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-t5dpn" Feb 17 17:15:04 crc kubenswrapper[4874]: I0217 17:15:04.569456 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522430-pfscz"] Feb 17 17:15:04 crc kubenswrapper[4874]: I0217 17:15:04.585996 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522430-pfscz"] Feb 17 17:15:05 crc kubenswrapper[4874]: E0217 17:15:05.461300 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:15:06 crc kubenswrapper[4874]: I0217 17:15:06.475798 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd" path="/var/lib/kubelet/pods/7e2f4557-1aac-4ca6-91a4-f1632b6b2dbd/volumes" Feb 17 17:15:10 crc kubenswrapper[4874]: E0217 17:15:10.591927 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 17:15:10 crc kubenswrapper[4874]: E0217 17:15:10.592264 4874 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 17:15:10 crc kubenswrapper[4874]: E0217 17:15:10.592386 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtgn
h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-ddhb8_openstack(122736d5-78f5-42dc-b6ab-343724bac19d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:15:10 crc kubenswrapper[4874]: E0217 17:15:10.594185 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:15:19 crc kubenswrapper[4874]: E0217 17:15:19.460273 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:15:24 crc kubenswrapper[4874]: E0217 17:15:24.459206 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:15:27 crc kubenswrapper[4874]: I0217 17:15:27.725218 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:15:27 crc kubenswrapper[4874]: I0217 17:15:27.725840 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:15:27 crc kubenswrapper[4874]: I0217 17:15:27.725905 4874 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" Feb 17 17:15:27 crc kubenswrapper[4874]: I0217 17:15:27.727273 4874 kuberuntime_manager.go:1027] "Message for Container of 
pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"320e1274d23336488261677302e04e3092a84dcf44ec4b9fe376a0a58ba24575"} pod="openshift-machine-config-operator/machine-config-daemon-cccdg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 17:15:27 crc kubenswrapper[4874]: I0217 17:15:27.727355 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" containerID="cri-o://320e1274d23336488261677302e04e3092a84dcf44ec4b9fe376a0a58ba24575" gracePeriod=600 Feb 17 17:15:28 crc kubenswrapper[4874]: I0217 17:15:28.270914 4874 generic.go:334] "Generic (PLEG): container finished" podID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerID="320e1274d23336488261677302e04e3092a84dcf44ec4b9fe376a0a58ba24575" exitCode=0 Feb 17 17:15:28 crc kubenswrapper[4874]: I0217 17:15:28.271217 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerDied","Data":"320e1274d23336488261677302e04e3092a84dcf44ec4b9fe376a0a58ba24575"} Feb 17 17:15:28 crc kubenswrapper[4874]: I0217 17:15:28.271245 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerStarted","Data":"eceef711827f302ce1c5e08eb20c45a53f4d07e75160aef288fe7efb1172e930"} Feb 17 17:15:28 crc kubenswrapper[4874]: I0217 17:15:28.271262 4874 scope.go:117] "RemoveContainer" containerID="af26f937ef4337232056dc6a25d3e35399d36e90ff25e903f5e98c2da9cfdecc" Feb 17 17:15:34 crc kubenswrapper[4874]: E0217 17:15:34.459607 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" 
with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:15:38 crc kubenswrapper[4874]: E0217 17:15:38.460902 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:15:47 crc kubenswrapper[4874]: E0217 17:15:47.476941 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:15:51 crc kubenswrapper[4874]: I0217 17:15:51.329371 4874 scope.go:117] "RemoveContainer" containerID="1e776a6b2479a83db921657fb3fcd42f6b6d24385c5597ba8802b70192a305a3" Feb 17 17:15:51 crc kubenswrapper[4874]: I0217 17:15:51.357612 4874 scope.go:117] "RemoveContainer" containerID="7ba0f9d1978d7131f6f8c4101c8b1f547968547e8bd668f473d0d32eb7ab56a9" Feb 17 17:15:51 crc kubenswrapper[4874]: I0217 17:15:51.382043 4874 scope.go:117] "RemoveContainer" containerID="96f55cd80a8d2c2ae0ade72e42fccc20b678966172bae5b9d1288e4f15b43b17" Feb 17 17:15:51 crc kubenswrapper[4874]: I0217 17:15:51.446125 4874 scope.go:117] "RemoveContainer" containerID="225f638a14b6136b1e764d41255b2dd4b91adefc0abbc2e1418dfeab2b04a460" Feb 17 17:15:53 crc kubenswrapper[4874]: E0217 17:15:53.459597 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:16:01 crc kubenswrapper[4874]: E0217 17:16:01.460093 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:16:07 crc kubenswrapper[4874]: E0217 17:16:07.459479 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:16:12 crc kubenswrapper[4874]: E0217 17:16:12.461068 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:16:19 crc kubenswrapper[4874]: E0217 17:16:19.460713 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:16:24 crc kubenswrapper[4874]: E0217 17:16:24.460506 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:16:31 crc kubenswrapper[4874]: E0217 17:16:31.459968 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:16:39 crc kubenswrapper[4874]: E0217 17:16:39.460253 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:16:43 crc kubenswrapper[4874]: E0217 17:16:43.459653 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:16:50 crc kubenswrapper[4874]: E0217 17:16:50.471013 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:16:56 crc kubenswrapper[4874]: E0217 17:16:56.460358 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:16:58 crc kubenswrapper[4874]: E0217 17:16:58.859315 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7489caf0_d625_4d40_829f_34558a80ad7a.slice/crio-conmon-17d661093107fd53a1fe08533af7dc3229a52d2c22a31ed23c3233772a0e0422.scope\": RecentStats: unable to find data in memory cache]" Feb 17 17:16:59 crc kubenswrapper[4874]: I0217 17:16:59.276507 4874 generic.go:334] "Generic (PLEG): container finished" podID="7489caf0-d625-4d40-829f-34558a80ad7a" containerID="17d661093107fd53a1fe08533af7dc3229a52d2c22a31ed23c3233772a0e0422" exitCode=2 Feb 17 17:16:59 crc kubenswrapper[4874]: I0217 17:16:59.276598 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bhxk9" event={"ID":"7489caf0-d625-4d40-829f-34558a80ad7a","Type":"ContainerDied","Data":"17d661093107fd53a1fe08533af7dc3229a52d2c22a31ed23c3233772a0e0422"} Feb 17 17:17:00 crc kubenswrapper[4874]: I0217 17:17:00.820364 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bhxk9" Feb 17 17:17:00 crc kubenswrapper[4874]: I0217 17:17:00.867266 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnrm2\" (UniqueName: \"kubernetes.io/projected/7489caf0-d625-4d40-829f-34558a80ad7a-kube-api-access-cnrm2\") pod \"7489caf0-d625-4d40-829f-34558a80ad7a\" (UID: \"7489caf0-d625-4d40-829f-34558a80ad7a\") " Feb 17 17:17:00 crc kubenswrapper[4874]: I0217 17:17:00.867724 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7489caf0-d625-4d40-829f-34558a80ad7a-ssh-key-openstack-edpm-ipam\") pod \"7489caf0-d625-4d40-829f-34558a80ad7a\" (UID: \"7489caf0-d625-4d40-829f-34558a80ad7a\") " Feb 17 17:17:00 crc kubenswrapper[4874]: I0217 17:17:00.867820 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7489caf0-d625-4d40-829f-34558a80ad7a-inventory\") pod \"7489caf0-d625-4d40-829f-34558a80ad7a\" (UID: \"7489caf0-d625-4d40-829f-34558a80ad7a\") " Feb 17 17:17:00 crc kubenswrapper[4874]: I0217 17:17:00.877540 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7489caf0-d625-4d40-829f-34558a80ad7a-kube-api-access-cnrm2" (OuterVolumeSpecName: "kube-api-access-cnrm2") pod "7489caf0-d625-4d40-829f-34558a80ad7a" (UID: "7489caf0-d625-4d40-829f-34558a80ad7a"). InnerVolumeSpecName "kube-api-access-cnrm2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:17:00 crc kubenswrapper[4874]: I0217 17:17:00.906239 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7489caf0-d625-4d40-829f-34558a80ad7a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7489caf0-d625-4d40-829f-34558a80ad7a" (UID: "7489caf0-d625-4d40-829f-34558a80ad7a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:17:00 crc kubenswrapper[4874]: I0217 17:17:00.926246 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7489caf0-d625-4d40-829f-34558a80ad7a-inventory" (OuterVolumeSpecName: "inventory") pod "7489caf0-d625-4d40-829f-34558a80ad7a" (UID: "7489caf0-d625-4d40-829f-34558a80ad7a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:17:00 crc kubenswrapper[4874]: I0217 17:17:00.969497 4874 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7489caf0-d625-4d40-829f-34558a80ad7a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 17:17:00 crc kubenswrapper[4874]: I0217 17:17:00.969901 4874 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7489caf0-d625-4d40-829f-34558a80ad7a-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 17:17:00 crc kubenswrapper[4874]: I0217 17:17:00.969916 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnrm2\" (UniqueName: \"kubernetes.io/projected/7489caf0-d625-4d40-829f-34558a80ad7a-kube-api-access-cnrm2\") on node \"crc\" DevicePath \"\"" Feb 17 17:17:01 crc kubenswrapper[4874]: I0217 17:17:01.299192 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bhxk9" 
event={"ID":"7489caf0-d625-4d40-829f-34558a80ad7a","Type":"ContainerDied","Data":"2c8e05e87ef67166dace8185c31392a2fdadc34cefc669adfb7a97bd6dde388c"} Feb 17 17:17:01 crc kubenswrapper[4874]: I0217 17:17:01.299227 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c8e05e87ef67166dace8185c31392a2fdadc34cefc669adfb7a97bd6dde388c" Feb 17 17:17:01 crc kubenswrapper[4874]: I0217 17:17:01.299279 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bhxk9" Feb 17 17:17:05 crc kubenswrapper[4874]: E0217 17:17:05.460604 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:17:07 crc kubenswrapper[4874]: I0217 17:17:07.888878 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qkjcm"] Feb 17 17:17:07 crc kubenswrapper[4874]: E0217 17:17:07.889766 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69005551-84e0-4922-aa8f-6dc530cc4160" containerName="extract-content" Feb 17 17:17:07 crc kubenswrapper[4874]: I0217 17:17:07.889785 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="69005551-84e0-4922-aa8f-6dc530cc4160" containerName="extract-content" Feb 17 17:17:07 crc kubenswrapper[4874]: E0217 17:17:07.889799 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69005551-84e0-4922-aa8f-6dc530cc4160" containerName="registry-server" Feb 17 17:17:07 crc kubenswrapper[4874]: I0217 17:17:07.889808 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="69005551-84e0-4922-aa8f-6dc530cc4160" containerName="registry-server" Feb 17 17:17:07 crc kubenswrapper[4874]: 
E0217 17:17:07.889828 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7489caf0-d625-4d40-829f-34558a80ad7a" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:17:07 crc kubenswrapper[4874]: I0217 17:17:07.889838 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="7489caf0-d625-4d40-829f-34558a80ad7a" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:17:07 crc kubenswrapper[4874]: E0217 17:17:07.889878 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a900a3-712b-4cdf-b224-7a605ce0053b" containerName="collect-profiles" Feb 17 17:17:07 crc kubenswrapper[4874]: I0217 17:17:07.889886 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a900a3-712b-4cdf-b224-7a605ce0053b" containerName="collect-profiles" Feb 17 17:17:07 crc kubenswrapper[4874]: E0217 17:17:07.889925 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69005551-84e0-4922-aa8f-6dc530cc4160" containerName="extract-utilities" Feb 17 17:17:07 crc kubenswrapper[4874]: I0217 17:17:07.889934 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="69005551-84e0-4922-aa8f-6dc530cc4160" containerName="extract-utilities" Feb 17 17:17:07 crc kubenswrapper[4874]: I0217 17:17:07.890238 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="69005551-84e0-4922-aa8f-6dc530cc4160" containerName="registry-server" Feb 17 17:17:07 crc kubenswrapper[4874]: I0217 17:17:07.890271 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="7489caf0-d625-4d40-829f-34558a80ad7a" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:17:07 crc kubenswrapper[4874]: I0217 17:17:07.890295 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7a900a3-712b-4cdf-b224-7a605ce0053b" containerName="collect-profiles" Feb 17 17:17:07 crc kubenswrapper[4874]: I0217 17:17:07.893994 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qkjcm" Feb 17 17:17:07 crc kubenswrapper[4874]: I0217 17:17:07.915067 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qkjcm"] Feb 17 17:17:08 crc kubenswrapper[4874]: I0217 17:17:08.064811 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c27dd992-4ada-47f2-b8eb-da2fee20b636-catalog-content\") pod \"redhat-marketplace-qkjcm\" (UID: \"c27dd992-4ada-47f2-b8eb-da2fee20b636\") " pod="openshift-marketplace/redhat-marketplace-qkjcm" Feb 17 17:17:08 crc kubenswrapper[4874]: I0217 17:17:08.064918 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfgd9\" (UniqueName: \"kubernetes.io/projected/c27dd992-4ada-47f2-b8eb-da2fee20b636-kube-api-access-jfgd9\") pod \"redhat-marketplace-qkjcm\" (UID: \"c27dd992-4ada-47f2-b8eb-da2fee20b636\") " pod="openshift-marketplace/redhat-marketplace-qkjcm" Feb 17 17:17:08 crc kubenswrapper[4874]: I0217 17:17:08.065366 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c27dd992-4ada-47f2-b8eb-da2fee20b636-utilities\") pod \"redhat-marketplace-qkjcm\" (UID: \"c27dd992-4ada-47f2-b8eb-da2fee20b636\") " pod="openshift-marketplace/redhat-marketplace-qkjcm" Feb 17 17:17:08 crc kubenswrapper[4874]: I0217 17:17:08.167810 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c27dd992-4ada-47f2-b8eb-da2fee20b636-utilities\") pod \"redhat-marketplace-qkjcm\" (UID: \"c27dd992-4ada-47f2-b8eb-da2fee20b636\") " pod="openshift-marketplace/redhat-marketplace-qkjcm" Feb 17 17:17:08 crc kubenswrapper[4874]: I0217 17:17:08.168094 4874 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c27dd992-4ada-47f2-b8eb-da2fee20b636-catalog-content\") pod \"redhat-marketplace-qkjcm\" (UID: \"c27dd992-4ada-47f2-b8eb-da2fee20b636\") " pod="openshift-marketplace/redhat-marketplace-qkjcm" Feb 17 17:17:08 crc kubenswrapper[4874]: I0217 17:17:08.168150 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfgd9\" (UniqueName: \"kubernetes.io/projected/c27dd992-4ada-47f2-b8eb-da2fee20b636-kube-api-access-jfgd9\") pod \"redhat-marketplace-qkjcm\" (UID: \"c27dd992-4ada-47f2-b8eb-da2fee20b636\") " pod="openshift-marketplace/redhat-marketplace-qkjcm" Feb 17 17:17:08 crc kubenswrapper[4874]: I0217 17:17:08.168415 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c27dd992-4ada-47f2-b8eb-da2fee20b636-utilities\") pod \"redhat-marketplace-qkjcm\" (UID: \"c27dd992-4ada-47f2-b8eb-da2fee20b636\") " pod="openshift-marketplace/redhat-marketplace-qkjcm" Feb 17 17:17:08 crc kubenswrapper[4874]: I0217 17:17:08.168908 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c27dd992-4ada-47f2-b8eb-da2fee20b636-catalog-content\") pod \"redhat-marketplace-qkjcm\" (UID: \"c27dd992-4ada-47f2-b8eb-da2fee20b636\") " pod="openshift-marketplace/redhat-marketplace-qkjcm" Feb 17 17:17:08 crc kubenswrapper[4874]: I0217 17:17:08.187615 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfgd9\" (UniqueName: \"kubernetes.io/projected/c27dd992-4ada-47f2-b8eb-da2fee20b636-kube-api-access-jfgd9\") pod \"redhat-marketplace-qkjcm\" (UID: \"c27dd992-4ada-47f2-b8eb-da2fee20b636\") " pod="openshift-marketplace/redhat-marketplace-qkjcm" Feb 17 17:17:08 crc kubenswrapper[4874]: I0217 17:17:08.218788 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qkjcm" Feb 17 17:17:08 crc kubenswrapper[4874]: I0217 17:17:08.696547 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qkjcm"] Feb 17 17:17:09 crc kubenswrapper[4874]: E0217 17:17:09.464202 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:17:10 crc kubenswrapper[4874]: I0217 17:17:10.398317 4874 generic.go:334] "Generic (PLEG): container finished" podID="c27dd992-4ada-47f2-b8eb-da2fee20b636" containerID="e561d6494396c1c88380440791e7c4e07845c29a09963efd7fdccb1c85bd670e" exitCode=0 Feb 17 17:17:10 crc kubenswrapper[4874]: I0217 17:17:10.398437 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkjcm" event={"ID":"c27dd992-4ada-47f2-b8eb-da2fee20b636","Type":"ContainerDied","Data":"e561d6494396c1c88380440791e7c4e07845c29a09963efd7fdccb1c85bd670e"} Feb 17 17:17:10 crc kubenswrapper[4874]: I0217 17:17:10.398670 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkjcm" event={"ID":"c27dd992-4ada-47f2-b8eb-da2fee20b636","Type":"ContainerStarted","Data":"ba64416639da6736a8b620510264a05dc9dda085c7bed770d8f3a04e85b8d661"} Feb 17 17:17:12 crc kubenswrapper[4874]: I0217 17:17:12.421980 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkjcm" event={"ID":"c27dd992-4ada-47f2-b8eb-da2fee20b636","Type":"ContainerStarted","Data":"3bc694f249f8bd5386ce81c596083c45dccda9c58b4b9163913001ad11b8f8a3"} Feb 17 17:17:13 crc kubenswrapper[4874]: I0217 17:17:13.432799 4874 generic.go:334] "Generic (PLEG): container finished" 
podID="c27dd992-4ada-47f2-b8eb-da2fee20b636" containerID="3bc694f249f8bd5386ce81c596083c45dccda9c58b4b9163913001ad11b8f8a3" exitCode=0 Feb 17 17:17:13 crc kubenswrapper[4874]: I0217 17:17:13.432871 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkjcm" event={"ID":"c27dd992-4ada-47f2-b8eb-da2fee20b636","Type":"ContainerDied","Data":"3bc694f249f8bd5386ce81c596083c45dccda9c58b4b9163913001ad11b8f8a3"} Feb 17 17:17:14 crc kubenswrapper[4874]: I0217 17:17:14.447065 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkjcm" event={"ID":"c27dd992-4ada-47f2-b8eb-da2fee20b636","Type":"ContainerStarted","Data":"562267e6bfd8d16875134f2eeeabbe883a30301847103d3b91ad62e31bcdfd29"} Feb 17 17:17:17 crc kubenswrapper[4874]: E0217 17:17:17.460805 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:17:18 crc kubenswrapper[4874]: I0217 17:17:18.219526 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qkjcm" Feb 17 17:17:18 crc kubenswrapper[4874]: I0217 17:17:18.219853 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qkjcm" Feb 17 17:17:18 crc kubenswrapper[4874]: I0217 17:17:18.272262 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qkjcm" Feb 17 17:17:18 crc kubenswrapper[4874]: I0217 17:17:18.299667 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qkjcm" podStartSLOduration=7.890277715 
podStartE2EDuration="11.299645758s" podCreationTimestamp="2026-02-17 17:17:07 +0000 UTC" firstStartedPulling="2026-02-17 17:17:10.400437033 +0000 UTC m=+4440.694825584" lastFinishedPulling="2026-02-17 17:17:13.809805076 +0000 UTC m=+4444.104193627" observedRunningTime="2026-02-17 17:17:14.477382083 +0000 UTC m=+4444.771770664" watchObservedRunningTime="2026-02-17 17:17:18.299645758 +0000 UTC m=+4448.594034339" Feb 17 17:17:20 crc kubenswrapper[4874]: E0217 17:17:20.468258 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:17:28 crc kubenswrapper[4874]: I0217 17:17:28.282642 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qkjcm" Feb 17 17:17:28 crc kubenswrapper[4874]: I0217 17:17:28.338844 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qkjcm"] Feb 17 17:17:28 crc kubenswrapper[4874]: E0217 17:17:28.460476 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:17:28 crc kubenswrapper[4874]: I0217 17:17:28.594481 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qkjcm" podUID="c27dd992-4ada-47f2-b8eb-da2fee20b636" containerName="registry-server" containerID="cri-o://562267e6bfd8d16875134f2eeeabbe883a30301847103d3b91ad62e31bcdfd29" gracePeriod=2 Feb 17 17:17:28 crc kubenswrapper[4874]: E0217 
17:17:28.823686 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc27dd992_4ada_47f2_b8eb_da2fee20b636.slice/crio-562267e6bfd8d16875134f2eeeabbe883a30301847103d3b91ad62e31bcdfd29.scope\": RecentStats: unable to find data in memory cache]" Feb 17 17:17:29 crc kubenswrapper[4874]: I0217 17:17:29.110466 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qkjcm" Feb 17 17:17:29 crc kubenswrapper[4874]: I0217 17:17:29.188404 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c27dd992-4ada-47f2-b8eb-da2fee20b636-catalog-content\") pod \"c27dd992-4ada-47f2-b8eb-da2fee20b636\" (UID: \"c27dd992-4ada-47f2-b8eb-da2fee20b636\") " Feb 17 17:17:29 crc kubenswrapper[4874]: I0217 17:17:29.188669 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfgd9\" (UniqueName: \"kubernetes.io/projected/c27dd992-4ada-47f2-b8eb-da2fee20b636-kube-api-access-jfgd9\") pod \"c27dd992-4ada-47f2-b8eb-da2fee20b636\" (UID: \"c27dd992-4ada-47f2-b8eb-da2fee20b636\") " Feb 17 17:17:29 crc kubenswrapper[4874]: I0217 17:17:29.188689 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c27dd992-4ada-47f2-b8eb-da2fee20b636-utilities\") pod \"c27dd992-4ada-47f2-b8eb-da2fee20b636\" (UID: \"c27dd992-4ada-47f2-b8eb-da2fee20b636\") " Feb 17 17:17:29 crc kubenswrapper[4874]: I0217 17:17:29.190159 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c27dd992-4ada-47f2-b8eb-da2fee20b636-utilities" (OuterVolumeSpecName: "utilities") pod "c27dd992-4ada-47f2-b8eb-da2fee20b636" (UID: "c27dd992-4ada-47f2-b8eb-da2fee20b636"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:17:29 crc kubenswrapper[4874]: I0217 17:17:29.195692 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c27dd992-4ada-47f2-b8eb-da2fee20b636-kube-api-access-jfgd9" (OuterVolumeSpecName: "kube-api-access-jfgd9") pod "c27dd992-4ada-47f2-b8eb-da2fee20b636" (UID: "c27dd992-4ada-47f2-b8eb-da2fee20b636"). InnerVolumeSpecName "kube-api-access-jfgd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:17:29 crc kubenswrapper[4874]: I0217 17:17:29.212709 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c27dd992-4ada-47f2-b8eb-da2fee20b636-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c27dd992-4ada-47f2-b8eb-da2fee20b636" (UID: "c27dd992-4ada-47f2-b8eb-da2fee20b636"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:17:29 crc kubenswrapper[4874]: I0217 17:17:29.292798 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c27dd992-4ada-47f2-b8eb-da2fee20b636-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:17:29 crc kubenswrapper[4874]: I0217 17:17:29.292899 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfgd9\" (UniqueName: \"kubernetes.io/projected/c27dd992-4ada-47f2-b8eb-da2fee20b636-kube-api-access-jfgd9\") on node \"crc\" DevicePath \"\"" Feb 17 17:17:29 crc kubenswrapper[4874]: I0217 17:17:29.292922 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c27dd992-4ada-47f2-b8eb-da2fee20b636-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:17:29 crc kubenswrapper[4874]: I0217 17:17:29.608679 4874 generic.go:334] "Generic (PLEG): container finished" podID="c27dd992-4ada-47f2-b8eb-da2fee20b636" 
containerID="562267e6bfd8d16875134f2eeeabbe883a30301847103d3b91ad62e31bcdfd29" exitCode=0 Feb 17 17:17:29 crc kubenswrapper[4874]: I0217 17:17:29.608727 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkjcm" event={"ID":"c27dd992-4ada-47f2-b8eb-da2fee20b636","Type":"ContainerDied","Data":"562267e6bfd8d16875134f2eeeabbe883a30301847103d3b91ad62e31bcdfd29"} Feb 17 17:17:29 crc kubenswrapper[4874]: I0217 17:17:29.608742 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qkjcm" Feb 17 17:17:29 crc kubenswrapper[4874]: I0217 17:17:29.608759 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkjcm" event={"ID":"c27dd992-4ada-47f2-b8eb-da2fee20b636","Type":"ContainerDied","Data":"ba64416639da6736a8b620510264a05dc9dda085c7bed770d8f3a04e85b8d661"} Feb 17 17:17:29 crc kubenswrapper[4874]: I0217 17:17:29.608780 4874 scope.go:117] "RemoveContainer" containerID="562267e6bfd8d16875134f2eeeabbe883a30301847103d3b91ad62e31bcdfd29" Feb 17 17:17:29 crc kubenswrapper[4874]: I0217 17:17:29.641823 4874 scope.go:117] "RemoveContainer" containerID="3bc694f249f8bd5386ce81c596083c45dccda9c58b4b9163913001ad11b8f8a3" Feb 17 17:17:29 crc kubenswrapper[4874]: I0217 17:17:29.650092 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qkjcm"] Feb 17 17:17:29 crc kubenswrapper[4874]: I0217 17:17:29.660416 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qkjcm"] Feb 17 17:17:29 crc kubenswrapper[4874]: I0217 17:17:29.686682 4874 scope.go:117] "RemoveContainer" containerID="e561d6494396c1c88380440791e7c4e07845c29a09963efd7fdccb1c85bd670e" Feb 17 17:17:29 crc kubenswrapper[4874]: I0217 17:17:29.730702 4874 scope.go:117] "RemoveContainer" containerID="562267e6bfd8d16875134f2eeeabbe883a30301847103d3b91ad62e31bcdfd29" Feb 17 
17:17:29 crc kubenswrapper[4874]: E0217 17:17:29.731329 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"562267e6bfd8d16875134f2eeeabbe883a30301847103d3b91ad62e31bcdfd29\": container with ID starting with 562267e6bfd8d16875134f2eeeabbe883a30301847103d3b91ad62e31bcdfd29 not found: ID does not exist" containerID="562267e6bfd8d16875134f2eeeabbe883a30301847103d3b91ad62e31bcdfd29" Feb 17 17:17:29 crc kubenswrapper[4874]: I0217 17:17:29.731497 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"562267e6bfd8d16875134f2eeeabbe883a30301847103d3b91ad62e31bcdfd29"} err="failed to get container status \"562267e6bfd8d16875134f2eeeabbe883a30301847103d3b91ad62e31bcdfd29\": rpc error: code = NotFound desc = could not find container \"562267e6bfd8d16875134f2eeeabbe883a30301847103d3b91ad62e31bcdfd29\": container with ID starting with 562267e6bfd8d16875134f2eeeabbe883a30301847103d3b91ad62e31bcdfd29 not found: ID does not exist" Feb 17 17:17:29 crc kubenswrapper[4874]: I0217 17:17:29.731538 4874 scope.go:117] "RemoveContainer" containerID="3bc694f249f8bd5386ce81c596083c45dccda9c58b4b9163913001ad11b8f8a3" Feb 17 17:17:29 crc kubenswrapper[4874]: E0217 17:17:29.732068 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bc694f249f8bd5386ce81c596083c45dccda9c58b4b9163913001ad11b8f8a3\": container with ID starting with 3bc694f249f8bd5386ce81c596083c45dccda9c58b4b9163913001ad11b8f8a3 not found: ID does not exist" containerID="3bc694f249f8bd5386ce81c596083c45dccda9c58b4b9163913001ad11b8f8a3" Feb 17 17:17:29 crc kubenswrapper[4874]: I0217 17:17:29.732118 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bc694f249f8bd5386ce81c596083c45dccda9c58b4b9163913001ad11b8f8a3"} err="failed to get container status 
\"3bc694f249f8bd5386ce81c596083c45dccda9c58b4b9163913001ad11b8f8a3\": rpc error: code = NotFound desc = could not find container \"3bc694f249f8bd5386ce81c596083c45dccda9c58b4b9163913001ad11b8f8a3\": container with ID starting with 3bc694f249f8bd5386ce81c596083c45dccda9c58b4b9163913001ad11b8f8a3 not found: ID does not exist" Feb 17 17:17:29 crc kubenswrapper[4874]: I0217 17:17:29.732145 4874 scope.go:117] "RemoveContainer" containerID="e561d6494396c1c88380440791e7c4e07845c29a09963efd7fdccb1c85bd670e" Feb 17 17:17:29 crc kubenswrapper[4874]: E0217 17:17:29.732508 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e561d6494396c1c88380440791e7c4e07845c29a09963efd7fdccb1c85bd670e\": container with ID starting with e561d6494396c1c88380440791e7c4e07845c29a09963efd7fdccb1c85bd670e not found: ID does not exist" containerID="e561d6494396c1c88380440791e7c4e07845c29a09963efd7fdccb1c85bd670e" Feb 17 17:17:29 crc kubenswrapper[4874]: I0217 17:17:29.732544 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e561d6494396c1c88380440791e7c4e07845c29a09963efd7fdccb1c85bd670e"} err="failed to get container status \"e561d6494396c1c88380440791e7c4e07845c29a09963efd7fdccb1c85bd670e\": rpc error: code = NotFound desc = could not find container \"e561d6494396c1c88380440791e7c4e07845c29a09963efd7fdccb1c85bd670e\": container with ID starting with e561d6494396c1c88380440791e7c4e07845c29a09963efd7fdccb1c85bd670e not found: ID does not exist" Feb 17 17:17:30 crc kubenswrapper[4874]: I0217 17:17:30.468282 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c27dd992-4ada-47f2-b8eb-da2fee20b636" path="/var/lib/kubelet/pods/c27dd992-4ada-47f2-b8eb-da2fee20b636/volumes" Feb 17 17:17:35 crc kubenswrapper[4874]: E0217 17:17:35.460128 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:17:42 crc kubenswrapper[4874]: E0217 17:17:42.461413 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:17:49 crc kubenswrapper[4874]: E0217 17:17:49.461606 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:17:55 crc kubenswrapper[4874]: E0217 17:17:55.461152 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:17:57 crc kubenswrapper[4874]: I0217 17:17:57.726183 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:17:57 crc kubenswrapper[4874]: I0217 17:17:57.726464 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" 
podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:18:00 crc kubenswrapper[4874]: E0217 17:18:00.467738 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:18:07 crc kubenswrapper[4874]: E0217 17:18:07.459809 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:18:13 crc kubenswrapper[4874]: E0217 17:18:13.459405 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:18:19 crc kubenswrapper[4874]: E0217 17:18:19.459872 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:18:24 crc kubenswrapper[4874]: E0217 17:18:24.459361 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:18:27 crc kubenswrapper[4874]: I0217 17:18:27.724516 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:18:27 crc kubenswrapper[4874]: I0217 17:18:27.725264 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:18:30 crc kubenswrapper[4874]: E0217 17:18:30.467621 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:18:38 crc kubenswrapper[4874]: E0217 17:18:38.460338 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:18:41 crc kubenswrapper[4874]: E0217 17:18:41.459636 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:18:53 crc kubenswrapper[4874]: E0217 17:18:53.459253 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:18:54 crc kubenswrapper[4874]: E0217 17:18:54.461236 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:18:57 crc kubenswrapper[4874]: I0217 17:18:57.724802 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:18:57 crc kubenswrapper[4874]: I0217 17:18:57.725437 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:18:57 crc kubenswrapper[4874]: I0217 17:18:57.725492 4874 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" Feb 17 17:18:57 crc kubenswrapper[4874]: I0217 
17:18:57.726534 4874 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eceef711827f302ce1c5e08eb20c45a53f4d07e75160aef288fe7efb1172e930"} pod="openshift-machine-config-operator/machine-config-daemon-cccdg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 17:18:57 crc kubenswrapper[4874]: I0217 17:18:57.726596 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" containerID="cri-o://eceef711827f302ce1c5e08eb20c45a53f4d07e75160aef288fe7efb1172e930" gracePeriod=600 Feb 17 17:18:58 crc kubenswrapper[4874]: I0217 17:18:58.504430 4874 generic.go:334] "Generic (PLEG): container finished" podID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerID="eceef711827f302ce1c5e08eb20c45a53f4d07e75160aef288fe7efb1172e930" exitCode=0 Feb 17 17:18:58 crc kubenswrapper[4874]: I0217 17:18:58.504654 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerDied","Data":"eceef711827f302ce1c5e08eb20c45a53f4d07e75160aef288fe7efb1172e930"} Feb 17 17:18:58 crc kubenswrapper[4874]: I0217 17:18:58.504810 4874 scope.go:117] "RemoveContainer" containerID="320e1274d23336488261677302e04e3092a84dcf44ec4b9fe376a0a58ba24575" Feb 17 17:18:58 crc kubenswrapper[4874]: E0217 17:18:58.543153 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" 
podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:18:59 crc kubenswrapper[4874]: I0217 17:18:59.516201 4874 scope.go:117] "RemoveContainer" containerID="eceef711827f302ce1c5e08eb20c45a53f4d07e75160aef288fe7efb1172e930" Feb 17 17:18:59 crc kubenswrapper[4874]: E0217 17:18:59.516788 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:19:06 crc kubenswrapper[4874]: E0217 17:19:06.459914 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:19:08 crc kubenswrapper[4874]: E0217 17:19:08.460153 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:19:14 crc kubenswrapper[4874]: I0217 17:19:14.458033 4874 scope.go:117] "RemoveContainer" containerID="eceef711827f302ce1c5e08eb20c45a53f4d07e75160aef288fe7efb1172e930" Feb 17 17:19:14 crc kubenswrapper[4874]: E0217 17:19:14.459798 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:19:19 crc kubenswrapper[4874]: E0217 17:19:19.459215 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:19:21 crc kubenswrapper[4874]: E0217 17:19:21.459905 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:19:26 crc kubenswrapper[4874]: I0217 17:19:26.457859 4874 scope.go:117] "RemoveContainer" containerID="eceef711827f302ce1c5e08eb20c45a53f4d07e75160aef288fe7efb1172e930" Feb 17 17:19:26 crc kubenswrapper[4874]: E0217 17:19:26.459122 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:19:31 crc kubenswrapper[4874]: E0217 17:19:31.459625 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:19:32 crc kubenswrapper[4874]: E0217 17:19:32.460618 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:19:38 crc kubenswrapper[4874]: I0217 17:19:38.459160 4874 scope.go:117] "RemoveContainer" containerID="eceef711827f302ce1c5e08eb20c45a53f4d07e75160aef288fe7efb1172e930" Feb 17 17:19:38 crc kubenswrapper[4874]: E0217 17:19:38.460526 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:19:44 crc kubenswrapper[4874]: E0217 17:19:44.460830 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:19:46 crc kubenswrapper[4874]: E0217 17:19:46.460508 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:19:49 crc kubenswrapper[4874]: I0217 17:19:49.457831 4874 scope.go:117] "RemoveContainer" containerID="eceef711827f302ce1c5e08eb20c45a53f4d07e75160aef288fe7efb1172e930" Feb 17 17:19:49 crc kubenswrapper[4874]: E0217 17:19:49.458791 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:19:58 crc kubenswrapper[4874]: E0217 17:19:58.460708 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:20:00 crc kubenswrapper[4874]: I0217 17:20:00.464872 4874 scope.go:117] "RemoveContainer" containerID="eceef711827f302ce1c5e08eb20c45a53f4d07e75160aef288fe7efb1172e930" Feb 17 17:20:00 crc kubenswrapper[4874]: E0217 17:20:00.465731 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:20:01 crc kubenswrapper[4874]: I0217 17:20:01.460995 4874 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 17:20:01 crc 
kubenswrapper[4874]: E0217 17:20:01.601371 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:20:01 crc kubenswrapper[4874]: E0217 17:20:01.601435 4874 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:20:01 crc kubenswrapper[4874]: E0217 17:20:01.601571 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n699h646hf7h59dhdch5d8h679h9dhdch5c7hd5h5bch655h5bfh674h596h5d6h64dh65bh694h67fh66dh5bdhd7h568h697h58bh5b4h59fh694h584h656q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6zkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(cc29c300-b515-47d8-9326-1839ed7772b4): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:20:01 crc kubenswrapper[4874]: E0217 17:20:01.603313 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:20:11 crc kubenswrapper[4874]: E0217 17:20:11.572982 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 17:20:11 crc kubenswrapper[4874]: E0217 17:20:11.573443 4874 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 17:20:11 crc kubenswrapper[4874]: E0217 17:20:11.573599 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtgnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-ddhb8_openstack(122736d5-78f5-42dc-b6ab-343724bac19d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:20:11 crc kubenswrapper[4874]: E0217 17:20:11.575483 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:20:13 crc kubenswrapper[4874]: I0217 17:20:13.458222 4874 scope.go:117] "RemoveContainer" containerID="eceef711827f302ce1c5e08eb20c45a53f4d07e75160aef288fe7efb1172e930" Feb 17 17:20:13 crc kubenswrapper[4874]: E0217 17:20:13.458922 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:20:14 crc kubenswrapper[4874]: E0217 17:20:14.458998 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:20:26 crc kubenswrapper[4874]: I0217 17:20:26.457481 4874 scope.go:117] "RemoveContainer" containerID="eceef711827f302ce1c5e08eb20c45a53f4d07e75160aef288fe7efb1172e930" Feb 17 17:20:26 crc kubenswrapper[4874]: E0217 17:20:26.458333 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:20:26 crc kubenswrapper[4874]: E0217 17:20:26.459741 4874 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:20:27 crc kubenswrapper[4874]: E0217 17:20:27.458921 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:20:38 crc kubenswrapper[4874]: I0217 17:20:38.459177 4874 scope.go:117] "RemoveContainer" containerID="eceef711827f302ce1c5e08eb20c45a53f4d07e75160aef288fe7efb1172e930" Feb 17 17:20:38 crc kubenswrapper[4874]: E0217 17:20:38.460042 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:20:39 crc kubenswrapper[4874]: E0217 17:20:39.460728 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:20:41 crc kubenswrapper[4874]: E0217 17:20:41.460330 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:20:51 crc kubenswrapper[4874]: E0217 17:20:51.460302 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:20:52 crc kubenswrapper[4874]: E0217 17:20:52.460544 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:20:53 crc kubenswrapper[4874]: I0217 17:20:53.457148 4874 scope.go:117] "RemoveContainer" containerID="eceef711827f302ce1c5e08eb20c45a53f4d07e75160aef288fe7efb1172e930" Feb 17 17:20:53 crc kubenswrapper[4874]: E0217 17:20:53.457736 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:21:05 crc kubenswrapper[4874]: E0217 17:21:05.459567 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:21:07 crc kubenswrapper[4874]: E0217 17:21:07.460869 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:21:08 crc kubenswrapper[4874]: I0217 17:21:08.457847 4874 scope.go:117] "RemoveContainer" containerID="eceef711827f302ce1c5e08eb20c45a53f4d07e75160aef288fe7efb1172e930" Feb 17 17:21:08 crc kubenswrapper[4874]: E0217 17:21:08.458580 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:21:17 crc kubenswrapper[4874]: E0217 17:21:17.461726 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:21:18 crc kubenswrapper[4874]: E0217 17:21:18.459860 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:21:19 crc 
kubenswrapper[4874]: I0217 17:21:19.457553 4874 scope.go:117] "RemoveContainer" containerID="eceef711827f302ce1c5e08eb20c45a53f4d07e75160aef288fe7efb1172e930" Feb 17 17:21:19 crc kubenswrapper[4874]: E0217 17:21:19.458346 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:21:30 crc kubenswrapper[4874]: I0217 17:21:30.472942 4874 scope.go:117] "RemoveContainer" containerID="eceef711827f302ce1c5e08eb20c45a53f4d07e75160aef288fe7efb1172e930" Feb 17 17:21:30 crc kubenswrapper[4874]: E0217 17:21:30.473833 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:21:31 crc kubenswrapper[4874]: E0217 17:21:31.460847 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:21:32 crc kubenswrapper[4874]: E0217 17:21:32.459391 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:21:43 crc kubenswrapper[4874]: I0217 17:21:43.458180 4874 scope.go:117] "RemoveContainer" containerID="eceef711827f302ce1c5e08eb20c45a53f4d07e75160aef288fe7efb1172e930" Feb 17 17:21:43 crc kubenswrapper[4874]: E0217 17:21:43.459114 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:21:44 crc kubenswrapper[4874]: E0217 17:21:44.460618 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:21:45 crc kubenswrapper[4874]: E0217 17:21:45.459624 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:21:56 crc kubenswrapper[4874]: I0217 17:21:56.481259 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4klpv"] Feb 17 17:21:56 crc kubenswrapper[4874]: E0217 17:21:56.482413 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c27dd992-4ada-47f2-b8eb-da2fee20b636" 
containerName="registry-server" Feb 17 17:21:56 crc kubenswrapper[4874]: I0217 17:21:56.482430 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="c27dd992-4ada-47f2-b8eb-da2fee20b636" containerName="registry-server" Feb 17 17:21:56 crc kubenswrapper[4874]: E0217 17:21:56.482468 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c27dd992-4ada-47f2-b8eb-da2fee20b636" containerName="extract-utilities" Feb 17 17:21:56 crc kubenswrapper[4874]: I0217 17:21:56.482479 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="c27dd992-4ada-47f2-b8eb-da2fee20b636" containerName="extract-utilities" Feb 17 17:21:56 crc kubenswrapper[4874]: E0217 17:21:56.482492 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c27dd992-4ada-47f2-b8eb-da2fee20b636" containerName="extract-content" Feb 17 17:21:56 crc kubenswrapper[4874]: I0217 17:21:56.482500 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="c27dd992-4ada-47f2-b8eb-da2fee20b636" containerName="extract-content" Feb 17 17:21:56 crc kubenswrapper[4874]: I0217 17:21:56.482808 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="c27dd992-4ada-47f2-b8eb-da2fee20b636" containerName="registry-server" Feb 17 17:21:56 crc kubenswrapper[4874]: I0217 17:21:56.485058 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4klpv"] Feb 17 17:21:56 crc kubenswrapper[4874]: I0217 17:21:56.485247 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4klpv" Feb 17 17:21:56 crc kubenswrapper[4874]: I0217 17:21:56.565784 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwdxf\" (UniqueName: \"kubernetes.io/projected/46a6e803-08ff-44f3-87f9-c4e42585b446-kube-api-access-bwdxf\") pod \"certified-operators-4klpv\" (UID: \"46a6e803-08ff-44f3-87f9-c4e42585b446\") " pod="openshift-marketplace/certified-operators-4klpv" Feb 17 17:21:56 crc kubenswrapper[4874]: I0217 17:21:56.565876 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46a6e803-08ff-44f3-87f9-c4e42585b446-utilities\") pod \"certified-operators-4klpv\" (UID: \"46a6e803-08ff-44f3-87f9-c4e42585b446\") " pod="openshift-marketplace/certified-operators-4klpv" Feb 17 17:21:56 crc kubenswrapper[4874]: I0217 17:21:56.565958 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46a6e803-08ff-44f3-87f9-c4e42585b446-catalog-content\") pod \"certified-operators-4klpv\" (UID: \"46a6e803-08ff-44f3-87f9-c4e42585b446\") " pod="openshift-marketplace/certified-operators-4klpv" Feb 17 17:21:56 crc kubenswrapper[4874]: I0217 17:21:56.667643 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwdxf\" (UniqueName: \"kubernetes.io/projected/46a6e803-08ff-44f3-87f9-c4e42585b446-kube-api-access-bwdxf\") pod \"certified-operators-4klpv\" (UID: \"46a6e803-08ff-44f3-87f9-c4e42585b446\") " pod="openshift-marketplace/certified-operators-4klpv" Feb 17 17:21:56 crc kubenswrapper[4874]: I0217 17:21:56.668004 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46a6e803-08ff-44f3-87f9-c4e42585b446-utilities\") pod 
\"certified-operators-4klpv\" (UID: \"46a6e803-08ff-44f3-87f9-c4e42585b446\") " pod="openshift-marketplace/certified-operators-4klpv" Feb 17 17:21:56 crc kubenswrapper[4874]: I0217 17:21:56.668099 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46a6e803-08ff-44f3-87f9-c4e42585b446-catalog-content\") pod \"certified-operators-4klpv\" (UID: \"46a6e803-08ff-44f3-87f9-c4e42585b446\") " pod="openshift-marketplace/certified-operators-4klpv" Feb 17 17:21:56 crc kubenswrapper[4874]: I0217 17:21:56.668540 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46a6e803-08ff-44f3-87f9-c4e42585b446-utilities\") pod \"certified-operators-4klpv\" (UID: \"46a6e803-08ff-44f3-87f9-c4e42585b446\") " pod="openshift-marketplace/certified-operators-4klpv" Feb 17 17:21:56 crc kubenswrapper[4874]: I0217 17:21:56.668676 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46a6e803-08ff-44f3-87f9-c4e42585b446-catalog-content\") pod \"certified-operators-4klpv\" (UID: \"46a6e803-08ff-44f3-87f9-c4e42585b446\") " pod="openshift-marketplace/certified-operators-4klpv" Feb 17 17:21:56 crc kubenswrapper[4874]: I0217 17:21:56.690913 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwdxf\" (UniqueName: \"kubernetes.io/projected/46a6e803-08ff-44f3-87f9-c4e42585b446-kube-api-access-bwdxf\") pod \"certified-operators-4klpv\" (UID: \"46a6e803-08ff-44f3-87f9-c4e42585b446\") " pod="openshift-marketplace/certified-operators-4klpv" Feb 17 17:21:56 crc kubenswrapper[4874]: I0217 17:21:56.812192 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4klpv" Feb 17 17:21:57 crc kubenswrapper[4874]: I0217 17:21:57.368920 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4klpv"] Feb 17 17:21:57 crc kubenswrapper[4874]: I0217 17:21:57.635682 4874 generic.go:334] "Generic (PLEG): container finished" podID="46a6e803-08ff-44f3-87f9-c4e42585b446" containerID="99dcc90593215581f39e02ced41c7760a78495e6005f5e22307c39e89f5bed6e" exitCode=0 Feb 17 17:21:57 crc kubenswrapper[4874]: I0217 17:21:57.635775 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4klpv" event={"ID":"46a6e803-08ff-44f3-87f9-c4e42585b446","Type":"ContainerDied","Data":"99dcc90593215581f39e02ced41c7760a78495e6005f5e22307c39e89f5bed6e"} Feb 17 17:21:57 crc kubenswrapper[4874]: I0217 17:21:57.635859 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4klpv" event={"ID":"46a6e803-08ff-44f3-87f9-c4e42585b446","Type":"ContainerStarted","Data":"4c5d11f87369fdc47e613fb1f0dfe2b0650a2f225bbe96a3f2049dabcca94ae1"} Feb 17 17:21:58 crc kubenswrapper[4874]: I0217 17:21:58.457766 4874 scope.go:117] "RemoveContainer" containerID="eceef711827f302ce1c5e08eb20c45a53f4d07e75160aef288fe7efb1172e930" Feb 17 17:21:58 crc kubenswrapper[4874]: E0217 17:21:58.458357 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:21:58 crc kubenswrapper[4874]: E0217 17:21:58.459424 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" 
with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:21:58 crc kubenswrapper[4874]: I0217 17:21:58.650273 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4klpv" event={"ID":"46a6e803-08ff-44f3-87f9-c4e42585b446","Type":"ContainerStarted","Data":"3101832f1c7def86147e73862c11dccbf176a5d9fe07335bd837b87986223fb8"} Feb 17 17:21:59 crc kubenswrapper[4874]: I0217 17:21:59.664243 4874 generic.go:334] "Generic (PLEG): container finished" podID="46a6e803-08ff-44f3-87f9-c4e42585b446" containerID="3101832f1c7def86147e73862c11dccbf176a5d9fe07335bd837b87986223fb8" exitCode=0 Feb 17 17:21:59 crc kubenswrapper[4874]: I0217 17:21:59.664620 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4klpv" event={"ID":"46a6e803-08ff-44f3-87f9-c4e42585b446","Type":"ContainerDied","Data":"3101832f1c7def86147e73862c11dccbf176a5d9fe07335bd837b87986223fb8"} Feb 17 17:22:00 crc kubenswrapper[4874]: E0217 17:22:00.469954 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:22:00 crc kubenswrapper[4874]: I0217 17:22:00.676558 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4klpv" event={"ID":"46a6e803-08ff-44f3-87f9-c4e42585b446","Type":"ContainerStarted","Data":"039125b14c0877216b20b9dd26fa96271f99de75d1f707d6e926bac11964506b"} Feb 17 17:22:00 crc kubenswrapper[4874]: I0217 17:22:00.699440 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-4klpv" podStartSLOduration=2.252843348 podStartE2EDuration="4.699414235s" podCreationTimestamp="2026-02-17 17:21:56 +0000 UTC" firstStartedPulling="2026-02-17 17:21:57.638495789 +0000 UTC m=+4727.932884350" lastFinishedPulling="2026-02-17 17:22:00.085066676 +0000 UTC m=+4730.379455237" observedRunningTime="2026-02-17 17:22:00.69475672 +0000 UTC m=+4730.989145321" watchObservedRunningTime="2026-02-17 17:22:00.699414235 +0000 UTC m=+4730.993802816" Feb 17 17:22:06 crc kubenswrapper[4874]: I0217 17:22:06.813561 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4klpv" Feb 17 17:22:06 crc kubenswrapper[4874]: I0217 17:22:06.814373 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4klpv" Feb 17 17:22:07 crc kubenswrapper[4874]: I0217 17:22:07.667826 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4klpv" Feb 17 17:22:07 crc kubenswrapper[4874]: I0217 17:22:07.801031 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4klpv" Feb 17 17:22:07 crc kubenswrapper[4874]: I0217 17:22:07.911335 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4klpv"] Feb 17 17:22:09 crc kubenswrapper[4874]: E0217 17:22:09.460997 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:22:09 crc kubenswrapper[4874]: I0217 17:22:09.774750 4874 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-4klpv" podUID="46a6e803-08ff-44f3-87f9-c4e42585b446" containerName="registry-server" containerID="cri-o://039125b14c0877216b20b9dd26fa96271f99de75d1f707d6e926bac11964506b" gracePeriod=2 Feb 17 17:22:10 crc kubenswrapper[4874]: I0217 17:22:10.270123 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4klpv" Feb 17 17:22:10 crc kubenswrapper[4874]: I0217 17:22:10.330844 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46a6e803-08ff-44f3-87f9-c4e42585b446-catalog-content\") pod \"46a6e803-08ff-44f3-87f9-c4e42585b446\" (UID: \"46a6e803-08ff-44f3-87f9-c4e42585b446\") " Feb 17 17:22:10 crc kubenswrapper[4874]: I0217 17:22:10.330991 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwdxf\" (UniqueName: \"kubernetes.io/projected/46a6e803-08ff-44f3-87f9-c4e42585b446-kube-api-access-bwdxf\") pod \"46a6e803-08ff-44f3-87f9-c4e42585b446\" (UID: \"46a6e803-08ff-44f3-87f9-c4e42585b446\") " Feb 17 17:22:10 crc kubenswrapper[4874]: I0217 17:22:10.331012 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46a6e803-08ff-44f3-87f9-c4e42585b446-utilities\") pod \"46a6e803-08ff-44f3-87f9-c4e42585b446\" (UID: \"46a6e803-08ff-44f3-87f9-c4e42585b446\") " Feb 17 17:22:10 crc kubenswrapper[4874]: I0217 17:22:10.331760 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46a6e803-08ff-44f3-87f9-c4e42585b446-utilities" (OuterVolumeSpecName: "utilities") pod "46a6e803-08ff-44f3-87f9-c4e42585b446" (UID: "46a6e803-08ff-44f3-87f9-c4e42585b446"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:22:10 crc kubenswrapper[4874]: I0217 17:22:10.336376 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46a6e803-08ff-44f3-87f9-c4e42585b446-kube-api-access-bwdxf" (OuterVolumeSpecName: "kube-api-access-bwdxf") pod "46a6e803-08ff-44f3-87f9-c4e42585b446" (UID: "46a6e803-08ff-44f3-87f9-c4e42585b446"). InnerVolumeSpecName "kube-api-access-bwdxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:22:10 crc kubenswrapper[4874]: I0217 17:22:10.385625 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46a6e803-08ff-44f3-87f9-c4e42585b446-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46a6e803-08ff-44f3-87f9-c4e42585b446" (UID: "46a6e803-08ff-44f3-87f9-c4e42585b446"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:22:10 crc kubenswrapper[4874]: I0217 17:22:10.434584 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46a6e803-08ff-44f3-87f9-c4e42585b446-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:22:10 crc kubenswrapper[4874]: I0217 17:22:10.434632 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwdxf\" (UniqueName: \"kubernetes.io/projected/46a6e803-08ff-44f3-87f9-c4e42585b446-kube-api-access-bwdxf\") on node \"crc\" DevicePath \"\"" Feb 17 17:22:10 crc kubenswrapper[4874]: I0217 17:22:10.434650 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46a6e803-08ff-44f3-87f9-c4e42585b446-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:22:10 crc kubenswrapper[4874]: I0217 17:22:10.787399 4874 generic.go:334] "Generic (PLEG): container finished" podID="46a6e803-08ff-44f3-87f9-c4e42585b446" 
containerID="039125b14c0877216b20b9dd26fa96271f99de75d1f707d6e926bac11964506b" exitCode=0 Feb 17 17:22:10 crc kubenswrapper[4874]: I0217 17:22:10.787452 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4klpv" Feb 17 17:22:10 crc kubenswrapper[4874]: I0217 17:22:10.787446 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4klpv" event={"ID":"46a6e803-08ff-44f3-87f9-c4e42585b446","Type":"ContainerDied","Data":"039125b14c0877216b20b9dd26fa96271f99de75d1f707d6e926bac11964506b"} Feb 17 17:22:10 crc kubenswrapper[4874]: I0217 17:22:10.787525 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4klpv" event={"ID":"46a6e803-08ff-44f3-87f9-c4e42585b446","Type":"ContainerDied","Data":"4c5d11f87369fdc47e613fb1f0dfe2b0650a2f225bbe96a3f2049dabcca94ae1"} Feb 17 17:22:10 crc kubenswrapper[4874]: I0217 17:22:10.787561 4874 scope.go:117] "RemoveContainer" containerID="039125b14c0877216b20b9dd26fa96271f99de75d1f707d6e926bac11964506b" Feb 17 17:22:10 crc kubenswrapper[4874]: I0217 17:22:10.815059 4874 scope.go:117] "RemoveContainer" containerID="3101832f1c7def86147e73862c11dccbf176a5d9fe07335bd837b87986223fb8" Feb 17 17:22:10 crc kubenswrapper[4874]: I0217 17:22:10.819354 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4klpv"] Feb 17 17:22:10 crc kubenswrapper[4874]: I0217 17:22:10.829492 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4klpv"] Feb 17 17:22:10 crc kubenswrapper[4874]: I0217 17:22:10.848551 4874 scope.go:117] "RemoveContainer" containerID="99dcc90593215581f39e02ced41c7760a78495e6005f5e22307c39e89f5bed6e" Feb 17 17:22:10 crc kubenswrapper[4874]: I0217 17:22:10.902219 4874 scope.go:117] "RemoveContainer" containerID="039125b14c0877216b20b9dd26fa96271f99de75d1f707d6e926bac11964506b" Feb 17 
17:22:10 crc kubenswrapper[4874]: E0217 17:22:10.902937 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"039125b14c0877216b20b9dd26fa96271f99de75d1f707d6e926bac11964506b\": container with ID starting with 039125b14c0877216b20b9dd26fa96271f99de75d1f707d6e926bac11964506b not found: ID does not exist" containerID="039125b14c0877216b20b9dd26fa96271f99de75d1f707d6e926bac11964506b" Feb 17 17:22:10 crc kubenswrapper[4874]: I0217 17:22:10.902967 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"039125b14c0877216b20b9dd26fa96271f99de75d1f707d6e926bac11964506b"} err="failed to get container status \"039125b14c0877216b20b9dd26fa96271f99de75d1f707d6e926bac11964506b\": rpc error: code = NotFound desc = could not find container \"039125b14c0877216b20b9dd26fa96271f99de75d1f707d6e926bac11964506b\": container with ID starting with 039125b14c0877216b20b9dd26fa96271f99de75d1f707d6e926bac11964506b not found: ID does not exist" Feb 17 17:22:10 crc kubenswrapper[4874]: I0217 17:22:10.903175 4874 scope.go:117] "RemoveContainer" containerID="3101832f1c7def86147e73862c11dccbf176a5d9fe07335bd837b87986223fb8" Feb 17 17:22:10 crc kubenswrapper[4874]: E0217 17:22:10.903605 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3101832f1c7def86147e73862c11dccbf176a5d9fe07335bd837b87986223fb8\": container with ID starting with 3101832f1c7def86147e73862c11dccbf176a5d9fe07335bd837b87986223fb8 not found: ID does not exist" containerID="3101832f1c7def86147e73862c11dccbf176a5d9fe07335bd837b87986223fb8" Feb 17 17:22:10 crc kubenswrapper[4874]: I0217 17:22:10.903661 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3101832f1c7def86147e73862c11dccbf176a5d9fe07335bd837b87986223fb8"} err="failed to get container status 
\"3101832f1c7def86147e73862c11dccbf176a5d9fe07335bd837b87986223fb8\": rpc error: code = NotFound desc = could not find container \"3101832f1c7def86147e73862c11dccbf176a5d9fe07335bd837b87986223fb8\": container with ID starting with 3101832f1c7def86147e73862c11dccbf176a5d9fe07335bd837b87986223fb8 not found: ID does not exist" Feb 17 17:22:10 crc kubenswrapper[4874]: I0217 17:22:10.903705 4874 scope.go:117] "RemoveContainer" containerID="99dcc90593215581f39e02ced41c7760a78495e6005f5e22307c39e89f5bed6e" Feb 17 17:22:10 crc kubenswrapper[4874]: E0217 17:22:10.904010 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99dcc90593215581f39e02ced41c7760a78495e6005f5e22307c39e89f5bed6e\": container with ID starting with 99dcc90593215581f39e02ced41c7760a78495e6005f5e22307c39e89f5bed6e not found: ID does not exist" containerID="99dcc90593215581f39e02ced41c7760a78495e6005f5e22307c39e89f5bed6e" Feb 17 17:22:10 crc kubenswrapper[4874]: I0217 17:22:10.904262 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99dcc90593215581f39e02ced41c7760a78495e6005f5e22307c39e89f5bed6e"} err="failed to get container status \"99dcc90593215581f39e02ced41c7760a78495e6005f5e22307c39e89f5bed6e\": rpc error: code = NotFound desc = could not find container \"99dcc90593215581f39e02ced41c7760a78495e6005f5e22307c39e89f5bed6e\": container with ID starting with 99dcc90593215581f39e02ced41c7760a78495e6005f5e22307c39e89f5bed6e not found: ID does not exist" Feb 17 17:22:11 crc kubenswrapper[4874]: E0217 17:22:11.459269 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:22:12 crc 
kubenswrapper[4874]: I0217 17:22:12.457947 4874 scope.go:117] "RemoveContainer" containerID="eceef711827f302ce1c5e08eb20c45a53f4d07e75160aef288fe7efb1172e930" Feb 17 17:22:12 crc kubenswrapper[4874]: E0217 17:22:12.458373 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:22:12 crc kubenswrapper[4874]: I0217 17:22:12.476939 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46a6e803-08ff-44f3-87f9-c4e42585b446" path="/var/lib/kubelet/pods/46a6e803-08ff-44f3-87f9-c4e42585b446/volumes" Feb 17 17:22:13 crc kubenswrapper[4874]: E0217 17:22:13.796264 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46a6e803_08ff_44f3_87f9_c4e42585b446.slice/crio-4c5d11f87369fdc47e613fb1f0dfe2b0650a2f225bbe96a3f2049dabcca94ae1\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46a6e803_08ff_44f3_87f9_c4e42585b446.slice\": RecentStats: unable to find data in memory cache]" Feb 17 17:22:16 crc kubenswrapper[4874]: E0217 17:22:16.885344 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46a6e803_08ff_44f3_87f9_c4e42585b446.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46a6e803_08ff_44f3_87f9_c4e42585b446.slice/crio-4c5d11f87369fdc47e613fb1f0dfe2b0650a2f225bbe96a3f2049dabcca94ae1\": 
RecentStats: unable to find data in memory cache]" Feb 17 17:22:18 crc kubenswrapper[4874]: I0217 17:22:18.034416 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-585pl"] Feb 17 17:22:18 crc kubenswrapper[4874]: E0217 17:22:18.035206 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46a6e803-08ff-44f3-87f9-c4e42585b446" containerName="extract-utilities" Feb 17 17:22:18 crc kubenswrapper[4874]: I0217 17:22:18.035218 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="46a6e803-08ff-44f3-87f9-c4e42585b446" containerName="extract-utilities" Feb 17 17:22:18 crc kubenswrapper[4874]: E0217 17:22:18.035246 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46a6e803-08ff-44f3-87f9-c4e42585b446" containerName="registry-server" Feb 17 17:22:18 crc kubenswrapper[4874]: I0217 17:22:18.035252 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="46a6e803-08ff-44f3-87f9-c4e42585b446" containerName="registry-server" Feb 17 17:22:18 crc kubenswrapper[4874]: E0217 17:22:18.035277 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46a6e803-08ff-44f3-87f9-c4e42585b446" containerName="extract-content" Feb 17 17:22:18 crc kubenswrapper[4874]: I0217 17:22:18.035284 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="46a6e803-08ff-44f3-87f9-c4e42585b446" containerName="extract-content" Feb 17 17:22:18 crc kubenswrapper[4874]: I0217 17:22:18.035526 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="46a6e803-08ff-44f3-87f9-c4e42585b446" containerName="registry-server" Feb 17 17:22:18 crc kubenswrapper[4874]: I0217 17:22:18.036519 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-585pl" Feb 17 17:22:18 crc kubenswrapper[4874]: I0217 17:22:18.039306 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 17:22:18 crc kubenswrapper[4874]: I0217 17:22:18.039727 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 17:22:18 crc kubenswrapper[4874]: I0217 17:22:18.040000 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 17:22:18 crc kubenswrapper[4874]: I0217 17:22:18.042104 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dsqj4" Feb 17 17:22:18 crc kubenswrapper[4874]: I0217 17:22:18.056598 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-585pl"] Feb 17 17:22:18 crc kubenswrapper[4874]: I0217 17:22:18.226821 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8rm6\" (UniqueName: \"kubernetes.io/projected/8331d1e2-3512-4f93-a2aa-482f566f53c9-kube-api-access-w8rm6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-585pl\" (UID: \"8331d1e2-3512-4f93-a2aa-482f566f53c9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-585pl" Feb 17 17:22:18 crc kubenswrapper[4874]: I0217 17:22:18.226952 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8331d1e2-3512-4f93-a2aa-482f566f53c9-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-585pl\" (UID: \"8331d1e2-3512-4f93-a2aa-482f566f53c9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-585pl" Feb 17 17:22:18 crc 
kubenswrapper[4874]: I0217 17:22:18.226989 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8331d1e2-3512-4f93-a2aa-482f566f53c9-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-585pl\" (UID: \"8331d1e2-3512-4f93-a2aa-482f566f53c9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-585pl" Feb 17 17:22:18 crc kubenswrapper[4874]: I0217 17:22:18.329942 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8rm6\" (UniqueName: \"kubernetes.io/projected/8331d1e2-3512-4f93-a2aa-482f566f53c9-kube-api-access-w8rm6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-585pl\" (UID: \"8331d1e2-3512-4f93-a2aa-482f566f53c9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-585pl" Feb 17 17:22:18 crc kubenswrapper[4874]: I0217 17:22:18.330066 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8331d1e2-3512-4f93-a2aa-482f566f53c9-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-585pl\" (UID: \"8331d1e2-3512-4f93-a2aa-482f566f53c9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-585pl" Feb 17 17:22:18 crc kubenswrapper[4874]: I0217 17:22:18.330137 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8331d1e2-3512-4f93-a2aa-482f566f53c9-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-585pl\" (UID: \"8331d1e2-3512-4f93-a2aa-482f566f53c9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-585pl" Feb 17 17:22:19 crc kubenswrapper[4874]: I0217 17:22:19.134498 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/8331d1e2-3512-4f93-a2aa-482f566f53c9-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-585pl\" (UID: \"8331d1e2-3512-4f93-a2aa-482f566f53c9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-585pl" Feb 17 17:22:19 crc kubenswrapper[4874]: I0217 17:22:19.134505 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8331d1e2-3512-4f93-a2aa-482f566f53c9-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-585pl\" (UID: \"8331d1e2-3512-4f93-a2aa-482f566f53c9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-585pl" Feb 17 17:22:19 crc kubenswrapper[4874]: I0217 17:22:19.135890 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8rm6\" (UniqueName: \"kubernetes.io/projected/8331d1e2-3512-4f93-a2aa-482f566f53c9-kube-api-access-w8rm6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-585pl\" (UID: \"8331d1e2-3512-4f93-a2aa-482f566f53c9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-585pl" Feb 17 17:22:19 crc kubenswrapper[4874]: I0217 17:22:19.257346 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-585pl" Feb 17 17:22:19 crc kubenswrapper[4874]: W0217 17:22:19.837288 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8331d1e2_3512_4f93_a2aa_482f566f53c9.slice/crio-29eeae173a65b9bf1977faa554a847d74ff8f45074aa6082d183d24f9fbebf94 WatchSource:0}: Error finding container 29eeae173a65b9bf1977faa554a847d74ff8f45074aa6082d183d24f9fbebf94: Status 404 returned error can't find the container with id 29eeae173a65b9bf1977faa554a847d74ff8f45074aa6082d183d24f9fbebf94 Feb 17 17:22:19 crc kubenswrapper[4874]: I0217 17:22:19.838800 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-585pl"] Feb 17 17:22:19 crc kubenswrapper[4874]: I0217 17:22:19.891376 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-585pl" event={"ID":"8331d1e2-3512-4f93-a2aa-482f566f53c9","Type":"ContainerStarted","Data":"29eeae173a65b9bf1977faa554a847d74ff8f45074aa6082d183d24f9fbebf94"} Feb 17 17:22:20 crc kubenswrapper[4874]: I0217 17:22:20.925297 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-585pl" event={"ID":"8331d1e2-3512-4f93-a2aa-482f566f53c9","Type":"ContainerStarted","Data":"5eed20e0e6fb61284d3c3a7c19d20715d1af242e14dfe0d7d8cd5151005b6dd5"} Feb 17 17:22:20 crc kubenswrapper[4874]: I0217 17:22:20.969368 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-585pl" podStartSLOduration=2.556098316 podStartE2EDuration="2.969341915s" podCreationTimestamp="2026-02-17 17:22:18 +0000 UTC" firstStartedPulling="2026-02-17 17:22:19.840213203 +0000 UTC m=+4750.134601764" lastFinishedPulling="2026-02-17 17:22:20.253456802 +0000 UTC 
m=+4750.547845363" observedRunningTime="2026-02-17 17:22:20.959394589 +0000 UTC m=+4751.253783140" watchObservedRunningTime="2026-02-17 17:22:20.969341915 +0000 UTC m=+4751.263730496" Feb 17 17:22:24 crc kubenswrapper[4874]: E0217 17:22:24.459589 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:22:24 crc kubenswrapper[4874]: E0217 17:22:24.459763 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:22:25 crc kubenswrapper[4874]: I0217 17:22:25.457855 4874 scope.go:117] "RemoveContainer" containerID="eceef711827f302ce1c5e08eb20c45a53f4d07e75160aef288fe7efb1172e930" Feb 17 17:22:25 crc kubenswrapper[4874]: E0217 17:22:25.458410 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:22:27 crc kubenswrapper[4874]: E0217 17:22:27.162163 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46a6e803_08ff_44f3_87f9_c4e42585b446.slice/crio-4c5d11f87369fdc47e613fb1f0dfe2b0650a2f225bbe96a3f2049dabcca94ae1\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46a6e803_08ff_44f3_87f9_c4e42585b446.slice\": RecentStats: unable to find data in memory cache]" Feb 17 17:22:28 crc kubenswrapper[4874]: E0217 17:22:28.556255 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46a6e803_08ff_44f3_87f9_c4e42585b446.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46a6e803_08ff_44f3_87f9_c4e42585b446.slice/crio-4c5d11f87369fdc47e613fb1f0dfe2b0650a2f225bbe96a3f2049dabcca94ae1\": RecentStats: unable to find data in memory cache]" Feb 17 17:22:37 crc kubenswrapper[4874]: E0217 17:22:37.445789 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46a6e803_08ff_44f3_87f9_c4e42585b446.slice/crio-4c5d11f87369fdc47e613fb1f0dfe2b0650a2f225bbe96a3f2049dabcca94ae1\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46a6e803_08ff_44f3_87f9_c4e42585b446.slice\": RecentStats: unable to find data in memory cache]" Feb 17 17:22:37 crc kubenswrapper[4874]: I0217 17:22:37.459458 4874 scope.go:117] "RemoveContainer" containerID="eceef711827f302ce1c5e08eb20c45a53f4d07e75160aef288fe7efb1172e930" Feb 17 17:22:37 crc kubenswrapper[4874]: E0217 17:22:37.459788 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:22:37 crc kubenswrapper[4874]: E0217 17:22:37.459871 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:22:39 crc kubenswrapper[4874]: E0217 17:22:39.460307 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:22:43 crc kubenswrapper[4874]: E0217 17:22:43.833603 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46a6e803_08ff_44f3_87f9_c4e42585b446.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46a6e803_08ff_44f3_87f9_c4e42585b446.slice/crio-4c5d11f87369fdc47e613fb1f0dfe2b0650a2f225bbe96a3f2049dabcca94ae1\": RecentStats: unable to find data in memory cache]" Feb 17 17:22:47 crc kubenswrapper[4874]: E0217 17:22:47.499011 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46a6e803_08ff_44f3_87f9_c4e42585b446.slice/crio-4c5d11f87369fdc47e613fb1f0dfe2b0650a2f225bbe96a3f2049dabcca94ae1\": RecentStats: unable to find data in memory 
cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46a6e803_08ff_44f3_87f9_c4e42585b446.slice\": RecentStats: unable to find data in memory cache]" Feb 17 17:22:48 crc kubenswrapper[4874]: E0217 17:22:48.109192 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46a6e803_08ff_44f3_87f9_c4e42585b446.slice/crio-4c5d11f87369fdc47e613fb1f0dfe2b0650a2f225bbe96a3f2049dabcca94ae1\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46a6e803_08ff_44f3_87f9_c4e42585b446.slice\": RecentStats: unable to find data in memory cache]" Feb 17 17:22:48 crc kubenswrapper[4874]: E0217 17:22:48.110155 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46a6e803_08ff_44f3_87f9_c4e42585b446.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46a6e803_08ff_44f3_87f9_c4e42585b446.slice/crio-4c5d11f87369fdc47e613fb1f0dfe2b0650a2f225bbe96a3f2049dabcca94ae1\": RecentStats: unable to find data in memory cache]" Feb 17 17:22:48 crc kubenswrapper[4874]: I0217 17:22:48.458324 4874 scope.go:117] "RemoveContainer" containerID="eceef711827f302ce1c5e08eb20c45a53f4d07e75160aef288fe7efb1172e930" Feb 17 17:22:48 crc kubenswrapper[4874]: E0217 17:22:48.458628 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:22:52 
crc kubenswrapper[4874]: E0217 17:22:52.459053 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:22:54 crc kubenswrapper[4874]: E0217 17:22:54.460375 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:22:57 crc kubenswrapper[4874]: E0217 17:22:57.812723 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46a6e803_08ff_44f3_87f9_c4e42585b446.slice/crio-4c5d11f87369fdc47e613fb1f0dfe2b0650a2f225bbe96a3f2049dabcca94ae1\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46a6e803_08ff_44f3_87f9_c4e42585b446.slice\": RecentStats: unable to find data in memory cache]" Feb 17 17:22:58 crc kubenswrapper[4874]: E0217 17:22:58.558824 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46a6e803_08ff_44f3_87f9_c4e42585b446.slice/crio-4c5d11f87369fdc47e613fb1f0dfe2b0650a2f225bbe96a3f2049dabcca94ae1\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46a6e803_08ff_44f3_87f9_c4e42585b446.slice\": RecentStats: unable to find data in memory cache]" Feb 17 17:23:00 crc kubenswrapper[4874]: I0217 17:23:00.465156 4874 
scope.go:117] "RemoveContainer" containerID="eceef711827f302ce1c5e08eb20c45a53f4d07e75160aef288fe7efb1172e930" Feb 17 17:23:00 crc kubenswrapper[4874]: E0217 17:23:00.465837 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:23:03 crc kubenswrapper[4874]: E0217 17:23:03.462516 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:23:06 crc kubenswrapper[4874]: E0217 17:23:06.460067 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:23:08 crc kubenswrapper[4874]: E0217 17:23:08.102043 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46a6e803_08ff_44f3_87f9_c4e42585b446.slice/crio-4c5d11f87369fdc47e613fb1f0dfe2b0650a2f225bbe96a3f2049dabcca94ae1\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46a6e803_08ff_44f3_87f9_c4e42585b446.slice\": RecentStats: unable to find data in memory cache]" Feb 17 17:23:10 crc 
kubenswrapper[4874]: E0217 17:23:10.502902 4874 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/085e1a1cd59c4862e3e9faf049b3ffda8c6c2cfe147943be39f25c87c624c322/diff" to get inode usage: stat /var/lib/containers/storage/overlay/085e1a1cd59c4862e3e9faf049b3ffda8c6c2cfe147943be39f25c87c624c322/diff: no such file or directory, extraDiskErr: Feb 17 17:23:14 crc kubenswrapper[4874]: I0217 17:23:14.457451 4874 scope.go:117] "RemoveContainer" containerID="eceef711827f302ce1c5e08eb20c45a53f4d07e75160aef288fe7efb1172e930" Feb 17 17:23:14 crc kubenswrapper[4874]: E0217 17:23:14.458131 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:23:16 crc kubenswrapper[4874]: E0217 17:23:16.460549 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:23:19 crc kubenswrapper[4874]: E0217 17:23:19.459264 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:23:26 crc kubenswrapper[4874]: I0217 17:23:26.459040 4874 scope.go:117] "RemoveContainer" 
containerID="eceef711827f302ce1c5e08eb20c45a53f4d07e75160aef288fe7efb1172e930" Feb 17 17:23:26 crc kubenswrapper[4874]: E0217 17:23:26.459920 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:23:28 crc kubenswrapper[4874]: E0217 17:23:28.462336 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:23:33 crc kubenswrapper[4874]: E0217 17:23:33.459224 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:23:41 crc kubenswrapper[4874]: I0217 17:23:41.457794 4874 scope.go:117] "RemoveContainer" containerID="eceef711827f302ce1c5e08eb20c45a53f4d07e75160aef288fe7efb1172e930" Feb 17 17:23:41 crc kubenswrapper[4874]: E0217 17:23:41.458941 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:23:43 crc kubenswrapper[4874]: E0217 17:23:43.461271 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:23:45 crc kubenswrapper[4874]: E0217 17:23:45.459225 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:23:52 crc kubenswrapper[4874]: I0217 17:23:52.458026 4874 scope.go:117] "RemoveContainer" containerID="eceef711827f302ce1c5e08eb20c45a53f4d07e75160aef288fe7efb1172e930" Feb 17 17:23:52 crc kubenswrapper[4874]: E0217 17:23:52.458978 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:23:56 crc kubenswrapper[4874]: E0217 17:23:56.462620 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 
17:23:58 crc kubenswrapper[4874]: E0217 17:23:58.459593 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:24:05 crc kubenswrapper[4874]: I0217 17:24:05.457241 4874 scope.go:117] "RemoveContainer" containerID="eceef711827f302ce1c5e08eb20c45a53f4d07e75160aef288fe7efb1172e930" Feb 17 17:24:06 crc kubenswrapper[4874]: I0217 17:24:06.091599 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerStarted","Data":"383ca1f126863bbf4daedf6695c81049c8a1fa867632ee9a9a951570814663c2"} Feb 17 17:24:10 crc kubenswrapper[4874]: E0217 17:24:10.468967 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:24:12 crc kubenswrapper[4874]: E0217 17:24:12.460306 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:24:25 crc kubenswrapper[4874]: E0217 17:24:25.462230 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:24:25 crc kubenswrapper[4874]: E0217 17:24:25.462322 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:24:36 crc kubenswrapper[4874]: E0217 17:24:36.461049 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:24:38 crc kubenswrapper[4874]: E0217 17:24:38.459441 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:24:48 crc kubenswrapper[4874]: I0217 17:24:48.521317 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-df4gv"] Feb 17 17:24:48 crc kubenswrapper[4874]: I0217 17:24:48.526047 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-df4gv" Feb 17 17:24:48 crc kubenswrapper[4874]: I0217 17:24:48.532843 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-df4gv"] Feb 17 17:24:48 crc kubenswrapper[4874]: I0217 17:24:48.692289 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/863339e7-9aad-4bdc-bde6-58abd451a9f0-catalog-content\") pod \"redhat-operators-df4gv\" (UID: \"863339e7-9aad-4bdc-bde6-58abd451a9f0\") " pod="openshift-marketplace/redhat-operators-df4gv" Feb 17 17:24:48 crc kubenswrapper[4874]: I0217 17:24:48.692790 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/863339e7-9aad-4bdc-bde6-58abd451a9f0-utilities\") pod \"redhat-operators-df4gv\" (UID: \"863339e7-9aad-4bdc-bde6-58abd451a9f0\") " pod="openshift-marketplace/redhat-operators-df4gv" Feb 17 17:24:48 crc kubenswrapper[4874]: I0217 17:24:48.692828 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj66q\" (UniqueName: \"kubernetes.io/projected/863339e7-9aad-4bdc-bde6-58abd451a9f0-kube-api-access-qj66q\") pod \"redhat-operators-df4gv\" (UID: \"863339e7-9aad-4bdc-bde6-58abd451a9f0\") " pod="openshift-marketplace/redhat-operators-df4gv" Feb 17 17:24:48 crc kubenswrapper[4874]: I0217 17:24:48.795404 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/863339e7-9aad-4bdc-bde6-58abd451a9f0-utilities\") pod \"redhat-operators-df4gv\" (UID: \"863339e7-9aad-4bdc-bde6-58abd451a9f0\") " pod="openshift-marketplace/redhat-operators-df4gv" Feb 17 17:24:48 crc kubenswrapper[4874]: I0217 17:24:48.795472 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-qj66q\" (UniqueName: \"kubernetes.io/projected/863339e7-9aad-4bdc-bde6-58abd451a9f0-kube-api-access-qj66q\") pod \"redhat-operators-df4gv\" (UID: \"863339e7-9aad-4bdc-bde6-58abd451a9f0\") " pod="openshift-marketplace/redhat-operators-df4gv" Feb 17 17:24:48 crc kubenswrapper[4874]: I0217 17:24:48.795567 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/863339e7-9aad-4bdc-bde6-58abd451a9f0-catalog-content\") pod \"redhat-operators-df4gv\" (UID: \"863339e7-9aad-4bdc-bde6-58abd451a9f0\") " pod="openshift-marketplace/redhat-operators-df4gv" Feb 17 17:24:48 crc kubenswrapper[4874]: I0217 17:24:48.796012 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/863339e7-9aad-4bdc-bde6-58abd451a9f0-utilities\") pod \"redhat-operators-df4gv\" (UID: \"863339e7-9aad-4bdc-bde6-58abd451a9f0\") " pod="openshift-marketplace/redhat-operators-df4gv" Feb 17 17:24:48 crc kubenswrapper[4874]: I0217 17:24:48.796189 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/863339e7-9aad-4bdc-bde6-58abd451a9f0-catalog-content\") pod \"redhat-operators-df4gv\" (UID: \"863339e7-9aad-4bdc-bde6-58abd451a9f0\") " pod="openshift-marketplace/redhat-operators-df4gv" Feb 17 17:24:48 crc kubenswrapper[4874]: I0217 17:24:48.816577 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj66q\" (UniqueName: \"kubernetes.io/projected/863339e7-9aad-4bdc-bde6-58abd451a9f0-kube-api-access-qj66q\") pod \"redhat-operators-df4gv\" (UID: \"863339e7-9aad-4bdc-bde6-58abd451a9f0\") " pod="openshift-marketplace/redhat-operators-df4gv" Feb 17 17:24:48 crc kubenswrapper[4874]: I0217 17:24:48.855014 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-df4gv" Feb 17 17:24:49 crc kubenswrapper[4874]: W0217 17:24:49.362895 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod863339e7_9aad_4bdc_bde6_58abd451a9f0.slice/crio-8f3e18c06d1f0542fa5e927adeea0ad2f0422efb4c7d47bcbec418bd9c08c8f8 WatchSource:0}: Error finding container 8f3e18c06d1f0542fa5e927adeea0ad2f0422efb4c7d47bcbec418bd9c08c8f8: Status 404 returned error can't find the container with id 8f3e18c06d1f0542fa5e927adeea0ad2f0422efb4c7d47bcbec418bd9c08c8f8 Feb 17 17:24:49 crc kubenswrapper[4874]: I0217 17:24:49.363456 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-df4gv"] Feb 17 17:24:49 crc kubenswrapper[4874]: E0217 17:24:49.459163 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:24:49 crc kubenswrapper[4874]: E0217 17:24:49.459867 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:24:49 crc kubenswrapper[4874]: I0217 17:24:49.579170 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-df4gv" event={"ID":"863339e7-9aad-4bdc-bde6-58abd451a9f0","Type":"ContainerStarted","Data":"8f3e18c06d1f0542fa5e927adeea0ad2f0422efb4c7d47bcbec418bd9c08c8f8"} Feb 17 17:24:50 crc kubenswrapper[4874]: I0217 17:24:50.258550 4874 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tvbb4"] Feb 17 17:24:50 crc kubenswrapper[4874]: I0217 17:24:50.262278 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tvbb4" Feb 17 17:24:50 crc kubenswrapper[4874]: I0217 17:24:50.280215 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tvbb4"] Feb 17 17:24:50 crc kubenswrapper[4874]: I0217 17:24:50.436361 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbmkc\" (UniqueName: \"kubernetes.io/projected/b79d3a64-9e2f-4cdf-9544-354e44db5eca-kube-api-access-jbmkc\") pod \"community-operators-tvbb4\" (UID: \"b79d3a64-9e2f-4cdf-9544-354e44db5eca\") " pod="openshift-marketplace/community-operators-tvbb4" Feb 17 17:24:50 crc kubenswrapper[4874]: I0217 17:24:50.436412 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b79d3a64-9e2f-4cdf-9544-354e44db5eca-utilities\") pod \"community-operators-tvbb4\" (UID: \"b79d3a64-9e2f-4cdf-9544-354e44db5eca\") " pod="openshift-marketplace/community-operators-tvbb4" Feb 17 17:24:50 crc kubenswrapper[4874]: I0217 17:24:50.436972 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b79d3a64-9e2f-4cdf-9544-354e44db5eca-catalog-content\") pod \"community-operators-tvbb4\" (UID: \"b79d3a64-9e2f-4cdf-9544-354e44db5eca\") " pod="openshift-marketplace/community-operators-tvbb4" Feb 17 17:24:50 crc kubenswrapper[4874]: I0217 17:24:50.538882 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbmkc\" (UniqueName: \"kubernetes.io/projected/b79d3a64-9e2f-4cdf-9544-354e44db5eca-kube-api-access-jbmkc\") pod 
\"community-operators-tvbb4\" (UID: \"b79d3a64-9e2f-4cdf-9544-354e44db5eca\") " pod="openshift-marketplace/community-operators-tvbb4" Feb 17 17:24:50 crc kubenswrapper[4874]: I0217 17:24:50.538935 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b79d3a64-9e2f-4cdf-9544-354e44db5eca-utilities\") pod \"community-operators-tvbb4\" (UID: \"b79d3a64-9e2f-4cdf-9544-354e44db5eca\") " pod="openshift-marketplace/community-operators-tvbb4" Feb 17 17:24:50 crc kubenswrapper[4874]: I0217 17:24:50.538977 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b79d3a64-9e2f-4cdf-9544-354e44db5eca-catalog-content\") pod \"community-operators-tvbb4\" (UID: \"b79d3a64-9e2f-4cdf-9544-354e44db5eca\") " pod="openshift-marketplace/community-operators-tvbb4" Feb 17 17:24:50 crc kubenswrapper[4874]: I0217 17:24:50.539437 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b79d3a64-9e2f-4cdf-9544-354e44db5eca-utilities\") pod \"community-operators-tvbb4\" (UID: \"b79d3a64-9e2f-4cdf-9544-354e44db5eca\") " pod="openshift-marketplace/community-operators-tvbb4" Feb 17 17:24:50 crc kubenswrapper[4874]: I0217 17:24:50.539520 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b79d3a64-9e2f-4cdf-9544-354e44db5eca-catalog-content\") pod \"community-operators-tvbb4\" (UID: \"b79d3a64-9e2f-4cdf-9544-354e44db5eca\") " pod="openshift-marketplace/community-operators-tvbb4" Feb 17 17:24:50 crc kubenswrapper[4874]: I0217 17:24:50.574564 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbmkc\" (UniqueName: \"kubernetes.io/projected/b79d3a64-9e2f-4cdf-9544-354e44db5eca-kube-api-access-jbmkc\") pod \"community-operators-tvbb4\" (UID: 
\"b79d3a64-9e2f-4cdf-9544-354e44db5eca\") " pod="openshift-marketplace/community-operators-tvbb4" Feb 17 17:24:50 crc kubenswrapper[4874]: I0217 17:24:50.593955 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tvbb4" Feb 17 17:24:50 crc kubenswrapper[4874]: I0217 17:24:50.618672 4874 generic.go:334] "Generic (PLEG): container finished" podID="863339e7-9aad-4bdc-bde6-58abd451a9f0" containerID="c722d8ad0b403c20dd62ca3fb19d5c6b369beb4eddb738eeaaccbda8d73edeaa" exitCode=0 Feb 17 17:24:50 crc kubenswrapper[4874]: I0217 17:24:50.618721 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-df4gv" event={"ID":"863339e7-9aad-4bdc-bde6-58abd451a9f0","Type":"ContainerDied","Data":"c722d8ad0b403c20dd62ca3fb19d5c6b369beb4eddb738eeaaccbda8d73edeaa"} Feb 17 17:24:51 crc kubenswrapper[4874]: I0217 17:24:51.138013 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tvbb4"] Feb 17 17:24:51 crc kubenswrapper[4874]: I0217 17:24:51.628719 4874 generic.go:334] "Generic (PLEG): container finished" podID="b79d3a64-9e2f-4cdf-9544-354e44db5eca" containerID="7091c3b22939079f364e73fbc6b128d6a71b7d8f8c0251470bc2d6e80a2527ac" exitCode=0 Feb 17 17:24:51 crc kubenswrapper[4874]: I0217 17:24:51.628865 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tvbb4" event={"ID":"b79d3a64-9e2f-4cdf-9544-354e44db5eca","Type":"ContainerDied","Data":"7091c3b22939079f364e73fbc6b128d6a71b7d8f8c0251470bc2d6e80a2527ac"} Feb 17 17:24:51 crc kubenswrapper[4874]: I0217 17:24:51.629020 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tvbb4" event={"ID":"b79d3a64-9e2f-4cdf-9544-354e44db5eca","Type":"ContainerStarted","Data":"39ab08689ab27ee007ceb451072dbb830dbaf8e2137fa4638123181bcfd71d21"} Feb 17 17:24:51 crc kubenswrapper[4874]: I0217 
17:24:51.632004 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-df4gv" event={"ID":"863339e7-9aad-4bdc-bde6-58abd451a9f0","Type":"ContainerStarted","Data":"0a143faca7e9e95760e58aa88576c2e551cb82f689ab685c2c58f6141f556a72"} Feb 17 17:24:52 crc kubenswrapper[4874]: I0217 17:24:52.646772 4874 generic.go:334] "Generic (PLEG): container finished" podID="863339e7-9aad-4bdc-bde6-58abd451a9f0" containerID="0a143faca7e9e95760e58aa88576c2e551cb82f689ab685c2c58f6141f556a72" exitCode=0 Feb 17 17:24:52 crc kubenswrapper[4874]: I0217 17:24:52.647034 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-df4gv" event={"ID":"863339e7-9aad-4bdc-bde6-58abd451a9f0","Type":"ContainerDied","Data":"0a143faca7e9e95760e58aa88576c2e551cb82f689ab685c2c58f6141f556a72"} Feb 17 17:24:53 crc kubenswrapper[4874]: I0217 17:24:53.661872 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tvbb4" event={"ID":"b79d3a64-9e2f-4cdf-9544-354e44db5eca","Type":"ContainerStarted","Data":"77f368bc72d0a53dccb7fa1b7b92101a4e0bc851e1c090bf3688c535d30f4e77"} Feb 17 17:24:53 crc kubenswrapper[4874]: I0217 17:24:53.665001 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-df4gv" event={"ID":"863339e7-9aad-4bdc-bde6-58abd451a9f0","Type":"ContainerStarted","Data":"9645d265a4419f432b084f276de0dc9e77c868352f0acbd73aa0fb9dc385e191"} Feb 17 17:24:53 crc kubenswrapper[4874]: I0217 17:24:53.696145 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-df4gv" podStartSLOduration=3.272719863 podStartE2EDuration="5.696124448s" podCreationTimestamp="2026-02-17 17:24:48 +0000 UTC" firstStartedPulling="2026-02-17 17:24:50.651285959 +0000 UTC m=+4900.945674520" lastFinishedPulling="2026-02-17 17:24:53.074690544 +0000 UTC m=+4903.369079105" observedRunningTime="2026-02-17 
17:24:53.69459902 +0000 UTC m=+4903.988987571" watchObservedRunningTime="2026-02-17 17:24:53.696124448 +0000 UTC m=+4903.990513029" Feb 17 17:24:54 crc kubenswrapper[4874]: I0217 17:24:54.679184 4874 generic.go:334] "Generic (PLEG): container finished" podID="b79d3a64-9e2f-4cdf-9544-354e44db5eca" containerID="77f368bc72d0a53dccb7fa1b7b92101a4e0bc851e1c090bf3688c535d30f4e77" exitCode=0 Feb 17 17:24:54 crc kubenswrapper[4874]: I0217 17:24:54.679297 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tvbb4" event={"ID":"b79d3a64-9e2f-4cdf-9544-354e44db5eca","Type":"ContainerDied","Data":"77f368bc72d0a53dccb7fa1b7b92101a4e0bc851e1c090bf3688c535d30f4e77"} Feb 17 17:24:55 crc kubenswrapper[4874]: I0217 17:24:55.690899 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tvbb4" event={"ID":"b79d3a64-9e2f-4cdf-9544-354e44db5eca","Type":"ContainerStarted","Data":"09b01ae278a2cc3b07cbbae28811174c626600d3f834169b0760ff2dc30e3827"} Feb 17 17:24:55 crc kubenswrapper[4874]: I0217 17:24:55.720395 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tvbb4" podStartSLOduration=2.189228233 podStartE2EDuration="5.7203772s" podCreationTimestamp="2026-02-17 17:24:50 +0000 UTC" firstStartedPulling="2026-02-17 17:24:51.632361657 +0000 UTC m=+4901.926750218" lastFinishedPulling="2026-02-17 17:24:55.163510614 +0000 UTC m=+4905.457899185" observedRunningTime="2026-02-17 17:24:55.711213274 +0000 UTC m=+4906.005601855" watchObservedRunningTime="2026-02-17 17:24:55.7203772 +0000 UTC m=+4906.014765761" Feb 17 17:24:58 crc kubenswrapper[4874]: I0217 17:24:58.855592 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-df4gv" Feb 17 17:24:58 crc kubenswrapper[4874]: I0217 17:24:58.856993 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-df4gv" Feb 17 17:25:00 crc kubenswrapper[4874]: I0217 17:25:00.378220 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-df4gv" podUID="863339e7-9aad-4bdc-bde6-58abd451a9f0" containerName="registry-server" probeResult="failure" output=< Feb 17 17:25:00 crc kubenswrapper[4874]: timeout: failed to connect service ":50051" within 1s Feb 17 17:25:00 crc kubenswrapper[4874]: > Feb 17 17:25:00 crc kubenswrapper[4874]: I0217 17:25:00.595759 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tvbb4" Feb 17 17:25:00 crc kubenswrapper[4874]: I0217 17:25:00.595815 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tvbb4" Feb 17 17:25:00 crc kubenswrapper[4874]: I0217 17:25:00.651796 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tvbb4" Feb 17 17:25:00 crc kubenswrapper[4874]: I0217 17:25:00.783892 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tvbb4" Feb 17 17:25:00 crc kubenswrapper[4874]: I0217 17:25:00.892887 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tvbb4"] Feb 17 17:25:01 crc kubenswrapper[4874]: E0217 17:25:01.459211 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:25:02 crc kubenswrapper[4874]: I0217 17:25:02.459104 4874 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 17:25:02 crc 
kubenswrapper[4874]: I0217 17:25:02.757928 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tvbb4" podUID="b79d3a64-9e2f-4cdf-9544-354e44db5eca" containerName="registry-server" containerID="cri-o://09b01ae278a2cc3b07cbbae28811174c626600d3f834169b0760ff2dc30e3827" gracePeriod=2 Feb 17 17:25:03 crc kubenswrapper[4874]: E0217 17:25:03.406703 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:25:03 crc kubenswrapper[4874]: E0217 17:25:03.407265 4874 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:25:03 crc kubenswrapper[4874]: E0217 17:25:03.407454 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n699h646hf7h59dhdch5d8h679h9dhdch5c7hd5h5bch655h5bfh674h596h5d6h64dh65bh694h67fh66dh5bdhd7h568h697h58bh5b4h59fh694h584h656q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6zkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(cc29c300-b515-47d8-9326-1839ed7772b4): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:25:03 crc kubenswrapper[4874]: E0217 17:25:03.408927 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:25:03 crc kubenswrapper[4874]: I0217 17:25:03.771643 4874 generic.go:334] "Generic (PLEG): container finished" podID="b79d3a64-9e2f-4cdf-9544-354e44db5eca" containerID="09b01ae278a2cc3b07cbbae28811174c626600d3f834169b0760ff2dc30e3827" exitCode=0 Feb 17 17:25:03 crc kubenswrapper[4874]: I0217 17:25:03.771741 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tvbb4" event={"ID":"b79d3a64-9e2f-4cdf-9544-354e44db5eca","Type":"ContainerDied","Data":"09b01ae278a2cc3b07cbbae28811174c626600d3f834169b0760ff2dc30e3827"} Feb 17 17:25:03 crc kubenswrapper[4874]: I0217 17:25:03.771994 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tvbb4" event={"ID":"b79d3a64-9e2f-4cdf-9544-354e44db5eca","Type":"ContainerDied","Data":"39ab08689ab27ee007ceb451072dbb830dbaf8e2137fa4638123181bcfd71d21"} Feb 17 17:25:03 crc kubenswrapper[4874]: I0217 17:25:03.772011 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39ab08689ab27ee007ceb451072dbb830dbaf8e2137fa4638123181bcfd71d21" Feb 17 17:25:03 crc kubenswrapper[4874]: I0217 17:25:03.858929 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tvbb4" Feb 17 17:25:03 crc kubenswrapper[4874]: I0217 17:25:03.986019 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b79d3a64-9e2f-4cdf-9544-354e44db5eca-catalog-content\") pod \"b79d3a64-9e2f-4cdf-9544-354e44db5eca\" (UID: \"b79d3a64-9e2f-4cdf-9544-354e44db5eca\") " Feb 17 17:25:03 crc kubenswrapper[4874]: I0217 17:25:03.986245 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b79d3a64-9e2f-4cdf-9544-354e44db5eca-utilities\") pod \"b79d3a64-9e2f-4cdf-9544-354e44db5eca\" (UID: \"b79d3a64-9e2f-4cdf-9544-354e44db5eca\") " Feb 17 17:25:03 crc kubenswrapper[4874]: I0217 17:25:03.986284 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbmkc\" (UniqueName: \"kubernetes.io/projected/b79d3a64-9e2f-4cdf-9544-354e44db5eca-kube-api-access-jbmkc\") pod \"b79d3a64-9e2f-4cdf-9544-354e44db5eca\" (UID: \"b79d3a64-9e2f-4cdf-9544-354e44db5eca\") " Feb 17 17:25:03 crc kubenswrapper[4874]: I0217 17:25:03.987131 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b79d3a64-9e2f-4cdf-9544-354e44db5eca-utilities" (OuterVolumeSpecName: "utilities") pod "b79d3a64-9e2f-4cdf-9544-354e44db5eca" (UID: "b79d3a64-9e2f-4cdf-9544-354e44db5eca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:25:04 crc kubenswrapper[4874]: I0217 17:25:04.002632 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b79d3a64-9e2f-4cdf-9544-354e44db5eca-kube-api-access-jbmkc" (OuterVolumeSpecName: "kube-api-access-jbmkc") pod "b79d3a64-9e2f-4cdf-9544-354e44db5eca" (UID: "b79d3a64-9e2f-4cdf-9544-354e44db5eca"). InnerVolumeSpecName "kube-api-access-jbmkc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:25:04 crc kubenswrapper[4874]: I0217 17:25:04.060858 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b79d3a64-9e2f-4cdf-9544-354e44db5eca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b79d3a64-9e2f-4cdf-9544-354e44db5eca" (UID: "b79d3a64-9e2f-4cdf-9544-354e44db5eca"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:25:04 crc kubenswrapper[4874]: I0217 17:25:04.089355 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b79d3a64-9e2f-4cdf-9544-354e44db5eca-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:25:04 crc kubenswrapper[4874]: I0217 17:25:04.089384 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b79d3a64-9e2f-4cdf-9544-354e44db5eca-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:25:04 crc kubenswrapper[4874]: I0217 17:25:04.089393 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbmkc\" (UniqueName: \"kubernetes.io/projected/b79d3a64-9e2f-4cdf-9544-354e44db5eca-kube-api-access-jbmkc\") on node \"crc\" DevicePath \"\"" Feb 17 17:25:04 crc kubenswrapper[4874]: I0217 17:25:04.782823 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tvbb4" Feb 17 17:25:04 crc kubenswrapper[4874]: I0217 17:25:04.814985 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tvbb4"] Feb 17 17:25:04 crc kubenswrapper[4874]: I0217 17:25:04.827531 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tvbb4"] Feb 17 17:25:06 crc kubenswrapper[4874]: I0217 17:25:06.470426 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b79d3a64-9e2f-4cdf-9544-354e44db5eca" path="/var/lib/kubelet/pods/b79d3a64-9e2f-4cdf-9544-354e44db5eca/volumes" Feb 17 17:25:08 crc kubenswrapper[4874]: I0217 17:25:08.908572 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-df4gv" Feb 17 17:25:08 crc kubenswrapper[4874]: I0217 17:25:08.976780 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-df4gv" Feb 17 17:25:09 crc kubenswrapper[4874]: I0217 17:25:09.148929 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-df4gv"] Feb 17 17:25:10 crc kubenswrapper[4874]: I0217 17:25:10.860820 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-df4gv" podUID="863339e7-9aad-4bdc-bde6-58abd451a9f0" containerName="registry-server" containerID="cri-o://9645d265a4419f432b084f276de0dc9e77c868352f0acbd73aa0fb9dc385e191" gracePeriod=2 Feb 17 17:25:11 crc kubenswrapper[4874]: I0217 17:25:11.426730 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-df4gv" Feb 17 17:25:11 crc kubenswrapper[4874]: I0217 17:25:11.495359 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/863339e7-9aad-4bdc-bde6-58abd451a9f0-catalog-content\") pod \"863339e7-9aad-4bdc-bde6-58abd451a9f0\" (UID: \"863339e7-9aad-4bdc-bde6-58abd451a9f0\") " Feb 17 17:25:11 crc kubenswrapper[4874]: I0217 17:25:11.495479 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qj66q\" (UniqueName: \"kubernetes.io/projected/863339e7-9aad-4bdc-bde6-58abd451a9f0-kube-api-access-qj66q\") pod \"863339e7-9aad-4bdc-bde6-58abd451a9f0\" (UID: \"863339e7-9aad-4bdc-bde6-58abd451a9f0\") " Feb 17 17:25:11 crc kubenswrapper[4874]: I0217 17:25:11.495554 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/863339e7-9aad-4bdc-bde6-58abd451a9f0-utilities\") pod \"863339e7-9aad-4bdc-bde6-58abd451a9f0\" (UID: \"863339e7-9aad-4bdc-bde6-58abd451a9f0\") " Feb 17 17:25:11 crc kubenswrapper[4874]: I0217 17:25:11.496800 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/863339e7-9aad-4bdc-bde6-58abd451a9f0-utilities" (OuterVolumeSpecName: "utilities") pod "863339e7-9aad-4bdc-bde6-58abd451a9f0" (UID: "863339e7-9aad-4bdc-bde6-58abd451a9f0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:25:11 crc kubenswrapper[4874]: I0217 17:25:11.502589 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/863339e7-9aad-4bdc-bde6-58abd451a9f0-kube-api-access-qj66q" (OuterVolumeSpecName: "kube-api-access-qj66q") pod "863339e7-9aad-4bdc-bde6-58abd451a9f0" (UID: "863339e7-9aad-4bdc-bde6-58abd451a9f0"). InnerVolumeSpecName "kube-api-access-qj66q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:25:11 crc kubenswrapper[4874]: I0217 17:25:11.598857 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qj66q\" (UniqueName: \"kubernetes.io/projected/863339e7-9aad-4bdc-bde6-58abd451a9f0-kube-api-access-qj66q\") on node \"crc\" DevicePath \"\"" Feb 17 17:25:11 crc kubenswrapper[4874]: I0217 17:25:11.598886 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/863339e7-9aad-4bdc-bde6-58abd451a9f0-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:25:11 crc kubenswrapper[4874]: I0217 17:25:11.631330 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/863339e7-9aad-4bdc-bde6-58abd451a9f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "863339e7-9aad-4bdc-bde6-58abd451a9f0" (UID: "863339e7-9aad-4bdc-bde6-58abd451a9f0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:25:11 crc kubenswrapper[4874]: I0217 17:25:11.701719 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/863339e7-9aad-4bdc-bde6-58abd451a9f0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:25:11 crc kubenswrapper[4874]: I0217 17:25:11.877612 4874 generic.go:334] "Generic (PLEG): container finished" podID="863339e7-9aad-4bdc-bde6-58abd451a9f0" containerID="9645d265a4419f432b084f276de0dc9e77c868352f0acbd73aa0fb9dc385e191" exitCode=0 Feb 17 17:25:11 crc kubenswrapper[4874]: I0217 17:25:11.877673 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-df4gv" event={"ID":"863339e7-9aad-4bdc-bde6-58abd451a9f0","Type":"ContainerDied","Data":"9645d265a4419f432b084f276de0dc9e77c868352f0acbd73aa0fb9dc385e191"} Feb 17 17:25:11 crc kubenswrapper[4874]: I0217 17:25:11.877686 4874 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-df4gv" Feb 17 17:25:11 crc kubenswrapper[4874]: I0217 17:25:11.877714 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-df4gv" event={"ID":"863339e7-9aad-4bdc-bde6-58abd451a9f0","Type":"ContainerDied","Data":"8f3e18c06d1f0542fa5e927adeea0ad2f0422efb4c7d47bcbec418bd9c08c8f8"} Feb 17 17:25:11 crc kubenswrapper[4874]: I0217 17:25:11.877736 4874 scope.go:117] "RemoveContainer" containerID="9645d265a4419f432b084f276de0dc9e77c868352f0acbd73aa0fb9dc385e191" Feb 17 17:25:11 crc kubenswrapper[4874]: I0217 17:25:11.920554 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-df4gv"] Feb 17 17:25:11 crc kubenswrapper[4874]: I0217 17:25:11.922149 4874 scope.go:117] "RemoveContainer" containerID="0a143faca7e9e95760e58aa88576c2e551cb82f689ab685c2c58f6141f556a72" Feb 17 17:25:11 crc kubenswrapper[4874]: I0217 17:25:11.935799 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-df4gv"] Feb 17 17:25:11 crc kubenswrapper[4874]: I0217 17:25:11.949084 4874 scope.go:117] "RemoveContainer" containerID="c722d8ad0b403c20dd62ca3fb19d5c6b369beb4eddb738eeaaccbda8d73edeaa" Feb 17 17:25:12 crc kubenswrapper[4874]: I0217 17:25:12.004471 4874 scope.go:117] "RemoveContainer" containerID="9645d265a4419f432b084f276de0dc9e77c868352f0acbd73aa0fb9dc385e191" Feb 17 17:25:12 crc kubenswrapper[4874]: E0217 17:25:12.005146 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9645d265a4419f432b084f276de0dc9e77c868352f0acbd73aa0fb9dc385e191\": container with ID starting with 9645d265a4419f432b084f276de0dc9e77c868352f0acbd73aa0fb9dc385e191 not found: ID does not exist" containerID="9645d265a4419f432b084f276de0dc9e77c868352f0acbd73aa0fb9dc385e191" Feb 17 17:25:12 crc kubenswrapper[4874]: I0217 17:25:12.005174 4874 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9645d265a4419f432b084f276de0dc9e77c868352f0acbd73aa0fb9dc385e191"} err="failed to get container status \"9645d265a4419f432b084f276de0dc9e77c868352f0acbd73aa0fb9dc385e191\": rpc error: code = NotFound desc = could not find container \"9645d265a4419f432b084f276de0dc9e77c868352f0acbd73aa0fb9dc385e191\": container with ID starting with 9645d265a4419f432b084f276de0dc9e77c868352f0acbd73aa0fb9dc385e191 not found: ID does not exist" Feb 17 17:25:12 crc kubenswrapper[4874]: I0217 17:25:12.005194 4874 scope.go:117] "RemoveContainer" containerID="0a143faca7e9e95760e58aa88576c2e551cb82f689ab685c2c58f6141f556a72" Feb 17 17:25:12 crc kubenswrapper[4874]: E0217 17:25:12.005524 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a143faca7e9e95760e58aa88576c2e551cb82f689ab685c2c58f6141f556a72\": container with ID starting with 0a143faca7e9e95760e58aa88576c2e551cb82f689ab685c2c58f6141f556a72 not found: ID does not exist" containerID="0a143faca7e9e95760e58aa88576c2e551cb82f689ab685c2c58f6141f556a72" Feb 17 17:25:12 crc kubenswrapper[4874]: I0217 17:25:12.005571 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a143faca7e9e95760e58aa88576c2e551cb82f689ab685c2c58f6141f556a72"} err="failed to get container status \"0a143faca7e9e95760e58aa88576c2e551cb82f689ab685c2c58f6141f556a72\": rpc error: code = NotFound desc = could not find container \"0a143faca7e9e95760e58aa88576c2e551cb82f689ab685c2c58f6141f556a72\": container with ID starting with 0a143faca7e9e95760e58aa88576c2e551cb82f689ab685c2c58f6141f556a72 not found: ID does not exist" Feb 17 17:25:12 crc kubenswrapper[4874]: I0217 17:25:12.005597 4874 scope.go:117] "RemoveContainer" containerID="c722d8ad0b403c20dd62ca3fb19d5c6b369beb4eddb738eeaaccbda8d73edeaa" Feb 17 17:25:12 crc kubenswrapper[4874]: E0217 
17:25:12.005922 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c722d8ad0b403c20dd62ca3fb19d5c6b369beb4eddb738eeaaccbda8d73edeaa\": container with ID starting with c722d8ad0b403c20dd62ca3fb19d5c6b369beb4eddb738eeaaccbda8d73edeaa not found: ID does not exist" containerID="c722d8ad0b403c20dd62ca3fb19d5c6b369beb4eddb738eeaaccbda8d73edeaa" Feb 17 17:25:12 crc kubenswrapper[4874]: I0217 17:25:12.005951 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c722d8ad0b403c20dd62ca3fb19d5c6b369beb4eddb738eeaaccbda8d73edeaa"} err="failed to get container status \"c722d8ad0b403c20dd62ca3fb19d5c6b369beb4eddb738eeaaccbda8d73edeaa\": rpc error: code = NotFound desc = could not find container \"c722d8ad0b403c20dd62ca3fb19d5c6b369beb4eddb738eeaaccbda8d73edeaa\": container with ID starting with c722d8ad0b403c20dd62ca3fb19d5c6b369beb4eddb738eeaaccbda8d73edeaa not found: ID does not exist" Feb 17 17:25:12 crc kubenswrapper[4874]: I0217 17:25:12.472671 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="863339e7-9aad-4bdc-bde6-58abd451a9f0" path="/var/lib/kubelet/pods/863339e7-9aad-4bdc-bde6-58abd451a9f0/volumes" Feb 17 17:25:14 crc kubenswrapper[4874]: E0217 17:25:14.460723 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:25:16 crc kubenswrapper[4874]: E0217 17:25:16.593137 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in 
quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 17:25:16 crc kubenswrapper[4874]: E0217 17:25:16.593517 4874 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 17:25:16 crc kubenswrapper[4874]: E0217 17:25:16.593696 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtgnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-ddhb8_openstack(122736d5-78f5-42dc-b6ab-343724bac19d): ErrImagePull: initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:25:16 crc kubenswrapper[4874]: E0217 17:25:16.594932 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:25:25 crc kubenswrapper[4874]: E0217 17:25:25.459733 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:25:29 crc kubenswrapper[4874]: E0217 17:25:29.460246 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:25:37 crc kubenswrapper[4874]: E0217 17:25:37.460048 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:25:44 crc kubenswrapper[4874]: E0217 17:25:44.459675 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:25:48 crc kubenswrapper[4874]: E0217 17:25:48.459433 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:25:56 crc kubenswrapper[4874]: E0217 17:25:56.460804 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:26:03 crc kubenswrapper[4874]: E0217 17:26:03.460002 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:26:07 crc kubenswrapper[4874]: E0217 17:26:07.460670 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:26:18 crc kubenswrapper[4874]: E0217 17:26:18.460483 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:26:20 crc kubenswrapper[4874]: E0217 17:26:20.468276 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:26:27 crc kubenswrapper[4874]: I0217 17:26:27.725025 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:26:27 crc kubenswrapper[4874]: I0217 17:26:27.725638 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:26:29 crc kubenswrapper[4874]: E0217 17:26:29.459111 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:26:32 crc kubenswrapper[4874]: E0217 17:26:32.459644 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:26:43 crc kubenswrapper[4874]: E0217 17:26:43.460577 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:26:45 crc kubenswrapper[4874]: E0217 17:26:45.460536 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:26:54 crc kubenswrapper[4874]: E0217 17:26:54.460106 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:26:57 crc kubenswrapper[4874]: I0217 17:26:57.725384 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:26:57 crc kubenswrapper[4874]: I0217 17:26:57.725719 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:26:59 crc kubenswrapper[4874]: E0217 17:26:59.459735 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:27:05 crc kubenswrapper[4874]: E0217 17:27:05.460123 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:27:12 crc kubenswrapper[4874]: E0217 17:27:12.459493 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:27:18 crc kubenswrapper[4874]: E0217 17:27:18.459747 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:27:27 crc kubenswrapper[4874]: E0217 17:27:27.461548 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:27:27 crc kubenswrapper[4874]: I0217 17:27:27.724783 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:27:27 crc kubenswrapper[4874]: I0217 17:27:27.724862 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:27:27 crc kubenswrapper[4874]: I0217 17:27:27.724924 4874 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" Feb 17 17:27:27 crc kubenswrapper[4874]: I0217 17:27:27.726004 4874 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"383ca1f126863bbf4daedf6695c81049c8a1fa867632ee9a9a951570814663c2"} pod="openshift-machine-config-operator/machine-config-daemon-cccdg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 17:27:27 crc kubenswrapper[4874]: 
I0217 17:27:27.726233 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" containerID="cri-o://383ca1f126863bbf4daedf6695c81049c8a1fa867632ee9a9a951570814663c2" gracePeriod=600 Feb 17 17:27:28 crc kubenswrapper[4874]: I0217 17:27:28.478591 4874 generic.go:334] "Generic (PLEG): container finished" podID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerID="383ca1f126863bbf4daedf6695c81049c8a1fa867632ee9a9a951570814663c2" exitCode=0 Feb 17 17:27:28 crc kubenswrapper[4874]: I0217 17:27:28.480349 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerDied","Data":"383ca1f126863bbf4daedf6695c81049c8a1fa867632ee9a9a951570814663c2"} Feb 17 17:27:28 crc kubenswrapper[4874]: I0217 17:27:28.480399 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerStarted","Data":"3d9d0d53ffed45cc67b402a7bca9a663e3eb985cbaeceb3013558f63c7a901f3"} Feb 17 17:27:28 crc kubenswrapper[4874]: I0217 17:27:28.480417 4874 scope.go:117] "RemoveContainer" containerID="eceef711827f302ce1c5e08eb20c45a53f4d07e75160aef288fe7efb1172e930" Feb 17 17:27:29 crc kubenswrapper[4874]: E0217 17:27:29.459465 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:27:40 crc kubenswrapper[4874]: E0217 17:27:40.468142 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:27:42 crc kubenswrapper[4874]: E0217 17:27:42.459317 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:27:52 crc kubenswrapper[4874]: E0217 17:27:52.459269 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:27:57 crc kubenswrapper[4874]: E0217 17:27:57.459092 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:28:06 crc kubenswrapper[4874]: E0217 17:28:06.459456 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:28:11 crc kubenswrapper[4874]: E0217 17:28:11.460952 4874 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:28:19 crc kubenswrapper[4874]: E0217 17:28:19.459274 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:28:25 crc kubenswrapper[4874]: E0217 17:28:25.461015 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:28:32 crc kubenswrapper[4874]: E0217 17:28:32.463182 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:28:37 crc kubenswrapper[4874]: I0217 17:28:37.199336 4874 generic.go:334] "Generic (PLEG): container finished" podID="8331d1e2-3512-4f93-a2aa-482f566f53c9" containerID="5eed20e0e6fb61284d3c3a7c19d20715d1af242e14dfe0d7d8cd5151005b6dd5" exitCode=2 Feb 17 17:28:37 crc kubenswrapper[4874]: I0217 17:28:37.199443 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-585pl" 
event={"ID":"8331d1e2-3512-4f93-a2aa-482f566f53c9","Type":"ContainerDied","Data":"5eed20e0e6fb61284d3c3a7c19d20715d1af242e14dfe0d7d8cd5151005b6dd5"} Feb 17 17:28:37 crc kubenswrapper[4874]: E0217 17:28:37.459061 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:28:39 crc kubenswrapper[4874]: I0217 17:28:39.362097 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-585pl" Feb 17 17:28:39 crc kubenswrapper[4874]: I0217 17:28:39.487692 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8rm6\" (UniqueName: \"kubernetes.io/projected/8331d1e2-3512-4f93-a2aa-482f566f53c9-kube-api-access-w8rm6\") pod \"8331d1e2-3512-4f93-a2aa-482f566f53c9\" (UID: \"8331d1e2-3512-4f93-a2aa-482f566f53c9\") " Feb 17 17:28:39 crc kubenswrapper[4874]: I0217 17:28:39.488039 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8331d1e2-3512-4f93-a2aa-482f566f53c9-inventory\") pod \"8331d1e2-3512-4f93-a2aa-482f566f53c9\" (UID: \"8331d1e2-3512-4f93-a2aa-482f566f53c9\") " Feb 17 17:28:39 crc kubenswrapper[4874]: I0217 17:28:39.488280 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8331d1e2-3512-4f93-a2aa-482f566f53c9-ssh-key-openstack-edpm-ipam\") pod \"8331d1e2-3512-4f93-a2aa-482f566f53c9\" (UID: \"8331d1e2-3512-4f93-a2aa-482f566f53c9\") " Feb 17 17:28:39 crc kubenswrapper[4874]: I0217 17:28:39.509823 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/8331d1e2-3512-4f93-a2aa-482f566f53c9-kube-api-access-w8rm6" (OuterVolumeSpecName: "kube-api-access-w8rm6") pod "8331d1e2-3512-4f93-a2aa-482f566f53c9" (UID: "8331d1e2-3512-4f93-a2aa-482f566f53c9"). InnerVolumeSpecName "kube-api-access-w8rm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:28:39 crc kubenswrapper[4874]: I0217 17:28:39.531483 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8331d1e2-3512-4f93-a2aa-482f566f53c9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8331d1e2-3512-4f93-a2aa-482f566f53c9" (UID: "8331d1e2-3512-4f93-a2aa-482f566f53c9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:28:39 crc kubenswrapper[4874]: I0217 17:28:39.535149 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8331d1e2-3512-4f93-a2aa-482f566f53c9-inventory" (OuterVolumeSpecName: "inventory") pod "8331d1e2-3512-4f93-a2aa-482f566f53c9" (UID: "8331d1e2-3512-4f93-a2aa-482f566f53c9"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:28:39 crc kubenswrapper[4874]: I0217 17:28:39.592771 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8rm6\" (UniqueName: \"kubernetes.io/projected/8331d1e2-3512-4f93-a2aa-482f566f53c9-kube-api-access-w8rm6\") on node \"crc\" DevicePath \"\"" Feb 17 17:28:39 crc kubenswrapper[4874]: I0217 17:28:39.592801 4874 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8331d1e2-3512-4f93-a2aa-482f566f53c9-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 17:28:39 crc kubenswrapper[4874]: I0217 17:28:39.592811 4874 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8331d1e2-3512-4f93-a2aa-482f566f53c9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 17:28:40 crc kubenswrapper[4874]: I0217 17:28:40.233539 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-585pl" event={"ID":"8331d1e2-3512-4f93-a2aa-482f566f53c9","Type":"ContainerDied","Data":"29eeae173a65b9bf1977faa554a847d74ff8f45074aa6082d183d24f9fbebf94"} Feb 17 17:28:40 crc kubenswrapper[4874]: I0217 17:28:40.233861 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29eeae173a65b9bf1977faa554a847d74ff8f45074aa6082d183d24f9fbebf94" Feb 17 17:28:40 crc kubenswrapper[4874]: I0217 17:28:40.233588 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-585pl" Feb 17 17:28:45 crc kubenswrapper[4874]: E0217 17:28:45.461326 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:28:49 crc kubenswrapper[4874]: E0217 17:28:49.459695 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:28:58 crc kubenswrapper[4874]: E0217 17:28:58.459507 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:29:00 crc kubenswrapper[4874]: E0217 17:29:00.466972 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:29:10 crc kubenswrapper[4874]: E0217 17:29:10.472973 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:29:11 crc kubenswrapper[4874]: E0217 17:29:11.459099 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:29:17 crc kubenswrapper[4874]: I0217 17:29:17.660459 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tr7wf/must-gather-9kwsh"] Feb 17 17:29:17 crc kubenswrapper[4874]: E0217 17:29:17.661607 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b79d3a64-9e2f-4cdf-9544-354e44db5eca" containerName="extract-utilities" Feb 17 17:29:17 crc kubenswrapper[4874]: I0217 17:29:17.661624 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="b79d3a64-9e2f-4cdf-9544-354e44db5eca" containerName="extract-utilities" Feb 17 17:29:17 crc kubenswrapper[4874]: E0217 17:29:17.661655 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b79d3a64-9e2f-4cdf-9544-354e44db5eca" containerName="extract-content" Feb 17 17:29:17 crc kubenswrapper[4874]: I0217 17:29:17.661663 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="b79d3a64-9e2f-4cdf-9544-354e44db5eca" containerName="extract-content" Feb 17 17:29:17 crc kubenswrapper[4874]: E0217 17:29:17.661683 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b79d3a64-9e2f-4cdf-9544-354e44db5eca" containerName="registry-server" Feb 17 17:29:17 crc kubenswrapper[4874]: I0217 17:29:17.661690 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="b79d3a64-9e2f-4cdf-9544-354e44db5eca" containerName="registry-server" Feb 17 17:29:17 crc kubenswrapper[4874]: E0217 17:29:17.661703 
4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="863339e7-9aad-4bdc-bde6-58abd451a9f0" containerName="extract-utilities" Feb 17 17:29:17 crc kubenswrapper[4874]: I0217 17:29:17.661709 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="863339e7-9aad-4bdc-bde6-58abd451a9f0" containerName="extract-utilities" Feb 17 17:29:17 crc kubenswrapper[4874]: E0217 17:29:17.661730 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8331d1e2-3512-4f93-a2aa-482f566f53c9" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:29:17 crc kubenswrapper[4874]: I0217 17:29:17.661739 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="8331d1e2-3512-4f93-a2aa-482f566f53c9" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:29:17 crc kubenswrapper[4874]: E0217 17:29:17.661759 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="863339e7-9aad-4bdc-bde6-58abd451a9f0" containerName="extract-content" Feb 17 17:29:17 crc kubenswrapper[4874]: I0217 17:29:17.661767 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="863339e7-9aad-4bdc-bde6-58abd451a9f0" containerName="extract-content" Feb 17 17:29:17 crc kubenswrapper[4874]: E0217 17:29:17.661780 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="863339e7-9aad-4bdc-bde6-58abd451a9f0" containerName="registry-server" Feb 17 17:29:17 crc kubenswrapper[4874]: I0217 17:29:17.661788 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="863339e7-9aad-4bdc-bde6-58abd451a9f0" containerName="registry-server" Feb 17 17:29:17 crc kubenswrapper[4874]: I0217 17:29:17.662110 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="b79d3a64-9e2f-4cdf-9544-354e44db5eca" containerName="registry-server" Feb 17 17:29:17 crc kubenswrapper[4874]: I0217 17:29:17.662135 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="8331d1e2-3512-4f93-a2aa-482f566f53c9" 
containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:29:17 crc kubenswrapper[4874]: I0217 17:29:17.662164 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="863339e7-9aad-4bdc-bde6-58abd451a9f0" containerName="registry-server" Feb 17 17:29:17 crc kubenswrapper[4874]: I0217 17:29:17.663697 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tr7wf/must-gather-9kwsh" Feb 17 17:29:17 crc kubenswrapper[4874]: I0217 17:29:17.672695 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tr7wf"/"openshift-service-ca.crt" Feb 17 17:29:17 crc kubenswrapper[4874]: I0217 17:29:17.672910 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tr7wf"/"kube-root-ca.crt" Feb 17 17:29:17 crc kubenswrapper[4874]: I0217 17:29:17.703364 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tr7wf/must-gather-9kwsh"] Feb 17 17:29:17 crc kubenswrapper[4874]: I0217 17:29:17.729663 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59dz7\" (UniqueName: \"kubernetes.io/projected/1a6b8617-c698-42b3-9ba1-329f44aab8aa-kube-api-access-59dz7\") pod \"must-gather-9kwsh\" (UID: \"1a6b8617-c698-42b3-9ba1-329f44aab8aa\") " pod="openshift-must-gather-tr7wf/must-gather-9kwsh" Feb 17 17:29:17 crc kubenswrapper[4874]: I0217 17:29:17.730209 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1a6b8617-c698-42b3-9ba1-329f44aab8aa-must-gather-output\") pod \"must-gather-9kwsh\" (UID: \"1a6b8617-c698-42b3-9ba1-329f44aab8aa\") " pod="openshift-must-gather-tr7wf/must-gather-9kwsh" Feb 17 17:29:17 crc kubenswrapper[4874]: I0217 17:29:17.832748 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1a6b8617-c698-42b3-9ba1-329f44aab8aa-must-gather-output\") pod \"must-gather-9kwsh\" (UID: \"1a6b8617-c698-42b3-9ba1-329f44aab8aa\") " pod="openshift-must-gather-tr7wf/must-gather-9kwsh" Feb 17 17:29:17 crc kubenswrapper[4874]: I0217 17:29:17.833174 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59dz7\" (UniqueName: \"kubernetes.io/projected/1a6b8617-c698-42b3-9ba1-329f44aab8aa-kube-api-access-59dz7\") pod \"must-gather-9kwsh\" (UID: \"1a6b8617-c698-42b3-9ba1-329f44aab8aa\") " pod="openshift-must-gather-tr7wf/must-gather-9kwsh" Feb 17 17:29:17 crc kubenswrapper[4874]: I0217 17:29:17.833305 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1a6b8617-c698-42b3-9ba1-329f44aab8aa-must-gather-output\") pod \"must-gather-9kwsh\" (UID: \"1a6b8617-c698-42b3-9ba1-329f44aab8aa\") " pod="openshift-must-gather-tr7wf/must-gather-9kwsh" Feb 17 17:29:17 crc kubenswrapper[4874]: I0217 17:29:17.863604 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59dz7\" (UniqueName: \"kubernetes.io/projected/1a6b8617-c698-42b3-9ba1-329f44aab8aa-kube-api-access-59dz7\") pod \"must-gather-9kwsh\" (UID: \"1a6b8617-c698-42b3-9ba1-329f44aab8aa\") " pod="openshift-must-gather-tr7wf/must-gather-9kwsh" Feb 17 17:29:17 crc kubenswrapper[4874]: I0217 17:29:17.990969 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tr7wf/must-gather-9kwsh" Feb 17 17:29:18 crc kubenswrapper[4874]: I0217 17:29:18.531110 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tr7wf/must-gather-9kwsh"] Feb 17 17:29:19 crc kubenswrapper[4874]: I0217 17:29:19.688513 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tr7wf/must-gather-9kwsh" event={"ID":"1a6b8617-c698-42b3-9ba1-329f44aab8aa","Type":"ContainerStarted","Data":"1cc715f9e2d81bb556f61ed018a2d9ef8ea52c0ceeb2b6f53cda3098c50e39b9"} Feb 17 17:29:24 crc kubenswrapper[4874]: E0217 17:29:24.460013 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:29:25 crc kubenswrapper[4874]: E0217 17:29:25.458495 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:29:25 crc kubenswrapper[4874]: I0217 17:29:25.751982 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tr7wf/must-gather-9kwsh" event={"ID":"1a6b8617-c698-42b3-9ba1-329f44aab8aa","Type":"ContainerStarted","Data":"98e1677a11fed94ec851d62d6afd1c6a1d8036a400b262f62abe3bdbc663c619"} Feb 17 17:29:26 crc kubenswrapper[4874]: I0217 17:29:26.765359 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tr7wf/must-gather-9kwsh" 
event={"ID":"1a6b8617-c698-42b3-9ba1-329f44aab8aa","Type":"ContainerStarted","Data":"3e1bdc5edd484ef9598bf0b3597bd0f09ad8d37e83c70e7eaaefe02f55717d02"} Feb 17 17:29:26 crc kubenswrapper[4874]: I0217 17:29:26.790312 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-tr7wf/must-gather-9kwsh" podStartSLOduration=3.235997521 podStartE2EDuration="9.79028322s" podCreationTimestamp="2026-02-17 17:29:17 +0000 UTC" firstStartedPulling="2026-02-17 17:29:18.842617651 +0000 UTC m=+5169.137006212" lastFinishedPulling="2026-02-17 17:29:25.39690335 +0000 UTC m=+5175.691291911" observedRunningTime="2026-02-17 17:29:26.781401481 +0000 UTC m=+5177.075790072" watchObservedRunningTime="2026-02-17 17:29:26.79028322 +0000 UTC m=+5177.084671861" Feb 17 17:29:30 crc kubenswrapper[4874]: E0217 17:29:30.177182 4874 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.73:51134->38.102.83.73:34183: write tcp 38.102.83.73:51134->38.102.83.73:34183: write: broken pipe Feb 17 17:29:31 crc kubenswrapper[4874]: I0217 17:29:31.469715 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tr7wf/crc-debug-kf8k8"] Feb 17 17:29:31 crc kubenswrapper[4874]: I0217 17:29:31.472000 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tr7wf/crc-debug-kf8k8" Feb 17 17:29:31 crc kubenswrapper[4874]: I0217 17:29:31.473922 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-tr7wf"/"default-dockercfg-dfqtd" Feb 17 17:29:31 crc kubenswrapper[4874]: I0217 17:29:31.586552 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4918d8ed-3d1c-4705-b9f6-339d3554ef0b-host\") pod \"crc-debug-kf8k8\" (UID: \"4918d8ed-3d1c-4705-b9f6-339d3554ef0b\") " pod="openshift-must-gather-tr7wf/crc-debug-kf8k8" Feb 17 17:29:31 crc kubenswrapper[4874]: I0217 17:29:31.586807 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9fcv\" (UniqueName: \"kubernetes.io/projected/4918d8ed-3d1c-4705-b9f6-339d3554ef0b-kube-api-access-n9fcv\") pod \"crc-debug-kf8k8\" (UID: \"4918d8ed-3d1c-4705-b9f6-339d3554ef0b\") " pod="openshift-must-gather-tr7wf/crc-debug-kf8k8" Feb 17 17:29:31 crc kubenswrapper[4874]: I0217 17:29:31.689353 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4918d8ed-3d1c-4705-b9f6-339d3554ef0b-host\") pod \"crc-debug-kf8k8\" (UID: \"4918d8ed-3d1c-4705-b9f6-339d3554ef0b\") " pod="openshift-must-gather-tr7wf/crc-debug-kf8k8" Feb 17 17:29:31 crc kubenswrapper[4874]: I0217 17:29:31.689519 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9fcv\" (UniqueName: \"kubernetes.io/projected/4918d8ed-3d1c-4705-b9f6-339d3554ef0b-kube-api-access-n9fcv\") pod \"crc-debug-kf8k8\" (UID: \"4918d8ed-3d1c-4705-b9f6-339d3554ef0b\") " pod="openshift-must-gather-tr7wf/crc-debug-kf8k8" Feb 17 17:29:31 crc kubenswrapper[4874]: I0217 17:29:31.689544 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/4918d8ed-3d1c-4705-b9f6-339d3554ef0b-host\") pod \"crc-debug-kf8k8\" (UID: \"4918d8ed-3d1c-4705-b9f6-339d3554ef0b\") " pod="openshift-must-gather-tr7wf/crc-debug-kf8k8" Feb 17 17:29:31 crc kubenswrapper[4874]: I0217 17:29:31.708756 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9fcv\" (UniqueName: \"kubernetes.io/projected/4918d8ed-3d1c-4705-b9f6-339d3554ef0b-kube-api-access-n9fcv\") pod \"crc-debug-kf8k8\" (UID: \"4918d8ed-3d1c-4705-b9f6-339d3554ef0b\") " pod="openshift-must-gather-tr7wf/crc-debug-kf8k8" Feb 17 17:29:31 crc kubenswrapper[4874]: I0217 17:29:31.795161 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tr7wf/crc-debug-kf8k8" Feb 17 17:29:32 crc kubenswrapper[4874]: I0217 17:29:32.835023 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tr7wf/crc-debug-kf8k8" event={"ID":"4918d8ed-3d1c-4705-b9f6-339d3554ef0b","Type":"ContainerStarted","Data":"52eca1a43388229a10a4be970e2c4821646361c1dc9a31ffa26ba87266a26499"} Feb 17 17:29:36 crc kubenswrapper[4874]: E0217 17:29:36.460047 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:29:36 crc kubenswrapper[4874]: E0217 17:29:36.460217 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:29:43 crc kubenswrapper[4874]: I0217 17:29:43.939696 4874 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-must-gather-tr7wf/crc-debug-kf8k8" event={"ID":"4918d8ed-3d1c-4705-b9f6-339d3554ef0b","Type":"ContainerStarted","Data":"b891b1dbbe696b668ef7326f71a12b3209366b691ee2908704ce99490a4141f1"} Feb 17 17:29:43 crc kubenswrapper[4874]: I0217 17:29:43.955337 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-tr7wf/crc-debug-kf8k8" podStartSLOduration=1.8951096889999999 podStartE2EDuration="12.95531527s" podCreationTimestamp="2026-02-17 17:29:31 +0000 UTC" firstStartedPulling="2026-02-17 17:29:31.848906917 +0000 UTC m=+5182.143295478" lastFinishedPulling="2026-02-17 17:29:42.909112488 +0000 UTC m=+5193.203501059" observedRunningTime="2026-02-17 17:29:43.952121951 +0000 UTC m=+5194.246510512" watchObservedRunningTime="2026-02-17 17:29:43.95531527 +0000 UTC m=+5194.249703831" Feb 17 17:29:51 crc kubenswrapper[4874]: E0217 17:29:51.459526 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:29:51 crc kubenswrapper[4874]: E0217 17:29:51.459527 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:29:57 crc kubenswrapper[4874]: I0217 17:29:57.724320 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 17 17:29:57 crc kubenswrapper[4874]: I0217 17:29:57.724795 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:30:00 crc kubenswrapper[4874]: I0217 17:30:00.146688 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522490-2f4kc"] Feb 17 17:30:00 crc kubenswrapper[4874]: I0217 17:30:00.149112 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-2f4kc" Feb 17 17:30:00 crc kubenswrapper[4874]: I0217 17:30:00.151856 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 17:30:00 crc kubenswrapper[4874]: I0217 17:30:00.152005 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 17:30:00 crc kubenswrapper[4874]: I0217 17:30:00.161456 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522490-2f4kc"] Feb 17 17:30:00 crc kubenswrapper[4874]: I0217 17:30:00.236323 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b6381f49-ff8d-445e-98cf-6df8c11322b2-secret-volume\") pod \"collect-profiles-29522490-2f4kc\" (UID: \"b6381f49-ff8d-445e-98cf-6df8c11322b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-2f4kc" Feb 17 17:30:00 crc kubenswrapper[4874]: I0217 17:30:00.236402 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-z5s8b\" (UniqueName: \"kubernetes.io/projected/b6381f49-ff8d-445e-98cf-6df8c11322b2-kube-api-access-z5s8b\") pod \"collect-profiles-29522490-2f4kc\" (UID: \"b6381f49-ff8d-445e-98cf-6df8c11322b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-2f4kc" Feb 17 17:30:00 crc kubenswrapper[4874]: I0217 17:30:00.236704 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6381f49-ff8d-445e-98cf-6df8c11322b2-config-volume\") pod \"collect-profiles-29522490-2f4kc\" (UID: \"b6381f49-ff8d-445e-98cf-6df8c11322b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-2f4kc" Feb 17 17:30:00 crc kubenswrapper[4874]: I0217 17:30:00.338893 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b6381f49-ff8d-445e-98cf-6df8c11322b2-secret-volume\") pod \"collect-profiles-29522490-2f4kc\" (UID: \"b6381f49-ff8d-445e-98cf-6df8c11322b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-2f4kc" Feb 17 17:30:00 crc kubenswrapper[4874]: I0217 17:30:00.338975 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5s8b\" (UniqueName: \"kubernetes.io/projected/b6381f49-ff8d-445e-98cf-6df8c11322b2-kube-api-access-z5s8b\") pod \"collect-profiles-29522490-2f4kc\" (UID: \"b6381f49-ff8d-445e-98cf-6df8c11322b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-2f4kc" Feb 17 17:30:00 crc kubenswrapper[4874]: I0217 17:30:00.339081 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6381f49-ff8d-445e-98cf-6df8c11322b2-config-volume\") pod \"collect-profiles-29522490-2f4kc\" (UID: \"b6381f49-ff8d-445e-98cf-6df8c11322b2\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-2f4kc" Feb 17 17:30:00 crc kubenswrapper[4874]: I0217 17:30:00.340107 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6381f49-ff8d-445e-98cf-6df8c11322b2-config-volume\") pod \"collect-profiles-29522490-2f4kc\" (UID: \"b6381f49-ff8d-445e-98cf-6df8c11322b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-2f4kc" Feb 17 17:30:00 crc kubenswrapper[4874]: I0217 17:30:00.345321 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b6381f49-ff8d-445e-98cf-6df8c11322b2-secret-volume\") pod \"collect-profiles-29522490-2f4kc\" (UID: \"b6381f49-ff8d-445e-98cf-6df8c11322b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-2f4kc" Feb 17 17:30:00 crc kubenswrapper[4874]: I0217 17:30:00.356231 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5s8b\" (UniqueName: \"kubernetes.io/projected/b6381f49-ff8d-445e-98cf-6df8c11322b2-kube-api-access-z5s8b\") pod \"collect-profiles-29522490-2f4kc\" (UID: \"b6381f49-ff8d-445e-98cf-6df8c11322b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-2f4kc" Feb 17 17:30:00 crc kubenswrapper[4874]: I0217 17:30:00.466633 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-2f4kc" Feb 17 17:30:01 crc kubenswrapper[4874]: I0217 17:30:01.068960 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522490-2f4kc"] Feb 17 17:30:01 crc kubenswrapper[4874]: I0217 17:30:01.139258 4874 generic.go:334] "Generic (PLEG): container finished" podID="4918d8ed-3d1c-4705-b9f6-339d3554ef0b" containerID="b891b1dbbe696b668ef7326f71a12b3209366b691ee2908704ce99490a4141f1" exitCode=0 Feb 17 17:30:01 crc kubenswrapper[4874]: I0217 17:30:01.139345 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tr7wf/crc-debug-kf8k8" event={"ID":"4918d8ed-3d1c-4705-b9f6-339d3554ef0b","Type":"ContainerDied","Data":"b891b1dbbe696b668ef7326f71a12b3209366b691ee2908704ce99490a4141f1"} Feb 17 17:30:01 crc kubenswrapper[4874]: I0217 17:30:01.141023 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-2f4kc" event={"ID":"b6381f49-ff8d-445e-98cf-6df8c11322b2","Type":"ContainerStarted","Data":"0c6a7c868a1ff96ecc5b6d0065531d38124ce29870932df69d5f285ff1328464"} Feb 17 17:30:02 crc kubenswrapper[4874]: I0217 17:30:02.155269 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-2f4kc" event={"ID":"b6381f49-ff8d-445e-98cf-6df8c11322b2","Type":"ContainerStarted","Data":"e088c6290b9bfd8ea5d767e78a394c5fd198e9d6442bcc791ef124497af5c553"} Feb 17 17:30:02 crc kubenswrapper[4874]: I0217 17:30:02.291688 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tr7wf/crc-debug-kf8k8" Feb 17 17:30:02 crc kubenswrapper[4874]: I0217 17:30:02.334707 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tr7wf/crc-debug-kf8k8"] Feb 17 17:30:02 crc kubenswrapper[4874]: I0217 17:30:02.346450 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tr7wf/crc-debug-kf8k8"] Feb 17 17:30:02 crc kubenswrapper[4874]: I0217 17:30:02.386776 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4918d8ed-3d1c-4705-b9f6-339d3554ef0b-host\") pod \"4918d8ed-3d1c-4705-b9f6-339d3554ef0b\" (UID: \"4918d8ed-3d1c-4705-b9f6-339d3554ef0b\") " Feb 17 17:30:02 crc kubenswrapper[4874]: I0217 17:30:02.386942 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9fcv\" (UniqueName: \"kubernetes.io/projected/4918d8ed-3d1c-4705-b9f6-339d3554ef0b-kube-api-access-n9fcv\") pod \"4918d8ed-3d1c-4705-b9f6-339d3554ef0b\" (UID: \"4918d8ed-3d1c-4705-b9f6-339d3554ef0b\") " Feb 17 17:30:02 crc kubenswrapper[4874]: I0217 17:30:02.387237 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4918d8ed-3d1c-4705-b9f6-339d3554ef0b-host" (OuterVolumeSpecName: "host") pod "4918d8ed-3d1c-4705-b9f6-339d3554ef0b" (UID: "4918d8ed-3d1c-4705-b9f6-339d3554ef0b"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:30:02 crc kubenswrapper[4874]: I0217 17:30:02.387774 4874 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4918d8ed-3d1c-4705-b9f6-339d3554ef0b-host\") on node \"crc\" DevicePath \"\"" Feb 17 17:30:02 crc kubenswrapper[4874]: I0217 17:30:02.392387 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4918d8ed-3d1c-4705-b9f6-339d3554ef0b-kube-api-access-n9fcv" (OuterVolumeSpecName: "kube-api-access-n9fcv") pod "4918d8ed-3d1c-4705-b9f6-339d3554ef0b" (UID: "4918d8ed-3d1c-4705-b9f6-339d3554ef0b"). InnerVolumeSpecName "kube-api-access-n9fcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:30:02 crc kubenswrapper[4874]: I0217 17:30:02.471346 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4918d8ed-3d1c-4705-b9f6-339d3554ef0b" path="/var/lib/kubelet/pods/4918d8ed-3d1c-4705-b9f6-339d3554ef0b/volumes" Feb 17 17:30:02 crc kubenswrapper[4874]: I0217 17:30:02.489445 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9fcv\" (UniqueName: \"kubernetes.io/projected/4918d8ed-3d1c-4705-b9f6-339d3554ef0b-kube-api-access-n9fcv\") on node \"crc\" DevicePath \"\"" Feb 17 17:30:03 crc kubenswrapper[4874]: I0217 17:30:03.171459 4874 generic.go:334] "Generic (PLEG): container finished" podID="b6381f49-ff8d-445e-98cf-6df8c11322b2" containerID="e088c6290b9bfd8ea5d767e78a394c5fd198e9d6442bcc791ef124497af5c553" exitCode=0 Feb 17 17:30:03 crc kubenswrapper[4874]: I0217 17:30:03.171833 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-2f4kc" event={"ID":"b6381f49-ff8d-445e-98cf-6df8c11322b2","Type":"ContainerDied","Data":"e088c6290b9bfd8ea5d767e78a394c5fd198e9d6442bcc791ef124497af5c553"} Feb 17 17:30:03 crc kubenswrapper[4874]: I0217 17:30:03.174607 4874 scope.go:117] "RemoveContainer" 
containerID="b891b1dbbe696b668ef7326f71a12b3209366b691ee2908704ce99490a4141f1" Feb 17 17:30:03 crc kubenswrapper[4874]: I0217 17:30:03.174802 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tr7wf/crc-debug-kf8k8" Feb 17 17:30:03 crc kubenswrapper[4874]: E0217 17:30:03.459016 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:30:03 crc kubenswrapper[4874]: I0217 17:30:03.554729 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tr7wf/crc-debug-s6cpm"] Feb 17 17:30:03 crc kubenswrapper[4874]: E0217 17:30:03.555961 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4918d8ed-3d1c-4705-b9f6-339d3554ef0b" containerName="container-00" Feb 17 17:30:03 crc kubenswrapper[4874]: I0217 17:30:03.555986 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="4918d8ed-3d1c-4705-b9f6-339d3554ef0b" containerName="container-00" Feb 17 17:30:03 crc kubenswrapper[4874]: I0217 17:30:03.556360 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="4918d8ed-3d1c-4705-b9f6-339d3554ef0b" containerName="container-00" Feb 17 17:30:03 crc kubenswrapper[4874]: I0217 17:30:03.557350 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tr7wf/crc-debug-s6cpm" Feb 17 17:30:03 crc kubenswrapper[4874]: I0217 17:30:03.561603 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-tr7wf"/"default-dockercfg-dfqtd" Feb 17 17:30:03 crc kubenswrapper[4874]: I0217 17:30:03.578754 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-2f4kc" Feb 17 17:30:03 crc kubenswrapper[4874]: I0217 17:30:03.616721 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2d8v\" (UniqueName: \"kubernetes.io/projected/26e0e255-e381-420a-84b8-20de42cf2b75-kube-api-access-w2d8v\") pod \"crc-debug-s6cpm\" (UID: \"26e0e255-e381-420a-84b8-20de42cf2b75\") " pod="openshift-must-gather-tr7wf/crc-debug-s6cpm" Feb 17 17:30:03 crc kubenswrapper[4874]: I0217 17:30:03.616824 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/26e0e255-e381-420a-84b8-20de42cf2b75-host\") pod \"crc-debug-s6cpm\" (UID: \"26e0e255-e381-420a-84b8-20de42cf2b75\") " pod="openshift-must-gather-tr7wf/crc-debug-s6cpm" Feb 17 17:30:03 crc kubenswrapper[4874]: I0217 17:30:03.718118 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6381f49-ff8d-445e-98cf-6df8c11322b2-config-volume\") pod \"b6381f49-ff8d-445e-98cf-6df8c11322b2\" (UID: \"b6381f49-ff8d-445e-98cf-6df8c11322b2\") " Feb 17 17:30:03 crc kubenswrapper[4874]: I0217 17:30:03.718435 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b6381f49-ff8d-445e-98cf-6df8c11322b2-secret-volume\") pod \"b6381f49-ff8d-445e-98cf-6df8c11322b2\" (UID: \"b6381f49-ff8d-445e-98cf-6df8c11322b2\") " Feb 17 17:30:03 crc kubenswrapper[4874]: I0217 17:30:03.718940 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5s8b\" (UniqueName: \"kubernetes.io/projected/b6381f49-ff8d-445e-98cf-6df8c11322b2-kube-api-access-z5s8b\") pod \"b6381f49-ff8d-445e-98cf-6df8c11322b2\" (UID: \"b6381f49-ff8d-445e-98cf-6df8c11322b2\") " Feb 17 17:30:03 crc 
kubenswrapper[4874]: I0217 17:30:03.718937 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6381f49-ff8d-445e-98cf-6df8c11322b2-config-volume" (OuterVolumeSpecName: "config-volume") pod "b6381f49-ff8d-445e-98cf-6df8c11322b2" (UID: "b6381f49-ff8d-445e-98cf-6df8c11322b2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:30:03 crc kubenswrapper[4874]: I0217 17:30:03.719576 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2d8v\" (UniqueName: \"kubernetes.io/projected/26e0e255-e381-420a-84b8-20de42cf2b75-kube-api-access-w2d8v\") pod \"crc-debug-s6cpm\" (UID: \"26e0e255-e381-420a-84b8-20de42cf2b75\") " pod="openshift-must-gather-tr7wf/crc-debug-s6cpm" Feb 17 17:30:03 crc kubenswrapper[4874]: I0217 17:30:03.719624 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/26e0e255-e381-420a-84b8-20de42cf2b75-host\") pod \"crc-debug-s6cpm\" (UID: \"26e0e255-e381-420a-84b8-20de42cf2b75\") " pod="openshift-must-gather-tr7wf/crc-debug-s6cpm" Feb 17 17:30:03 crc kubenswrapper[4874]: I0217 17:30:03.719778 4874 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6381f49-ff8d-445e-98cf-6df8c11322b2-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 17:30:03 crc kubenswrapper[4874]: I0217 17:30:03.719829 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/26e0e255-e381-420a-84b8-20de42cf2b75-host\") pod \"crc-debug-s6cpm\" (UID: \"26e0e255-e381-420a-84b8-20de42cf2b75\") " pod="openshift-must-gather-tr7wf/crc-debug-s6cpm" Feb 17 17:30:03 crc kubenswrapper[4874]: I0217 17:30:03.725314 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/b6381f49-ff8d-445e-98cf-6df8c11322b2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b6381f49-ff8d-445e-98cf-6df8c11322b2" (UID: "b6381f49-ff8d-445e-98cf-6df8c11322b2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:30:03 crc kubenswrapper[4874]: I0217 17:30:03.728369 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6381f49-ff8d-445e-98cf-6df8c11322b2-kube-api-access-z5s8b" (OuterVolumeSpecName: "kube-api-access-z5s8b") pod "b6381f49-ff8d-445e-98cf-6df8c11322b2" (UID: "b6381f49-ff8d-445e-98cf-6df8c11322b2"). InnerVolumeSpecName "kube-api-access-z5s8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:30:03 crc kubenswrapper[4874]: I0217 17:30:03.735870 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2d8v\" (UniqueName: \"kubernetes.io/projected/26e0e255-e381-420a-84b8-20de42cf2b75-kube-api-access-w2d8v\") pod \"crc-debug-s6cpm\" (UID: \"26e0e255-e381-420a-84b8-20de42cf2b75\") " pod="openshift-must-gather-tr7wf/crc-debug-s6cpm" Feb 17 17:30:03 crc kubenswrapper[4874]: I0217 17:30:03.821881 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5s8b\" (UniqueName: \"kubernetes.io/projected/b6381f49-ff8d-445e-98cf-6df8c11322b2-kube-api-access-z5s8b\") on node \"crc\" DevicePath \"\"" Feb 17 17:30:03 crc kubenswrapper[4874]: I0217 17:30:03.821956 4874 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b6381f49-ff8d-445e-98cf-6df8c11322b2-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 17:30:03 crc kubenswrapper[4874]: I0217 17:30:03.896846 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tr7wf/crc-debug-s6cpm" Feb 17 17:30:03 crc kubenswrapper[4874]: W0217 17:30:03.942091 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26e0e255_e381_420a_84b8_20de42cf2b75.slice/crio-76bbec1eee39ea8c0d5e08ca933e36f778d5a4a546a47dbac9ae7e82fdc020d6 WatchSource:0}: Error finding container 76bbec1eee39ea8c0d5e08ca933e36f778d5a4a546a47dbac9ae7e82fdc020d6: Status 404 returned error can't find the container with id 76bbec1eee39ea8c0d5e08ca933e36f778d5a4a546a47dbac9ae7e82fdc020d6 Feb 17 17:30:04 crc kubenswrapper[4874]: I0217 17:30:04.199049 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tr7wf/crc-debug-s6cpm" event={"ID":"26e0e255-e381-420a-84b8-20de42cf2b75","Type":"ContainerStarted","Data":"122d7eb20d58517c45931e96314ac4f72b66c08894740c035b807c08feaa9b94"} Feb 17 17:30:04 crc kubenswrapper[4874]: I0217 17:30:04.199330 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tr7wf/crc-debug-s6cpm" event={"ID":"26e0e255-e381-420a-84b8-20de42cf2b75","Type":"ContainerStarted","Data":"76bbec1eee39ea8c0d5e08ca933e36f778d5a4a546a47dbac9ae7e82fdc020d6"} Feb 17 17:30:04 crc kubenswrapper[4874]: I0217 17:30:04.222281 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-2f4kc" event={"ID":"b6381f49-ff8d-445e-98cf-6df8c11322b2","Type":"ContainerDied","Data":"0c6a7c868a1ff96ecc5b6d0065531d38124ce29870932df69d5f285ff1328464"} Feb 17 17:30:04 crc kubenswrapper[4874]: I0217 17:30:04.222317 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c6a7c868a1ff96ecc5b6d0065531d38124ce29870932df69d5f285ff1328464" Feb 17 17:30:04 crc kubenswrapper[4874]: I0217 17:30:04.222372 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-2f4kc" Feb 17 17:30:04 crc kubenswrapper[4874]: I0217 17:30:04.291537 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tr7wf/crc-debug-s6cpm"] Feb 17 17:30:04 crc kubenswrapper[4874]: I0217 17:30:04.312129 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tr7wf/crc-debug-s6cpm"] Feb 17 17:30:04 crc kubenswrapper[4874]: I0217 17:30:04.459540 4874 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 17:30:04 crc kubenswrapper[4874]: I0217 17:30:04.664828 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522445-dkqrn"] Feb 17 17:30:04 crc kubenswrapper[4874]: I0217 17:30:04.679279 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522445-dkqrn"] Feb 17 17:30:04 crc kubenswrapper[4874]: E0217 17:30:04.696035 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:30:04 crc kubenswrapper[4874]: E0217 17:30:04.696133 4874 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:30:04 crc kubenswrapper[4874]: E0217 17:30:04.696297 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n699h646hf7h59dhdch5d8h679h9dhdch5c7hd5h5bch655h5bfh674h596h5d6h64dh65bh694h67fh66dh5bdhd7h568h697h58bh5b4h59fh694h584h656q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6zkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(cc29c300-b515-47d8-9326-1839ed7772b4): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:30:04 crc kubenswrapper[4874]: E0217 17:30:04.698110 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:30:05 crc kubenswrapper[4874]: I0217 17:30:05.234386 4874 generic.go:334] "Generic (PLEG): container finished" podID="26e0e255-e381-420a-84b8-20de42cf2b75" containerID="122d7eb20d58517c45931e96314ac4f72b66c08894740c035b807c08feaa9b94" exitCode=1 Feb 17 17:30:05 crc kubenswrapper[4874]: I0217 17:30:05.397306 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tr7wf/crc-debug-s6cpm" Feb 17 17:30:05 crc kubenswrapper[4874]: I0217 17:30:05.468156 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/26e0e255-e381-420a-84b8-20de42cf2b75-host\") pod \"26e0e255-e381-420a-84b8-20de42cf2b75\" (UID: \"26e0e255-e381-420a-84b8-20de42cf2b75\") " Feb 17 17:30:05 crc kubenswrapper[4874]: I0217 17:30:05.468253 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26e0e255-e381-420a-84b8-20de42cf2b75-host" (OuterVolumeSpecName: "host") pod "26e0e255-e381-420a-84b8-20de42cf2b75" (UID: "26e0e255-e381-420a-84b8-20de42cf2b75"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:30:05 crc kubenswrapper[4874]: I0217 17:30:05.468627 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2d8v\" (UniqueName: \"kubernetes.io/projected/26e0e255-e381-420a-84b8-20de42cf2b75-kube-api-access-w2d8v\") pod \"26e0e255-e381-420a-84b8-20de42cf2b75\" (UID: \"26e0e255-e381-420a-84b8-20de42cf2b75\") " Feb 17 17:30:05 crc kubenswrapper[4874]: I0217 17:30:05.469377 4874 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/26e0e255-e381-420a-84b8-20de42cf2b75-host\") on node \"crc\" DevicePath \"\"" Feb 17 17:30:05 crc kubenswrapper[4874]: I0217 17:30:05.486671 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26e0e255-e381-420a-84b8-20de42cf2b75-kube-api-access-w2d8v" (OuterVolumeSpecName: "kube-api-access-w2d8v") pod "26e0e255-e381-420a-84b8-20de42cf2b75" (UID: "26e0e255-e381-420a-84b8-20de42cf2b75"). InnerVolumeSpecName "kube-api-access-w2d8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:30:05 crc kubenswrapper[4874]: I0217 17:30:05.572055 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2d8v\" (UniqueName: \"kubernetes.io/projected/26e0e255-e381-420a-84b8-20de42cf2b75-kube-api-access-w2d8v\") on node \"crc\" DevicePath \"\"" Feb 17 17:30:06 crc kubenswrapper[4874]: I0217 17:30:06.246583 4874 scope.go:117] "RemoveContainer" containerID="122d7eb20d58517c45931e96314ac4f72b66c08894740c035b807c08feaa9b94" Feb 17 17:30:06 crc kubenswrapper[4874]: I0217 17:30:06.246619 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tr7wf/crc-debug-s6cpm" Feb 17 17:30:06 crc kubenswrapper[4874]: I0217 17:30:06.470818 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26e0e255-e381-420a-84b8-20de42cf2b75" path="/var/lib/kubelet/pods/26e0e255-e381-420a-84b8-20de42cf2b75/volumes" Feb 17 17:30:06 crc kubenswrapper[4874]: I0217 17:30:06.472329 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2fd6a42-869b-4b7a-a3df-76e5f43b0da2" path="/var/lib/kubelet/pods/a2fd6a42-869b-4b7a-a3df-76e5f43b0da2/volumes" Feb 17 17:30:15 crc kubenswrapper[4874]: E0217 17:30:15.460819 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:30:16 crc kubenswrapper[4874]: E0217 17:30:16.459735 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:30:18 crc kubenswrapper[4874]: I0217 17:30:18.706751 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rqg5p"] Feb 17 17:30:18 crc kubenswrapper[4874]: E0217 17:30:18.707458 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26e0e255-e381-420a-84b8-20de42cf2b75" containerName="container-00" Feb 17 17:30:18 crc kubenswrapper[4874]: I0217 17:30:18.707469 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="26e0e255-e381-420a-84b8-20de42cf2b75" containerName="container-00" Feb 17 17:30:18 crc kubenswrapper[4874]: E0217 
17:30:18.707481 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6381f49-ff8d-445e-98cf-6df8c11322b2" containerName="collect-profiles" Feb 17 17:30:18 crc kubenswrapper[4874]: I0217 17:30:18.707486 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6381f49-ff8d-445e-98cf-6df8c11322b2" containerName="collect-profiles" Feb 17 17:30:18 crc kubenswrapper[4874]: I0217 17:30:18.707674 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="26e0e255-e381-420a-84b8-20de42cf2b75" containerName="container-00" Feb 17 17:30:18 crc kubenswrapper[4874]: I0217 17:30:18.707687 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6381f49-ff8d-445e-98cf-6df8c11322b2" containerName="collect-profiles" Feb 17 17:30:18 crc kubenswrapper[4874]: I0217 17:30:18.709363 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rqg5p" Feb 17 17:30:18 crc kubenswrapper[4874]: I0217 17:30:18.726400 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rqg5p"] Feb 17 17:30:18 crc kubenswrapper[4874]: I0217 17:30:18.792700 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6f8add2-c4c8-49fe-b9d7-54cd148986e3-utilities\") pod \"redhat-marketplace-rqg5p\" (UID: \"e6f8add2-c4c8-49fe-b9d7-54cd148986e3\") " pod="openshift-marketplace/redhat-marketplace-rqg5p" Feb 17 17:30:18 crc kubenswrapper[4874]: I0217 17:30:18.792952 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8b6s\" (UniqueName: \"kubernetes.io/projected/e6f8add2-c4c8-49fe-b9d7-54cd148986e3-kube-api-access-b8b6s\") pod \"redhat-marketplace-rqg5p\" (UID: \"e6f8add2-c4c8-49fe-b9d7-54cd148986e3\") " pod="openshift-marketplace/redhat-marketplace-rqg5p" Feb 17 17:30:18 crc kubenswrapper[4874]: I0217 
17:30:18.793037 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6f8add2-c4c8-49fe-b9d7-54cd148986e3-catalog-content\") pod \"redhat-marketplace-rqg5p\" (UID: \"e6f8add2-c4c8-49fe-b9d7-54cd148986e3\") " pod="openshift-marketplace/redhat-marketplace-rqg5p" Feb 17 17:30:18 crc kubenswrapper[4874]: I0217 17:30:18.895493 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6f8add2-c4c8-49fe-b9d7-54cd148986e3-utilities\") pod \"redhat-marketplace-rqg5p\" (UID: \"e6f8add2-c4c8-49fe-b9d7-54cd148986e3\") " pod="openshift-marketplace/redhat-marketplace-rqg5p" Feb 17 17:30:18 crc kubenswrapper[4874]: I0217 17:30:18.895725 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8b6s\" (UniqueName: \"kubernetes.io/projected/e6f8add2-c4c8-49fe-b9d7-54cd148986e3-kube-api-access-b8b6s\") pod \"redhat-marketplace-rqg5p\" (UID: \"e6f8add2-c4c8-49fe-b9d7-54cd148986e3\") " pod="openshift-marketplace/redhat-marketplace-rqg5p" Feb 17 17:30:18 crc kubenswrapper[4874]: I0217 17:30:18.895795 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6f8add2-c4c8-49fe-b9d7-54cd148986e3-catalog-content\") pod \"redhat-marketplace-rqg5p\" (UID: \"e6f8add2-c4c8-49fe-b9d7-54cd148986e3\") " pod="openshift-marketplace/redhat-marketplace-rqg5p" Feb 17 17:30:18 crc kubenswrapper[4874]: I0217 17:30:18.896307 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6f8add2-c4c8-49fe-b9d7-54cd148986e3-catalog-content\") pod \"redhat-marketplace-rqg5p\" (UID: \"e6f8add2-c4c8-49fe-b9d7-54cd148986e3\") " pod="openshift-marketplace/redhat-marketplace-rqg5p" Feb 17 17:30:18 crc kubenswrapper[4874]: I0217 
17:30:18.896523 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6f8add2-c4c8-49fe-b9d7-54cd148986e3-utilities\") pod \"redhat-marketplace-rqg5p\" (UID: \"e6f8add2-c4c8-49fe-b9d7-54cd148986e3\") " pod="openshift-marketplace/redhat-marketplace-rqg5p" Feb 17 17:30:18 crc kubenswrapper[4874]: I0217 17:30:18.918851 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8b6s\" (UniqueName: \"kubernetes.io/projected/e6f8add2-c4c8-49fe-b9d7-54cd148986e3-kube-api-access-b8b6s\") pod \"redhat-marketplace-rqg5p\" (UID: \"e6f8add2-c4c8-49fe-b9d7-54cd148986e3\") " pod="openshift-marketplace/redhat-marketplace-rqg5p" Feb 17 17:30:19 crc kubenswrapper[4874]: I0217 17:30:19.033942 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rqg5p" Feb 17 17:30:19 crc kubenswrapper[4874]: I0217 17:30:19.576688 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rqg5p"] Feb 17 17:30:19 crc kubenswrapper[4874]: W0217 17:30:19.576790 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6f8add2_c4c8_49fe_b9d7_54cd148986e3.slice/crio-db3a9e81d0739bff38fbb43cb8147306c18ca3208cf2b642e92e7cb5a4a8e2f2 WatchSource:0}: Error finding container db3a9e81d0739bff38fbb43cb8147306c18ca3208cf2b642e92e7cb5a4a8e2f2: Status 404 returned error can't find the container with id db3a9e81d0739bff38fbb43cb8147306c18ca3208cf2b642e92e7cb5a4a8e2f2 Feb 17 17:30:20 crc kubenswrapper[4874]: I0217 17:30:20.380671 4874 generic.go:334] "Generic (PLEG): container finished" podID="e6f8add2-c4c8-49fe-b9d7-54cd148986e3" containerID="a24cd83bfeeba99e233c2baf92f20ddfdcc21eaebca1f440db0de8eef93703e9" exitCode=0 Feb 17 17:30:20 crc kubenswrapper[4874]: I0217 17:30:20.380725 4874 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-rqg5p" event={"ID":"e6f8add2-c4c8-49fe-b9d7-54cd148986e3","Type":"ContainerDied","Data":"a24cd83bfeeba99e233c2baf92f20ddfdcc21eaebca1f440db0de8eef93703e9"} Feb 17 17:30:20 crc kubenswrapper[4874]: I0217 17:30:20.380948 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rqg5p" event={"ID":"e6f8add2-c4c8-49fe-b9d7-54cd148986e3","Type":"ContainerStarted","Data":"db3a9e81d0739bff38fbb43cb8147306c18ca3208cf2b642e92e7cb5a4a8e2f2"} Feb 17 17:30:22 crc kubenswrapper[4874]: I0217 17:30:22.403685 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rqg5p" event={"ID":"e6f8add2-c4c8-49fe-b9d7-54cd148986e3","Type":"ContainerStarted","Data":"6ccd80f3c2b905de53395dbce4888a71a0468371f88b9c8d0c4b994117d14516"} Feb 17 17:30:24 crc kubenswrapper[4874]: I0217 17:30:24.427744 4874 generic.go:334] "Generic (PLEG): container finished" podID="e6f8add2-c4c8-49fe-b9d7-54cd148986e3" containerID="6ccd80f3c2b905de53395dbce4888a71a0468371f88b9c8d0c4b994117d14516" exitCode=0 Feb 17 17:30:24 crc kubenswrapper[4874]: I0217 17:30:24.427832 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rqg5p" event={"ID":"e6f8add2-c4c8-49fe-b9d7-54cd148986e3","Type":"ContainerDied","Data":"6ccd80f3c2b905de53395dbce4888a71a0468371f88b9c8d0c4b994117d14516"} Feb 17 17:30:25 crc kubenswrapper[4874]: I0217 17:30:25.440598 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rqg5p" event={"ID":"e6f8add2-c4c8-49fe-b9d7-54cd148986e3","Type":"ContainerStarted","Data":"58d873eb9c3219c5e2724bd5bbb121dd94d9c55994349a28c8002eb11fb6c71f"} Feb 17 17:30:25 crc kubenswrapper[4874]: I0217 17:30:25.467717 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rqg5p" podStartSLOduration=3.032220126 
podStartE2EDuration="7.467695556s" podCreationTimestamp="2026-02-17 17:30:18 +0000 UTC" firstStartedPulling="2026-02-17 17:30:20.382936933 +0000 UTC m=+5230.677325494" lastFinishedPulling="2026-02-17 17:30:24.818412373 +0000 UTC m=+5235.112800924" observedRunningTime="2026-02-17 17:30:25.458729485 +0000 UTC m=+5235.753118056" watchObservedRunningTime="2026-02-17 17:30:25.467695556 +0000 UTC m=+5235.762084117" Feb 17 17:30:27 crc kubenswrapper[4874]: I0217 17:30:27.724550 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:30:27 crc kubenswrapper[4874]: I0217 17:30:27.724967 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:30:29 crc kubenswrapper[4874]: I0217 17:30:29.035341 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rqg5p" Feb 17 17:30:29 crc kubenswrapper[4874]: I0217 17:30:29.035622 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rqg5p" Feb 17 17:30:29 crc kubenswrapper[4874]: I0217 17:30:29.100593 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rqg5p" Feb 17 17:30:30 crc kubenswrapper[4874]: E0217 17:30:30.467663 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:30:31 crc kubenswrapper[4874]: E0217 17:30:31.552533 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 17:30:31 crc kubenswrapper[4874]: E0217 17:30:31.552874 4874 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 17:30:31 crc kubenswrapper[4874]: E0217 17:30:31.553019 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtgnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-ddhb8_openstack(122736d5-78f5-42dc-b6ab-343724bac19d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:30:31 crc kubenswrapper[4874]: E0217 17:30:31.554549 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:30:39 crc kubenswrapper[4874]: I0217 17:30:39.093519 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rqg5p" Feb 17 17:30:39 crc kubenswrapper[4874]: I0217 17:30:39.142568 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rqg5p"] Feb 17 17:30:39 crc kubenswrapper[4874]: I0217 17:30:39.593112 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rqg5p" podUID="e6f8add2-c4c8-49fe-b9d7-54cd148986e3" containerName="registry-server" containerID="cri-o://58d873eb9c3219c5e2724bd5bbb121dd94d9c55994349a28c8002eb11fb6c71f" gracePeriod=2 Feb 17 17:30:40 crc kubenswrapper[4874]: I0217 17:30:40.130414 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rqg5p" Feb 17 17:30:40 crc kubenswrapper[4874]: I0217 17:30:40.209112 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6f8add2-c4c8-49fe-b9d7-54cd148986e3-catalog-content\") pod \"e6f8add2-c4c8-49fe-b9d7-54cd148986e3\" (UID: \"e6f8add2-c4c8-49fe-b9d7-54cd148986e3\") " Feb 17 17:30:40 crc kubenswrapper[4874]: I0217 17:30:40.209185 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8b6s\" (UniqueName: \"kubernetes.io/projected/e6f8add2-c4c8-49fe-b9d7-54cd148986e3-kube-api-access-b8b6s\") pod \"e6f8add2-c4c8-49fe-b9d7-54cd148986e3\" (UID: \"e6f8add2-c4c8-49fe-b9d7-54cd148986e3\") " Feb 17 17:30:40 crc kubenswrapper[4874]: I0217 17:30:40.209333 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e6f8add2-c4c8-49fe-b9d7-54cd148986e3-utilities\") pod \"e6f8add2-c4c8-49fe-b9d7-54cd148986e3\" (UID: \"e6f8add2-c4c8-49fe-b9d7-54cd148986e3\") " Feb 17 17:30:40 crc kubenswrapper[4874]: I0217 17:30:40.210424 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6f8add2-c4c8-49fe-b9d7-54cd148986e3-utilities" (OuterVolumeSpecName: "utilities") pod "e6f8add2-c4c8-49fe-b9d7-54cd148986e3" (UID: "e6f8add2-c4c8-49fe-b9d7-54cd148986e3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:30:40 crc kubenswrapper[4874]: I0217 17:30:40.215295 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6f8add2-c4c8-49fe-b9d7-54cd148986e3-kube-api-access-b8b6s" (OuterVolumeSpecName: "kube-api-access-b8b6s") pod "e6f8add2-c4c8-49fe-b9d7-54cd148986e3" (UID: "e6f8add2-c4c8-49fe-b9d7-54cd148986e3"). InnerVolumeSpecName "kube-api-access-b8b6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:30:40 crc kubenswrapper[4874]: I0217 17:30:40.242169 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6f8add2-c4c8-49fe-b9d7-54cd148986e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6f8add2-c4c8-49fe-b9d7-54cd148986e3" (UID: "e6f8add2-c4c8-49fe-b9d7-54cd148986e3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:30:40 crc kubenswrapper[4874]: I0217 17:30:40.311995 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6f8add2-c4c8-49fe-b9d7-54cd148986e3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:30:40 crc kubenswrapper[4874]: I0217 17:30:40.312026 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8b6s\" (UniqueName: \"kubernetes.io/projected/e6f8add2-c4c8-49fe-b9d7-54cd148986e3-kube-api-access-b8b6s\") on node \"crc\" DevicePath \"\"" Feb 17 17:30:40 crc kubenswrapper[4874]: I0217 17:30:40.312040 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6f8add2-c4c8-49fe-b9d7-54cd148986e3-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:30:40 crc kubenswrapper[4874]: I0217 17:30:40.603642 4874 generic.go:334] "Generic (PLEG): container finished" podID="e6f8add2-c4c8-49fe-b9d7-54cd148986e3" containerID="58d873eb9c3219c5e2724bd5bbb121dd94d9c55994349a28c8002eb11fb6c71f" exitCode=0 Feb 17 17:30:40 crc kubenswrapper[4874]: I0217 17:30:40.603706 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rqg5p" event={"ID":"e6f8add2-c4c8-49fe-b9d7-54cd148986e3","Type":"ContainerDied","Data":"58d873eb9c3219c5e2724bd5bbb121dd94d9c55994349a28c8002eb11fb6c71f"} Feb 17 17:30:40 crc kubenswrapper[4874]: I0217 17:30:40.603741 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rqg5p" event={"ID":"e6f8add2-c4c8-49fe-b9d7-54cd148986e3","Type":"ContainerDied","Data":"db3a9e81d0739bff38fbb43cb8147306c18ca3208cf2b642e92e7cb5a4a8e2f2"} Feb 17 17:30:40 crc kubenswrapper[4874]: I0217 17:30:40.603785 4874 scope.go:117] "RemoveContainer" containerID="58d873eb9c3219c5e2724bd5bbb121dd94d9c55994349a28c8002eb11fb6c71f" Feb 17 17:30:40 crc kubenswrapper[4874]: I0217 
17:30:40.603974 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rqg5p" Feb 17 17:30:40 crc kubenswrapper[4874]: I0217 17:30:40.630906 4874 scope.go:117] "RemoveContainer" containerID="6ccd80f3c2b905de53395dbce4888a71a0468371f88b9c8d0c4b994117d14516" Feb 17 17:30:40 crc kubenswrapper[4874]: I0217 17:30:40.636649 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rqg5p"] Feb 17 17:30:40 crc kubenswrapper[4874]: I0217 17:30:40.651568 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rqg5p"] Feb 17 17:30:40 crc kubenswrapper[4874]: I0217 17:30:40.664155 4874 scope.go:117] "RemoveContainer" containerID="a24cd83bfeeba99e233c2baf92f20ddfdcc21eaebca1f440db0de8eef93703e9" Feb 17 17:30:40 crc kubenswrapper[4874]: I0217 17:30:40.727945 4874 scope.go:117] "RemoveContainer" containerID="58d873eb9c3219c5e2724bd5bbb121dd94d9c55994349a28c8002eb11fb6c71f" Feb 17 17:30:40 crc kubenswrapper[4874]: E0217 17:30:40.730519 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58d873eb9c3219c5e2724bd5bbb121dd94d9c55994349a28c8002eb11fb6c71f\": container with ID starting with 58d873eb9c3219c5e2724bd5bbb121dd94d9c55994349a28c8002eb11fb6c71f not found: ID does not exist" containerID="58d873eb9c3219c5e2724bd5bbb121dd94d9c55994349a28c8002eb11fb6c71f" Feb 17 17:30:40 crc kubenswrapper[4874]: I0217 17:30:40.730553 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58d873eb9c3219c5e2724bd5bbb121dd94d9c55994349a28c8002eb11fb6c71f"} err="failed to get container status \"58d873eb9c3219c5e2724bd5bbb121dd94d9c55994349a28c8002eb11fb6c71f\": rpc error: code = NotFound desc = could not find container \"58d873eb9c3219c5e2724bd5bbb121dd94d9c55994349a28c8002eb11fb6c71f\": container with ID starting with 
58d873eb9c3219c5e2724bd5bbb121dd94d9c55994349a28c8002eb11fb6c71f not found: ID does not exist" Feb 17 17:30:40 crc kubenswrapper[4874]: I0217 17:30:40.730578 4874 scope.go:117] "RemoveContainer" containerID="6ccd80f3c2b905de53395dbce4888a71a0468371f88b9c8d0c4b994117d14516" Feb 17 17:30:40 crc kubenswrapper[4874]: E0217 17:30:40.733068 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ccd80f3c2b905de53395dbce4888a71a0468371f88b9c8d0c4b994117d14516\": container with ID starting with 6ccd80f3c2b905de53395dbce4888a71a0468371f88b9c8d0c4b994117d14516 not found: ID does not exist" containerID="6ccd80f3c2b905de53395dbce4888a71a0468371f88b9c8d0c4b994117d14516" Feb 17 17:30:40 crc kubenswrapper[4874]: I0217 17:30:40.733126 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ccd80f3c2b905de53395dbce4888a71a0468371f88b9c8d0c4b994117d14516"} err="failed to get container status \"6ccd80f3c2b905de53395dbce4888a71a0468371f88b9c8d0c4b994117d14516\": rpc error: code = NotFound desc = could not find container \"6ccd80f3c2b905de53395dbce4888a71a0468371f88b9c8d0c4b994117d14516\": container with ID starting with 6ccd80f3c2b905de53395dbce4888a71a0468371f88b9c8d0c4b994117d14516 not found: ID does not exist" Feb 17 17:30:40 crc kubenswrapper[4874]: I0217 17:30:40.733155 4874 scope.go:117] "RemoveContainer" containerID="a24cd83bfeeba99e233c2baf92f20ddfdcc21eaebca1f440db0de8eef93703e9" Feb 17 17:30:40 crc kubenswrapper[4874]: E0217 17:30:40.733920 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a24cd83bfeeba99e233c2baf92f20ddfdcc21eaebca1f440db0de8eef93703e9\": container with ID starting with a24cd83bfeeba99e233c2baf92f20ddfdcc21eaebca1f440db0de8eef93703e9 not found: ID does not exist" containerID="a24cd83bfeeba99e233c2baf92f20ddfdcc21eaebca1f440db0de8eef93703e9" Feb 17 17:30:40 crc 
kubenswrapper[4874]: I0217 17:30:40.733953 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a24cd83bfeeba99e233c2baf92f20ddfdcc21eaebca1f440db0de8eef93703e9"} err="failed to get container status \"a24cd83bfeeba99e233c2baf92f20ddfdcc21eaebca1f440db0de8eef93703e9\": rpc error: code = NotFound desc = could not find container \"a24cd83bfeeba99e233c2baf92f20ddfdcc21eaebca1f440db0de8eef93703e9\": container with ID starting with a24cd83bfeeba99e233c2baf92f20ddfdcc21eaebca1f440db0de8eef93703e9 not found: ID does not exist" Feb 17 17:30:42 crc kubenswrapper[4874]: I0217 17:30:42.470531 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6f8add2-c4c8-49fe-b9d7-54cd148986e3" path="/var/lib/kubelet/pods/e6f8add2-c4c8-49fe-b9d7-54cd148986e3/volumes" Feb 17 17:30:45 crc kubenswrapper[4874]: E0217 17:30:45.460500 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:30:47 crc kubenswrapper[4874]: E0217 17:30:47.459263 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:30:51 crc kubenswrapper[4874]: I0217 17:30:51.934423 4874 scope.go:117] "RemoveContainer" containerID="936006a49a7e5abf8a7fefc1ec5fa5e4fe14abc46bb56cf8ca642097cd240cb0" Feb 17 17:30:51 crc kubenswrapper[4874]: I0217 17:30:51.960620 4874 scope.go:117] "RemoveContainer" containerID="7091c3b22939079f364e73fbc6b128d6a71b7d8f8c0251470bc2d6e80a2527ac" Feb 17 
17:30:57 crc kubenswrapper[4874]: I0217 17:30:57.724451 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:30:57 crc kubenswrapper[4874]: I0217 17:30:57.725048 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:30:57 crc kubenswrapper[4874]: I0217 17:30:57.725120 4874 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" Feb 17 17:30:57 crc kubenswrapper[4874]: I0217 17:30:57.726027 4874 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3d9d0d53ffed45cc67b402a7bca9a663e3eb985cbaeceb3013558f63c7a901f3"} pod="openshift-machine-config-operator/machine-config-daemon-cccdg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 17:30:57 crc kubenswrapper[4874]: I0217 17:30:57.726180 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" containerID="cri-o://3d9d0d53ffed45cc67b402a7bca9a663e3eb985cbaeceb3013558f63c7a901f3" gracePeriod=600 Feb 17 17:30:57 crc kubenswrapper[4874]: E0217 17:30:57.855819 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:30:58 crc kubenswrapper[4874]: E0217 17:30:58.459210 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:30:58 crc kubenswrapper[4874]: E0217 17:30:58.459152 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:30:58 crc kubenswrapper[4874]: I0217 17:30:58.802130 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerDied","Data":"3d9d0d53ffed45cc67b402a7bca9a663e3eb985cbaeceb3013558f63c7a901f3"} Feb 17 17:30:58 crc kubenswrapper[4874]: I0217 17:30:58.802182 4874 scope.go:117] "RemoveContainer" containerID="383ca1f126863bbf4daedf6695c81049c8a1fa867632ee9a9a951570814663c2" Feb 17 17:30:58 crc kubenswrapper[4874]: I0217 17:30:58.802129 4874 generic.go:334] "Generic (PLEG): container finished" podID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerID="3d9d0d53ffed45cc67b402a7bca9a663e3eb985cbaeceb3013558f63c7a901f3" exitCode=0 Feb 17 17:30:58 crc kubenswrapper[4874]: I0217 17:30:58.803099 4874 scope.go:117] "RemoveContainer" 
containerID="3d9d0d53ffed45cc67b402a7bca9a663e3eb985cbaeceb3013558f63c7a901f3" Feb 17 17:30:58 crc kubenswrapper[4874]: E0217 17:30:58.803475 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:31:12 crc kubenswrapper[4874]: I0217 17:31:12.457622 4874 scope.go:117] "RemoveContainer" containerID="3d9d0d53ffed45cc67b402a7bca9a663e3eb985cbaeceb3013558f63c7a901f3" Feb 17 17:31:12 crc kubenswrapper[4874]: E0217 17:31:12.459341 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:31:12 crc kubenswrapper[4874]: E0217 17:31:12.459525 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:31:13 crc kubenswrapper[4874]: E0217 17:31:13.460807 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:31:15 crc kubenswrapper[4874]: I0217 17:31:15.629975 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_25c79f51-4cde-46f5-b188-618b368f0ccb/aodh-listener/0.log" Feb 17 17:31:15 crc kubenswrapper[4874]: I0217 17:31:15.646994 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_25c79f51-4cde-46f5-b188-618b368f0ccb/aodh-api/0.log" Feb 17 17:31:15 crc kubenswrapper[4874]: I0217 17:31:15.688771 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_25c79f51-4cde-46f5-b188-618b368f0ccb/aodh-evaluator/0.log" Feb 17 17:31:15 crc kubenswrapper[4874]: I0217 17:31:15.787028 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_25c79f51-4cde-46f5-b188-618b368f0ccb/aodh-notifier/0.log" Feb 17 17:31:15 crc kubenswrapper[4874]: I0217 17:31:15.876865 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-f6fdb9858-5k876_d5e09eec-baf3-4a8f-8d05-95ee094a6c18/barbican-api/0.log" Feb 17 17:31:15 crc kubenswrapper[4874]: I0217 17:31:15.886287 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-f6fdb9858-5k876_d5e09eec-baf3-4a8f-8d05-95ee094a6c18/barbican-api-log/0.log" Feb 17 17:31:16 crc kubenswrapper[4874]: I0217 17:31:16.087611 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-956d89d4-jvtqm_9bf086f0-8328-440d-b607-66c3db544871/barbican-keystone-listener/0.log" Feb 17 17:31:16 crc kubenswrapper[4874]: I0217 17:31:16.120049 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-956d89d4-jvtqm_9bf086f0-8328-440d-b607-66c3db544871/barbican-keystone-listener-log/0.log" Feb 17 17:31:16 crc kubenswrapper[4874]: I0217 17:31:16.223337 4874 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-c7cb8b4bf-4w9ct_955ecefb-40d6-42e2-acd6-133f1ecf251d/barbican-worker/0.log" Feb 17 17:31:16 crc kubenswrapper[4874]: I0217 17:31:16.262982 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-c7cb8b4bf-4w9ct_955ecefb-40d6-42e2-acd6-133f1ecf251d/barbican-worker-log/0.log" Feb 17 17:31:16 crc kubenswrapper[4874]: I0217 17:31:16.380390 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-tgzkz_e27c106f-e640-4b2b-aab8-785a2bcb1624/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:31:16 crc kubenswrapper[4874]: I0217 17:31:16.593757 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cc29c300-b515-47d8-9326-1839ed7772b4/proxy-httpd/0.log" Feb 17 17:31:16 crc kubenswrapper[4874]: I0217 17:31:16.599280 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cc29c300-b515-47d8-9326-1839ed7772b4/ceilometer-notification-agent/0.log" Feb 17 17:31:16 crc kubenswrapper[4874]: I0217 17:31:16.652158 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cc29c300-b515-47d8-9326-1839ed7772b4/sg-core/0.log" Feb 17 17:31:16 crc kubenswrapper[4874]: I0217 17:31:16.799945 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_57c836de-513c-4aca-956a-73dc02dafce8/cinder-api-log/0.log" Feb 17 17:31:16 crc kubenswrapper[4874]: I0217 17:31:16.862563 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_57c836de-513c-4aca-956a-73dc02dafce8/cinder-api/0.log" Feb 17 17:31:17 crc kubenswrapper[4874]: I0217 17:31:17.452568 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_76e7d623-3e9d-43fb-9413-5bb3b1b2aa33/cinder-scheduler/0.log" Feb 17 17:31:17 crc kubenswrapper[4874]: I0217 17:31:17.531270 4874 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_cinder-scheduler-0_76e7d623-3e9d-43fb-9413-5bb3b1b2aa33/probe/0.log" Feb 17 17:31:17 crc kubenswrapper[4874]: I0217 17:31:17.637069 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-4tg48_e1e1acdf-f464-4e6a-bfac-4109880de91a/init/0.log" Feb 17 17:31:17 crc kubenswrapper[4874]: I0217 17:31:17.824302 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-4tg48_e1e1acdf-f464-4e6a-bfac-4109880de91a/dnsmasq-dns/0.log" Feb 17 17:31:17 crc kubenswrapper[4874]: I0217 17:31:17.839917 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-4tg48_e1e1acdf-f464-4e6a-bfac-4109880de91a/init/0.log" Feb 17 17:31:17 crc kubenswrapper[4874]: I0217 17:31:17.867643 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-4hs65_4fc34eca-3b52-4650-9c09-3c17befa87d5/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:31:18 crc kubenswrapper[4874]: I0217 17:31:18.037854 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-585pl_8331d1e2-3512-4f93-a2aa-482f566f53c9/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:31:18 crc kubenswrapper[4874]: I0217 17:31:18.109400 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-bhxk9_7489caf0-d625-4d40-829f-34558a80ad7a/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:31:18 crc kubenswrapper[4874]: I0217 17:31:18.269580 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-ldw5c_22a145b3-1fbd-43be-9c83-9a04d4506430/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:31:18 crc kubenswrapper[4874]: I0217 17:31:18.373904 4874 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-pfn67_2dcc01fd-e02a-4d5e-b2f3-6f641b39a35c/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:31:18 crc kubenswrapper[4874]: I0217 17:31:18.516236 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-rspxw_0d027e77-e298-4ee6-bad9-b12332cc3a81/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:31:18 crc kubenswrapper[4874]: I0217 17:31:18.587845 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-xr64p_d7c983ae-0062-4104-b0b7-ee35f90aa93d/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:31:18 crc kubenswrapper[4874]: I0217 17:31:18.799252 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_60de1cc2-3d8e-445b-b882-14385d944a1b/glance-httpd/0.log" Feb 17 17:31:18 crc kubenswrapper[4874]: I0217 17:31:18.801418 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_60de1cc2-3d8e-445b-b882-14385d944a1b/glance-log/0.log" Feb 17 17:31:18 crc kubenswrapper[4874]: I0217 17:31:18.970017 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_aa0847fc-7f03-4cfe-a655-7abf45945a22/glance-httpd/0.log" Feb 17 17:31:19 crc kubenswrapper[4874]: I0217 17:31:19.027934 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_aa0847fc-7f03-4cfe-a655-7abf45945a22/glance-log/0.log" Feb 17 17:31:19 crc kubenswrapper[4874]: I0217 17:31:19.532355 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-59c46f7ffb-7jfhs_fa32dc95-3565-4a8a-82e7-97b9eaea1b32/heat-engine/0.log" Feb 17 17:31:19 crc kubenswrapper[4874]: I0217 17:31:19.667750 4874 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_heat-api-684fb5885c-hr4m8_7cefe1b7-0d9c-4594-8368-15179b55592b/heat-api/0.log" Feb 17 17:31:19 crc kubenswrapper[4874]: I0217 17:31:19.769810 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-77f9b8d4df-5ptz7_b8fa43d7-df5d-4fbe-97f4-95b8f20d5b71/heat-cfnapi/0.log" Feb 17 17:31:19 crc kubenswrapper[4874]: I0217 17:31:19.828047 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-567c8c9c6c-dn66l_4a0c2f24-e449-460d-8bcd-269d5ee4994f/keystone-api/0.log" Feb 17 17:31:19 crc kubenswrapper[4874]: I0217 17:31:19.910348 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29522461-cnc94_61ff92e4-19df-453b-a07f-d3d953b6bacd/keystone-cron/0.log" Feb 17 17:31:20 crc kubenswrapper[4874]: I0217 17:31:20.000238 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_a5372d7e-96f7-49b9-84e2-8ef268e00405/kube-state-metrics/0.log" Feb 17 17:31:20 crc kubenswrapper[4874]: I0217 17:31:20.214865 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_2533da2e-d4db-450e-b6f6-d7bcaca25353/mysqld-exporter/0.log" Feb 17 17:31:20 crc kubenswrapper[4874]: I0217 17:31:20.356301 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6fccd89f8f-mbtlk_185d59da-e2da-4eec-b721-03f1d211281b/neutron-api/0.log" Feb 17 17:31:20 crc kubenswrapper[4874]: I0217 17:31:20.432520 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6fccd89f8f-mbtlk_185d59da-e2da-4eec-b721-03f1d211281b/neutron-httpd/0.log" Feb 17 17:31:20 crc kubenswrapper[4874]: I0217 17:31:20.735960 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_482dd97c-5a3b-4da4-98e4-f89c00605948/nova-api-log/0.log" Feb 17 17:31:20 crc kubenswrapper[4874]: I0217 17:31:20.830361 4874 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-conductor-0_de9261d2-3f0c-40dc-bd1f-07c6216ea317/nova-cell0-conductor-conductor/0.log" Feb 17 17:31:21 crc kubenswrapper[4874]: I0217 17:31:21.053534 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_23118d30-bfc5-46b8-aaf6-b14b263104c9/nova-cell1-conductor-conductor/0.log" Feb 17 17:31:21 crc kubenswrapper[4874]: I0217 17:31:21.054935 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_482dd97c-5a3b-4da4-98e4-f89c00605948/nova-api-api/0.log" Feb 17 17:31:21 crc kubenswrapper[4874]: I0217 17:31:21.203620 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_c485f7e2-b876-413e-99c2-f67cd5ecd092/nova-cell1-novncproxy-novncproxy/0.log" Feb 17 17:31:21 crc kubenswrapper[4874]: I0217 17:31:21.385330 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_5c35948a-7c46-4998-9156-2fdedcaac5e9/nova-metadata-log/0.log" Feb 17 17:31:21 crc kubenswrapper[4874]: I0217 17:31:21.620019 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_2e2092ca-d8a4-49b2-a40f-5a487ebcdab0/nova-scheduler-scheduler/0.log" Feb 17 17:31:21 crc kubenswrapper[4874]: I0217 17:31:21.662980 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9535b3e4-e580-4939-9f0f-f57e7b3946c6/mysql-bootstrap/0.log" Feb 17 17:31:21 crc kubenswrapper[4874]: I0217 17:31:21.837648 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9535b3e4-e580-4939-9f0f-f57e7b3946c6/galera/0.log" Feb 17 17:31:21 crc kubenswrapper[4874]: I0217 17:31:21.941118 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9535b3e4-e580-4939-9f0f-f57e7b3946c6/mysql-bootstrap/0.log" Feb 17 17:31:22 crc kubenswrapper[4874]: I0217 17:31:22.074282 4874 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_openstack-galera-0_c99a20bb-50d6-4806-ac2a-2e2276d561ef/mysql-bootstrap/0.log" Feb 17 17:31:22 crc kubenswrapper[4874]: I0217 17:31:22.207504 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c99a20bb-50d6-4806-ac2a-2e2276d561ef/mysql-bootstrap/0.log" Feb 17 17:31:22 crc kubenswrapper[4874]: I0217 17:31:22.241865 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c99a20bb-50d6-4806-ac2a-2e2276d561ef/galera/0.log" Feb 17 17:31:22 crc kubenswrapper[4874]: I0217 17:31:22.432119 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_ad509da0-c1a5-4dee-828c-783853098ee5/openstackclient/0.log" Feb 17 17:31:22 crc kubenswrapper[4874]: I0217 17:31:22.558230 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-rb8lr_8a7189b3-10c5-4fe6-99c9-f3ec64fe159b/openstack-network-exporter/0.log" Feb 17 17:31:22 crc kubenswrapper[4874]: I0217 17:31:22.741497 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-pzc25_80cf5dc3-e4d1-4d7c-b598-36a083080a66/ovsdb-server-init/0.log" Feb 17 17:31:22 crc kubenswrapper[4874]: I0217 17:31:22.908944 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-pzc25_80cf5dc3-e4d1-4d7c-b598-36a083080a66/ovsdb-server-init/0.log" Feb 17 17:31:22 crc kubenswrapper[4874]: I0217 17:31:22.948320 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-pzc25_80cf5dc3-e4d1-4d7c-b598-36a083080a66/ovs-vswitchd/0.log" Feb 17 17:31:23 crc kubenswrapper[4874]: I0217 17:31:23.010700 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-pzc25_80cf5dc3-e4d1-4d7c-b598-36a083080a66/ovsdb-server/0.log" Feb 17 17:31:23 crc kubenswrapper[4874]: I0217 17:31:23.192930 4874 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-tpgc2_4132e8e3-7498-4df0-9d6d-2dd7c096218a/ovn-controller/0.log" Feb 17 17:31:23 crc kubenswrapper[4874]: I0217 17:31:23.221630 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_5c35948a-7c46-4998-9156-2fdedcaac5e9/nova-metadata-metadata/0.log" Feb 17 17:31:23 crc kubenswrapper[4874]: I0217 17:31:23.317366 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_48dbc25d-e454-452c-9912-f08d7569ecfa/openstack-network-exporter/0.log" Feb 17 17:31:23 crc kubenswrapper[4874]: I0217 17:31:23.394442 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_48dbc25d-e454-452c-9912-f08d7569ecfa/ovn-northd/0.log" Feb 17 17:31:23 crc kubenswrapper[4874]: I0217 17:31:23.495576 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_f95c3b85-c546-47d7-9b75-7577455ab464/openstack-network-exporter/0.log" Feb 17 17:31:23 crc kubenswrapper[4874]: I0217 17:31:23.571813 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_f95c3b85-c546-47d7-9b75-7577455ab464/ovsdbserver-nb/0.log" Feb 17 17:31:23 crc kubenswrapper[4874]: I0217 17:31:23.725235 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c2b366ca-9778-45e9-8d34-5708857a85cc/openstack-network-exporter/0.log" Feb 17 17:31:23 crc kubenswrapper[4874]: I0217 17:31:23.735426 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c2b366ca-9778-45e9-8d34-5708857a85cc/ovsdbserver-sb/0.log" Feb 17 17:31:23 crc kubenswrapper[4874]: I0217 17:31:23.990289 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-ffffff886-rsf5g_f9e74f73-675f-46bf-8a70-cd1101995839/placement-api/0.log" Feb 17 17:31:24 crc kubenswrapper[4874]: I0217 17:31:24.049356 4874 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_prometheus-metric-storage-0_8e1e887d-4629-4a8a-812f-4f6f2d101249/init-config-reloader/0.log" Feb 17 17:31:24 crc kubenswrapper[4874]: I0217 17:31:24.065039 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-ffffff886-rsf5g_f9e74f73-675f-46bf-8a70-cd1101995839/placement-log/0.log" Feb 17 17:31:24 crc kubenswrapper[4874]: I0217 17:31:24.304739 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8e1e887d-4629-4a8a-812f-4f6f2d101249/prometheus/0.log" Feb 17 17:31:24 crc kubenswrapper[4874]: I0217 17:31:24.317816 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8e1e887d-4629-4a8a-812f-4f6f2d101249/config-reloader/0.log" Feb 17 17:31:24 crc kubenswrapper[4874]: I0217 17:31:24.339071 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8e1e887d-4629-4a8a-812f-4f6f2d101249/init-config-reloader/0.log" Feb 17 17:31:24 crc kubenswrapper[4874]: I0217 17:31:24.459836 4874 scope.go:117] "RemoveContainer" containerID="3d9d0d53ffed45cc67b402a7bca9a663e3eb985cbaeceb3013558f63c7a901f3" Feb 17 17:31:24 crc kubenswrapper[4874]: E0217 17:31:24.460060 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:31:24 crc kubenswrapper[4874]: I0217 17:31:24.593521 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8e1e887d-4629-4a8a-812f-4f6f2d101249/thanos-sidecar/0.log" Feb 17 17:31:24 crc kubenswrapper[4874]: I0217 17:31:24.729622 4874 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_efb51498-72fd-4e39-8bdd-dda0b1abe44a/setup-container/0.log" Feb 17 17:31:24 crc kubenswrapper[4874]: I0217 17:31:24.953506 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7d60895a-5f07-4e03-8f98-dc92137c65d4/setup-container/0.log" Feb 17 17:31:25 crc kubenswrapper[4874]: I0217 17:31:25.018296 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_efb51498-72fd-4e39-8bdd-dda0b1abe44a/setup-container/0.log" Feb 17 17:31:25 crc kubenswrapper[4874]: I0217 17:31:25.027748 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_efb51498-72fd-4e39-8bdd-dda0b1abe44a/rabbitmq/0.log" Feb 17 17:31:25 crc kubenswrapper[4874]: I0217 17:31:25.217567 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7d60895a-5f07-4e03-8f98-dc92137c65d4/setup-container/0.log" Feb 17 17:31:25 crc kubenswrapper[4874]: I0217 17:31:25.304362 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7d60895a-5f07-4e03-8f98-dc92137c65d4/rabbitmq/0.log" Feb 17 17:31:25 crc kubenswrapper[4874]: I0217 17:31:25.331016 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_aafddb04-57ad-45b6-8a34-30898a8bafff/setup-container/0.log" Feb 17 17:31:25 crc kubenswrapper[4874]: I0217 17:31:25.592338 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_850560f1-d14c-45d2-9526-e7aa266d3427/setup-container/0.log" Feb 17 17:31:25 crc kubenswrapper[4874]: I0217 17:31:25.596910 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_aafddb04-57ad-45b6-8a34-30898a8bafff/rabbitmq/0.log" Feb 17 17:31:25 crc kubenswrapper[4874]: I0217 17:31:25.599485 4874 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-1_aafddb04-57ad-45b6-8a34-30898a8bafff/setup-container/0.log" Feb 17 17:31:25 crc kubenswrapper[4874]: I0217 17:31:25.853029 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_850560f1-d14c-45d2-9526-e7aa266d3427/setup-container/0.log" Feb 17 17:31:25 crc kubenswrapper[4874]: I0217 17:31:25.888010 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_850560f1-d14c-45d2-9526-e7aa266d3427/rabbitmq/0.log" Feb 17 17:31:25 crc kubenswrapper[4874]: I0217 17:31:25.940916 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-8hf76_eee3af83-dd4f-4fa9-b1d9-f3e197174816/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:31:26 crc kubenswrapper[4874]: I0217 17:31:26.054638 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-lr7xb_ebd0edb1-118f-426b-96ef-72db8d6c2b90/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:31:26 crc kubenswrapper[4874]: I0217 17:31:26.258448 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-558b9bddc9-tks6t_86d966a5-1838-4efd-bc2e-f19189a61789/proxy-server/0.log" Feb 17 17:31:26 crc kubenswrapper[4874]: I0217 17:31:26.415598 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-vj2t6_99f3c575-721c-4e73-a4e3-e5497e1a3201/swift-ring-rebalance/0.log" Feb 17 17:31:26 crc kubenswrapper[4874]: I0217 17:31:26.418439 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-558b9bddc9-tks6t_86d966a5-1838-4efd-bc2e-f19189a61789/proxy-httpd/0.log" Feb 17 17:31:26 crc kubenswrapper[4874]: E0217 17:31:26.460364 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:31:26 crc kubenswrapper[4874]: I0217 17:31:26.624896 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7fda3013-2526-48c1-ba34-9e8d1bb33e9f/account-auditor/0.log" Feb 17 17:31:26 crc kubenswrapper[4874]: I0217 17:31:26.648115 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7fda3013-2526-48c1-ba34-9e8d1bb33e9f/account-reaper/0.log" Feb 17 17:31:26 crc kubenswrapper[4874]: I0217 17:31:26.734998 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7fda3013-2526-48c1-ba34-9e8d1bb33e9f/account-replicator/0.log" Feb 17 17:31:26 crc kubenswrapper[4874]: I0217 17:31:26.775005 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7fda3013-2526-48c1-ba34-9e8d1bb33e9f/account-server/0.log" Feb 17 17:31:26 crc kubenswrapper[4874]: I0217 17:31:26.849131 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7fda3013-2526-48c1-ba34-9e8d1bb33e9f/container-auditor/0.log" Feb 17 17:31:26 crc kubenswrapper[4874]: I0217 17:31:26.962502 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7fda3013-2526-48c1-ba34-9e8d1bb33e9f/container-server/0.log" Feb 17 17:31:26 crc kubenswrapper[4874]: I0217 17:31:26.980733 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7fda3013-2526-48c1-ba34-9e8d1bb33e9f/container-replicator/0.log" Feb 17 17:31:27 crc kubenswrapper[4874]: I0217 17:31:27.020367 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7fda3013-2526-48c1-ba34-9e8d1bb33e9f/container-updater/0.log" Feb 17 17:31:27 crc kubenswrapper[4874]: I0217 17:31:27.136952 4874 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_7fda3013-2526-48c1-ba34-9e8d1bb33e9f/object-auditor/0.log" Feb 17 17:31:27 crc kubenswrapper[4874]: I0217 17:31:27.221701 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7fda3013-2526-48c1-ba34-9e8d1bb33e9f/object-expirer/0.log" Feb 17 17:31:27 crc kubenswrapper[4874]: I0217 17:31:27.266310 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7fda3013-2526-48c1-ba34-9e8d1bb33e9f/object-server/0.log" Feb 17 17:31:27 crc kubenswrapper[4874]: I0217 17:31:27.274048 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7fda3013-2526-48c1-ba34-9e8d1bb33e9f/object-replicator/0.log" Feb 17 17:31:27 crc kubenswrapper[4874]: I0217 17:31:27.367709 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7fda3013-2526-48c1-ba34-9e8d1bb33e9f/object-updater/0.log" Feb 17 17:31:27 crc kubenswrapper[4874]: I0217 17:31:27.447850 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7fda3013-2526-48c1-ba34-9e8d1bb33e9f/swift-recon-cron/0.log" Feb 17 17:31:27 crc kubenswrapper[4874]: I0217 17:31:27.485725 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7fda3013-2526-48c1-ba34-9e8d1bb33e9f/rsync/0.log" Feb 17 17:31:28 crc kubenswrapper[4874]: E0217 17:31:28.459009 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:31:33 crc kubenswrapper[4874]: I0217 17:31:33.561036 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_9093ae6e-39ee-47ca-b0d2-944be9ce4971/memcached/0.log" Feb 17 17:31:38 crc 
kubenswrapper[4874]: I0217 17:31:38.457858 4874 scope.go:117] "RemoveContainer" containerID="3d9d0d53ffed45cc67b402a7bca9a663e3eb985cbaeceb3013558f63c7a901f3" Feb 17 17:31:38 crc kubenswrapper[4874]: E0217 17:31:38.460978 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:31:39 crc kubenswrapper[4874]: E0217 17:31:39.460031 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:31:42 crc kubenswrapper[4874]: E0217 17:31:42.459958 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:31:50 crc kubenswrapper[4874]: E0217 17:31:50.472632 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:31:51 crc kubenswrapper[4874]: I0217 17:31:51.456949 4874 scope.go:117] "RemoveContainer" 
containerID="3d9d0d53ffed45cc67b402a7bca9a663e3eb985cbaeceb3013558f63c7a901f3" Feb 17 17:31:51 crc kubenswrapper[4874]: E0217 17:31:51.457622 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:31:52 crc kubenswrapper[4874]: I0217 17:31:52.124665 4874 scope.go:117] "RemoveContainer" containerID="77f368bc72d0a53dccb7fa1b7b92101a4e0bc851e1c090bf3688c535d30f4e77" Feb 17 17:31:52 crc kubenswrapper[4874]: I0217 17:31:52.755218 4874 scope.go:117] "RemoveContainer" containerID="09b01ae278a2cc3b07cbbae28811174c626600d3f834169b0760ff2dc30e3827" Feb 17 17:31:53 crc kubenswrapper[4874]: E0217 17:31:53.458658 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:31:57 crc kubenswrapper[4874]: I0217 17:31:57.943792 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933d456m_d8622c37-b6c8-4b87-a9b6-30e7ee12af20/util/0.log" Feb 17 17:31:58 crc kubenswrapper[4874]: I0217 17:31:58.450312 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933d456m_d8622c37-b6c8-4b87-a9b6-30e7ee12af20/pull/0.log" Feb 17 17:31:58 crc kubenswrapper[4874]: I0217 17:31:58.456877 4874 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933d456m_d8622c37-b6c8-4b87-a9b6-30e7ee12af20/util/0.log" Feb 17 17:31:58 crc kubenswrapper[4874]: I0217 17:31:58.497845 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933d456m_d8622c37-b6c8-4b87-a9b6-30e7ee12af20/pull/0.log" Feb 17 17:31:58 crc kubenswrapper[4874]: I0217 17:31:58.651863 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933d456m_d8622c37-b6c8-4b87-a9b6-30e7ee12af20/util/0.log" Feb 17 17:31:58 crc kubenswrapper[4874]: I0217 17:31:58.652588 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933d456m_d8622c37-b6c8-4b87-a9b6-30e7ee12af20/extract/0.log" Feb 17 17:31:58 crc kubenswrapper[4874]: I0217 17:31:58.661598 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933d456m_d8622c37-b6c8-4b87-a9b6-30e7ee12af20/pull/0.log" Feb 17 17:31:59 crc kubenswrapper[4874]: I0217 17:31:59.107755 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-jn9cr_3f567ee8-98ac-44f3-bba2-4dfd8b514ab2/manager/0.log" Feb 17 17:31:59 crc kubenswrapper[4874]: I0217 17:31:59.471737 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-w7lcj_aa81f594-f3c2-43d6-ac9b-6a51e36e8d99/manager/0.log" Feb 17 17:31:59 crc kubenswrapper[4874]: I0217 17:31:59.885599 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-qtzmv_6873354d-473a-4bf1-b8d3-f728e268bd36/manager/0.log" Feb 17 17:32:00 crc kubenswrapper[4874]: I0217 
17:32:00.416485 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-87l78_fd0e6a7f-7fe4-4790-a3a8-d973386bec13/manager/0.log" Feb 17 17:32:01 crc kubenswrapper[4874]: I0217 17:32:01.531953 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-lv9qv_b87e1102-63f8-4f2f-9376-dab7745fb4b2/manager/0.log" Feb 17 17:32:01 crc kubenswrapper[4874]: I0217 17:32:01.821929 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-qldzr_1127a6be-ce6c-498b-bd8c-7a131b575321/manager/0.log" Feb 17 17:32:02 crc kubenswrapper[4874]: I0217 17:32:02.157096 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-l28fh_95fa7fde-cb3d-4b2d-ac02-f58440c35c7b/manager/0.log" Feb 17 17:32:02 crc kubenswrapper[4874]: I0217 17:32:02.187557 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-xgkkx_db6537c6-cc88-4848-a428-ad573290cc02/manager/0.log" Feb 17 17:32:02 crc kubenswrapper[4874]: I0217 17:32:02.205234 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-tzsfx_62899d98-d8f9-4669-90f1-d4e9e02280aa/manager/0.log" Feb 17 17:32:02 crc kubenswrapper[4874]: E0217 17:32:02.461376 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:32:02 crc kubenswrapper[4874]: I0217 17:32:02.461681 4874 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-dbzhs_25da1eba-df74-4c90-90be-bb79065c4557/manager/0.log" Feb 17 17:32:02 crc kubenswrapper[4874]: I0217 17:32:02.577505 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-f5m2c_3603fb35-facf-4a38-8fa1-ce1efa386258/manager/0.log" Feb 17 17:32:03 crc kubenswrapper[4874]: I0217 17:32:03.317972 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-jgrrk_9fdb9bed-5948-4441-a15b-34df4351b88c/manager/0.log" Feb 17 17:32:03 crc kubenswrapper[4874]: I0217 17:32:03.680037 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cvvz57_01ab2d32-b155-4460-ace9-60d38242218b/manager/0.log" Feb 17 17:32:04 crc kubenswrapper[4874]: I0217 17:32:04.226909 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5b4d8b9dd-d9wb8_c19a7a72-ad6e-499e-ba9e-2b58b8ca2241/operator/0.log" Feb 17 17:32:04 crc kubenswrapper[4874]: I0217 17:32:04.373349 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-j424d_0c982d3a-d8b0-44d9-82c2-d031d9e02af9/registry-server/0.log" Feb 17 17:32:04 crc kubenswrapper[4874]: I0217 17:32:04.687760 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-jz5hv_e8e4298f-581a-4fdf-8347-088b955fb6ba/manager/0.log" Feb 17 17:32:04 crc kubenswrapper[4874]: I0217 17:32:04.915096 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-qhh9h_2ea7f298-dafe-4448-8ffe-a2194f127c12/manager/0.log" Feb 17 17:32:05 crc kubenswrapper[4874]: I0217 17:32:05.147907 4874 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-vxglc_91060dec-59cf-4cec-90e3-e14e10456304/operator/0.log" Feb 17 17:32:05 crc kubenswrapper[4874]: I0217 17:32:05.442444 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-hk542_73bebada-8e5b-4539-b609-2b64e42fdc35/manager/0.log" Feb 17 17:32:05 crc kubenswrapper[4874]: I0217 17:32:05.456998 4874 scope.go:117] "RemoveContainer" containerID="3d9d0d53ffed45cc67b402a7bca9a663e3eb985cbaeceb3013558f63c7a901f3" Feb 17 17:32:05 crc kubenswrapper[4874]: E0217 17:32:05.457334 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:32:05 crc kubenswrapper[4874]: I0217 17:32:05.924344 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-wq4gk_bd668570-bbe9-4494-a20d-fd49f91dc656/manager/0.log" Feb 17 17:32:06 crc kubenswrapper[4874]: I0217 17:32:06.024754 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-66554dbdcf-njv9r_bb7619d6-0f36-44aa-82f3-5375a806ae94/manager/0.log" Feb 17 17:32:06 crc kubenswrapper[4874]: I0217 17:32:06.201838 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5d7c6cd576-cm8t8_e9edd0a5-e9e7-4604-83e9-466212623115/manager/0.log" Feb 17 17:32:06 crc kubenswrapper[4874]: I0217 17:32:06.245526 4874 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-m7xvs_005d51f3-7446-454e-81ae-3cc46edc3aec/manager/0.log" Feb 17 17:32:06 crc kubenswrapper[4874]: E0217 17:32:06.463065 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:32:06 crc kubenswrapper[4874]: I0217 17:32:06.628466 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-75swn_f9447f8b-df93-499d-87cd-4ccb1894c291/manager/0.log" Feb 17 17:32:12 crc kubenswrapper[4874]: I0217 17:32:12.577544 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-gxjgl_c4c6b874-8781-4030-a651-54feaeed2634/manager/0.log" Feb 17 17:32:15 crc kubenswrapper[4874]: E0217 17:32:15.458858 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:32:17 crc kubenswrapper[4874]: I0217 17:32:17.457938 4874 scope.go:117] "RemoveContainer" containerID="3d9d0d53ffed45cc67b402a7bca9a663e3eb985cbaeceb3013558f63c7a901f3" Feb 17 17:32:17 crc kubenswrapper[4874]: E0217 17:32:17.458621 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:32:21 crc kubenswrapper[4874]: E0217 17:32:21.459437 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:32:27 crc kubenswrapper[4874]: E0217 17:32:27.459696 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:32:30 crc kubenswrapper[4874]: I0217 17:32:30.407848 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-mfmbh_52e48cb6-3564-41f7-8030-f54482605065/control-plane-machine-set-operator/0.log" Feb 17 17:32:30 crc kubenswrapper[4874]: I0217 17:32:30.614900 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-rxw56_70fceb62-f510-491f-a04c-0a2efd5439f7/kube-rbac-proxy/0.log" Feb 17 17:32:30 crc kubenswrapper[4874]: I0217 17:32:30.660062 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-rxw56_70fceb62-f510-491f-a04c-0a2efd5439f7/machine-api-operator/0.log" Feb 17 17:32:31 crc kubenswrapper[4874]: I0217 17:32:31.457538 4874 scope.go:117] "RemoveContainer" containerID="3d9d0d53ffed45cc67b402a7bca9a663e3eb985cbaeceb3013558f63c7a901f3" Feb 17 17:32:31 
crc kubenswrapper[4874]: E0217 17:32:31.458223 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:32:35 crc kubenswrapper[4874]: E0217 17:32:35.461052 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:32:41 crc kubenswrapper[4874]: E0217 17:32:41.459772 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:32:45 crc kubenswrapper[4874]: I0217 17:32:45.933501 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-6cb67_94ae53c8-1b30-492d-945b-e194492623fd/cert-manager-controller/0.log" Feb 17 17:32:46 crc kubenswrapper[4874]: I0217 17:32:46.162032 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-b2f79_0843178e-0046-48d7-9f4b-44ac0deb0f89/cert-manager-cainjector/0.log" Feb 17 17:32:46 crc kubenswrapper[4874]: I0217 17:32:46.220497 4874 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-dzcwt_ff3840bb-f767-4d1f-ae3f-7e39a0c94ef3/cert-manager-webhook/0.log" Feb 17 17:32:46 crc kubenswrapper[4874]: I0217 17:32:46.457235 4874 scope.go:117] "RemoveContainer" containerID="3d9d0d53ffed45cc67b402a7bca9a663e3eb985cbaeceb3013558f63c7a901f3" Feb 17 17:32:46 crc kubenswrapper[4874]: E0217 17:32:46.457833 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:32:47 crc kubenswrapper[4874]: E0217 17:32:47.459556 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:32:56 crc kubenswrapper[4874]: E0217 17:32:56.459663 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:32:59 crc kubenswrapper[4874]: I0217 17:32:59.975496 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-p7nj4_1c6543ed-090e-4099-931a-d82e47304681/nmstate-console-plugin/0.log" Feb 17 17:33:00 crc kubenswrapper[4874]: I0217 17:33:00.106833 4874 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-handler-njd2b_098dd26d-2e61-473f-bbe8-47be863f5b45/nmstate-handler/0.log" Feb 17 17:33:00 crc kubenswrapper[4874]: I0217 17:33:00.170188 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-gnnhx_1dd205b6-4b48-4e5c-8731-d4322d8eba49/kube-rbac-proxy/0.log" Feb 17 17:33:00 crc kubenswrapper[4874]: I0217 17:33:00.224923 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-gnnhx_1dd205b6-4b48-4e5c-8731-d4322d8eba49/nmstate-metrics/0.log" Feb 17 17:33:00 crc kubenswrapper[4874]: I0217 17:33:00.452988 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-92lgl_16090473-6fc6-45cd-a577-ed241b1e7c60/nmstate-operator/0.log" Feb 17 17:33:00 crc kubenswrapper[4874]: I0217 17:33:00.649570 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-kq2cl_1b31ad9f-374d-495a-85a8-161930a8dc23/nmstate-webhook/0.log" Feb 17 17:33:01 crc kubenswrapper[4874]: I0217 17:33:01.493001 4874 scope.go:117] "RemoveContainer" containerID="3d9d0d53ffed45cc67b402a7bca9a663e3eb985cbaeceb3013558f63c7a901f3" Feb 17 17:33:01 crc kubenswrapper[4874]: E0217 17:33:01.493402 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:33:01 crc kubenswrapper[4874]: E0217 17:33:01.498847 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:33:11 crc kubenswrapper[4874]: E0217 17:33:11.460032 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:33:14 crc kubenswrapper[4874]: E0217 17:33:14.460270 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:33:15 crc kubenswrapper[4874]: I0217 17:33:15.200980 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-745c8c7958-q4zx9_e55e7660-9281-484b-b0b8-a39236b8e692/kube-rbac-proxy/0.log" Feb 17 17:33:15 crc kubenswrapper[4874]: I0217 17:33:15.284364 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-745c8c7958-q4zx9_e55e7660-9281-484b-b0b8-a39236b8e692/manager/0.log" Feb 17 17:33:16 crc kubenswrapper[4874]: I0217 17:33:16.464330 4874 scope.go:117] "RemoveContainer" containerID="3d9d0d53ffed45cc67b402a7bca9a663e3eb985cbaeceb3013558f63c7a901f3" Feb 17 17:33:16 crc kubenswrapper[4874]: E0217 17:33:16.465198 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:33:25 crc kubenswrapper[4874]: E0217 17:33:25.460993 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:33:25 crc kubenswrapper[4874]: E0217 17:33:25.461125 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:33:28 crc kubenswrapper[4874]: I0217 17:33:28.457587 4874 scope.go:117] "RemoveContainer" containerID="3d9d0d53ffed45cc67b402a7bca9a663e3eb985cbaeceb3013558f63c7a901f3" Feb 17 17:33:28 crc kubenswrapper[4874]: E0217 17:33:28.459034 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:33:29 crc kubenswrapper[4874]: I0217 17:33:29.192144 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-fjkwc_cf7f0be2-b792-4603-a97c-53a2f335acee/prometheus-operator/0.log" Feb 17 17:33:29 crc kubenswrapper[4874]: I0217 
17:33:29.413131 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg_5178b00a-11f3-48c6-96be-459a7b26be82/prometheus-operator-admission-webhook/0.log" Feb 17 17:33:29 crc kubenswrapper[4874]: I0217 17:33:29.440596 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-658d76db8d-jld5z_7a893bee-81e5-480e-8414-43a823e768fd/prometheus-operator-admission-webhook/0.log" Feb 17 17:33:29 crc kubenswrapper[4874]: I0217 17:33:29.621644 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-mpn47_4771c857-23aa-4647-a63d-d7a1977ffaa4/observability-ui-dashboards/0.log" Feb 17 17:33:29 crc kubenswrapper[4874]: I0217 17:33:29.636126 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-2b8tl_660c5439-82eb-4696-9df3-7968e680b5a9/operator/0.log" Feb 17 17:33:29 crc kubenswrapper[4874]: I0217 17:33:29.805191 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-b988z_a3d284b8-a322-4ce7-9a33-c82f3adafeb1/perses-operator/0.log" Feb 17 17:33:36 crc kubenswrapper[4874]: E0217 17:33:36.461120 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:33:39 crc kubenswrapper[4874]: E0217 17:33:39.460431 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:33:42 crc kubenswrapper[4874]: I0217 17:33:42.457822 4874 scope.go:117] "RemoveContainer" containerID="3d9d0d53ffed45cc67b402a7bca9a663e3eb985cbaeceb3013558f63c7a901f3" Feb 17 17:33:42 crc kubenswrapper[4874]: E0217 17:33:42.458453 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:33:45 crc kubenswrapper[4874]: I0217 17:33:45.164041 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-c769fd969-5gdhf_3145a5e0-7e93-479a-b4f2-c7082813a0bf/cluster-logging-operator/0.log" Feb 17 17:33:45 crc kubenswrapper[4874]: I0217 17:33:45.381963 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-qfbx5_57b97733-9959-41a8-b1bc-a8dae79c1892/collector/0.log" Feb 17 17:33:45 crc kubenswrapper[4874]: I0217 17:33:45.477905 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_e5f54572-957d-428e-9c13-0f45aa7dc6e5/loki-compactor/0.log" Feb 17 17:33:45 crc kubenswrapper[4874]: I0217 17:33:45.687972 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5d5548c9f5-b69gh_d60c9d45-c4f3-4702-a479-c98e249e2eb4/loki-distributor/0.log" Feb 17 17:33:45 crc kubenswrapper[4874]: I0217 17:33:45.705955 4874 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_logging-loki-gateway-595f794c55-tvvjh_7ac7d0ae-7505-401d-a9cc-49094832b8c7/gateway/0.log" Feb 17 17:33:45 crc kubenswrapper[4874]: I0217 17:33:45.763471 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-595f794c55-tvvjh_7ac7d0ae-7505-401d-a9cc-49094832b8c7/opa/0.log" Feb 17 17:33:45 crc kubenswrapper[4874]: I0217 17:33:45.874358 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-595f794c55-vbzmt_641c0952-226b-4374-b247-f7e6a67f6cc8/opa/0.log" Feb 17 17:33:45 crc kubenswrapper[4874]: I0217 17:33:45.894608 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-595f794c55-vbzmt_641c0952-226b-4374-b247-f7e6a67f6cc8/gateway/0.log" Feb 17 17:33:46 crc kubenswrapper[4874]: I0217 17:33:46.029658 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_5215c52d-dda2-4bf6-bf99-dffdcc73f289/loki-index-gateway/0.log" Feb 17 17:33:46 crc kubenswrapper[4874]: I0217 17:33:46.145735 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_f0d776a8-9060-4156-931f-fcbe335a8488/loki-ingester/0.log" Feb 17 17:33:46 crc kubenswrapper[4874]: I0217 17:33:46.232330 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76bf7b6d45-qkpmn_bc6aa0d6-36c1-4a1b-b9a4-2a42abd2aee5/loki-querier/0.log" Feb 17 17:33:46 crc kubenswrapper[4874]: I0217 17:33:46.349027 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-6d6859c548-2p4zr_c2549768-f32d-4e6e-91f7-9ba31ddd5998/loki-query-frontend/0.log" Feb 17 17:33:48 crc kubenswrapper[4874]: E0217 17:33:48.470717 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:33:51 crc kubenswrapper[4874]: E0217 17:33:51.459794 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:33:53 crc kubenswrapper[4874]: I0217 17:33:53.457749 4874 scope.go:117] "RemoveContainer" containerID="3d9d0d53ffed45cc67b402a7bca9a663e3eb985cbaeceb3013558f63c7a901f3" Feb 17 17:33:53 crc kubenswrapper[4874]: E0217 17:33:53.459260 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:34:02 crc kubenswrapper[4874]: I0217 17:34:02.197120 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4xgq6_feb8be07-358f-49c3-a27c-53054e353a5d/cp-frr-files/0.log" Feb 17 17:34:02 crc kubenswrapper[4874]: I0217 17:34:02.344569 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-n9vxs_73488a2d-521a-4ccd-a9ea-aa905b51e302/kube-rbac-proxy/0.log" Feb 17 17:34:02 crc kubenswrapper[4874]: I0217 17:34:02.365328 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4xgq6_feb8be07-358f-49c3-a27c-53054e353a5d/cp-reloader/0.log" Feb 17 17:34:02 crc kubenswrapper[4874]: E0217 17:34:02.459363 4874 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:34:02 crc kubenswrapper[4874]: I0217 17:34:02.539683 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4xgq6_feb8be07-358f-49c3-a27c-53054e353a5d/cp-reloader/0.log" Feb 17 17:34:02 crc kubenswrapper[4874]: I0217 17:34:02.552113 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4xgq6_feb8be07-358f-49c3-a27c-53054e353a5d/cp-frr-files/0.log" Feb 17 17:34:02 crc kubenswrapper[4874]: I0217 17:34:02.586020 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4xgq6_feb8be07-358f-49c3-a27c-53054e353a5d/cp-metrics/0.log" Feb 17 17:34:02 crc kubenswrapper[4874]: I0217 17:34:02.621175 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-n9vxs_73488a2d-521a-4ccd-a9ea-aa905b51e302/controller/0.log" Feb 17 17:34:02 crc kubenswrapper[4874]: I0217 17:34:02.831373 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4xgq6_feb8be07-358f-49c3-a27c-53054e353a5d/cp-metrics/0.log" Feb 17 17:34:02 crc kubenswrapper[4874]: I0217 17:34:02.841908 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4xgq6_feb8be07-358f-49c3-a27c-53054e353a5d/cp-frr-files/0.log" Feb 17 17:34:02 crc kubenswrapper[4874]: I0217 17:34:02.846937 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4xgq6_feb8be07-358f-49c3-a27c-53054e353a5d/cp-reloader/0.log" Feb 17 17:34:02 crc kubenswrapper[4874]: I0217 17:34:02.849229 4874 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-4xgq6_feb8be07-358f-49c3-a27c-53054e353a5d/cp-metrics/0.log" Feb 17 17:34:03 crc kubenswrapper[4874]: I0217 17:34:03.102310 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4xgq6_feb8be07-358f-49c3-a27c-53054e353a5d/cp-frr-files/0.log" Feb 17 17:34:03 crc kubenswrapper[4874]: I0217 17:34:03.102823 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4xgq6_feb8be07-358f-49c3-a27c-53054e353a5d/cp-metrics/0.log" Feb 17 17:34:03 crc kubenswrapper[4874]: I0217 17:34:03.127292 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4xgq6_feb8be07-358f-49c3-a27c-53054e353a5d/cp-reloader/0.log" Feb 17 17:34:03 crc kubenswrapper[4874]: I0217 17:34:03.138385 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4xgq6_feb8be07-358f-49c3-a27c-53054e353a5d/controller/0.log" Feb 17 17:34:03 crc kubenswrapper[4874]: I0217 17:34:03.308443 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4xgq6_feb8be07-358f-49c3-a27c-53054e353a5d/kube-rbac-proxy/0.log" Feb 17 17:34:03 crc kubenswrapper[4874]: I0217 17:34:03.334475 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4xgq6_feb8be07-358f-49c3-a27c-53054e353a5d/frr-metrics/0.log" Feb 17 17:34:03 crc kubenswrapper[4874]: I0217 17:34:03.352314 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4xgq6_feb8be07-358f-49c3-a27c-53054e353a5d/kube-rbac-proxy-frr/0.log" Feb 17 17:34:03 crc kubenswrapper[4874]: I0217 17:34:03.539525 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4xgq6_feb8be07-358f-49c3-a27c-53054e353a5d/reloader/0.log" Feb 17 17:34:03 crc kubenswrapper[4874]: I0217 17:34:03.635992 4874 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-6pq7d_abf374ec-8d79-48ac-ac9b-9cf5c81d0adf/frr-k8s-webhook-server/0.log" Feb 17 17:34:03 crc kubenswrapper[4874]: I0217 17:34:03.813863 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-b5c586d76-ztwj8_1a2dc1cd-626b-4d07-8260-cbfd9dadfa93/manager/0.log" Feb 17 17:34:03 crc kubenswrapper[4874]: I0217 17:34:03.982813 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-756c97bbfd-pv5c9_8177791a-4dee-4a43-9868-c06e52c2b536/webhook-server/0.log" Feb 17 17:34:04 crc kubenswrapper[4874]: I0217 17:34:04.201884 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-bbthf_1b81504e-be8e-4fbd-a5c6-c48ee4dea72b/kube-rbac-proxy/0.log" Feb 17 17:34:05 crc kubenswrapper[4874]: I0217 17:34:05.009860 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-bbthf_1b81504e-be8e-4fbd-a5c6-c48ee4dea72b/speaker/0.log" Feb 17 17:34:05 crc kubenswrapper[4874]: I0217 17:34:05.207947 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4xgq6_feb8be07-358f-49c3-a27c-53054e353a5d/frr/0.log" Feb 17 17:34:05 crc kubenswrapper[4874]: E0217 17:34:05.460021 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:34:08 crc kubenswrapper[4874]: I0217 17:34:08.457683 4874 scope.go:117] "RemoveContainer" containerID="3d9d0d53ffed45cc67b402a7bca9a663e3eb985cbaeceb3013558f63c7a901f3" Feb 17 17:34:08 crc kubenswrapper[4874]: E0217 17:34:08.460547 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:34:11 crc kubenswrapper[4874]: I0217 17:34:11.992794 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f5pj5"] Feb 17 17:34:11 crc kubenswrapper[4874]: E0217 17:34:11.993788 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6f8add2-c4c8-49fe-b9d7-54cd148986e3" containerName="extract-utilities" Feb 17 17:34:11 crc kubenswrapper[4874]: I0217 17:34:11.993802 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6f8add2-c4c8-49fe-b9d7-54cd148986e3" containerName="extract-utilities" Feb 17 17:34:11 crc kubenswrapper[4874]: E0217 17:34:11.993814 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6f8add2-c4c8-49fe-b9d7-54cd148986e3" containerName="registry-server" Feb 17 17:34:11 crc kubenswrapper[4874]: I0217 17:34:11.993821 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6f8add2-c4c8-49fe-b9d7-54cd148986e3" containerName="registry-server" Feb 17 17:34:11 crc kubenswrapper[4874]: E0217 17:34:11.993861 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6f8add2-c4c8-49fe-b9d7-54cd148986e3" containerName="extract-content" Feb 17 17:34:11 crc kubenswrapper[4874]: I0217 17:34:11.993867 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6f8add2-c4c8-49fe-b9d7-54cd148986e3" containerName="extract-content" Feb 17 17:34:11 crc kubenswrapper[4874]: I0217 17:34:11.994107 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6f8add2-c4c8-49fe-b9d7-54cd148986e3" containerName="registry-server" Feb 17 17:34:11 crc kubenswrapper[4874]: I0217 17:34:11.995684 4874 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f5pj5" Feb 17 17:34:12 crc kubenswrapper[4874]: I0217 17:34:12.001706 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lgzs\" (UniqueName: \"kubernetes.io/projected/8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc-kube-api-access-8lgzs\") pod \"certified-operators-f5pj5\" (UID: \"8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc\") " pod="openshift-marketplace/certified-operators-f5pj5" Feb 17 17:34:12 crc kubenswrapper[4874]: I0217 17:34:12.001793 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc-catalog-content\") pod \"certified-operators-f5pj5\" (UID: \"8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc\") " pod="openshift-marketplace/certified-operators-f5pj5" Feb 17 17:34:12 crc kubenswrapper[4874]: I0217 17:34:12.001992 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc-utilities\") pod \"certified-operators-f5pj5\" (UID: \"8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc\") " pod="openshift-marketplace/certified-operators-f5pj5" Feb 17 17:34:12 crc kubenswrapper[4874]: I0217 17:34:12.005149 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f5pj5"] Feb 17 17:34:12 crc kubenswrapper[4874]: I0217 17:34:12.104327 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lgzs\" (UniqueName: \"kubernetes.io/projected/8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc-kube-api-access-8lgzs\") pod \"certified-operators-f5pj5\" (UID: \"8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc\") " pod="openshift-marketplace/certified-operators-f5pj5" Feb 17 17:34:12 crc kubenswrapper[4874]: 
I0217 17:34:12.104479 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc-catalog-content\") pod \"certified-operators-f5pj5\" (UID: \"8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc\") " pod="openshift-marketplace/certified-operators-f5pj5" Feb 17 17:34:12 crc kubenswrapper[4874]: I0217 17:34:12.104537 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc-utilities\") pod \"certified-operators-f5pj5\" (UID: \"8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc\") " pod="openshift-marketplace/certified-operators-f5pj5" Feb 17 17:34:12 crc kubenswrapper[4874]: I0217 17:34:12.105200 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc-utilities\") pod \"certified-operators-f5pj5\" (UID: \"8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc\") " pod="openshift-marketplace/certified-operators-f5pj5" Feb 17 17:34:12 crc kubenswrapper[4874]: I0217 17:34:12.105292 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc-catalog-content\") pod \"certified-operators-f5pj5\" (UID: \"8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc\") " pod="openshift-marketplace/certified-operators-f5pj5" Feb 17 17:34:12 crc kubenswrapper[4874]: I0217 17:34:12.458060 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lgzs\" (UniqueName: \"kubernetes.io/projected/8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc-kube-api-access-8lgzs\") pod \"certified-operators-f5pj5\" (UID: \"8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc\") " pod="openshift-marketplace/certified-operators-f5pj5" Feb 17 17:34:12 crc kubenswrapper[4874]: I0217 17:34:12.626529 4874 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f5pj5" Feb 17 17:34:13 crc kubenswrapper[4874]: I0217 17:34:13.114182 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f5pj5"] Feb 17 17:34:13 crc kubenswrapper[4874]: I0217 17:34:13.258589 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5pj5" event={"ID":"8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc","Type":"ContainerStarted","Data":"dc55a15df2aef502f53c2d924bb1b9755458ce17b7bf9bb3680c58bfc89a4b35"} Feb 17 17:34:14 crc kubenswrapper[4874]: I0217 17:34:14.273045 4874 generic.go:334] "Generic (PLEG): container finished" podID="8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc" containerID="f453fc770752fc551b468d45152415c70b6ec1a6182390413eb8c0ce10ac2dbe" exitCode=0 Feb 17 17:34:14 crc kubenswrapper[4874]: I0217 17:34:14.273128 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5pj5" event={"ID":"8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc","Type":"ContainerDied","Data":"f453fc770752fc551b468d45152415c70b6ec1a6182390413eb8c0ce10ac2dbe"} Feb 17 17:34:14 crc kubenswrapper[4874]: E0217 17:34:14.459210 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:34:15 crc kubenswrapper[4874]: I0217 17:34:15.283890 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5pj5" event={"ID":"8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc","Type":"ContainerStarted","Data":"232c195ae8e933b3cb505d576137359f0bcb74a0ac20e890097ce0b76ba47fcc"} Feb 17 17:34:17 crc kubenswrapper[4874]: I0217 17:34:17.306691 4874 generic.go:334] "Generic (PLEG): 
container finished" podID="8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc" containerID="232c195ae8e933b3cb505d576137359f0bcb74a0ac20e890097ce0b76ba47fcc" exitCode=0 Feb 17 17:34:17 crc kubenswrapper[4874]: I0217 17:34:17.306761 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5pj5" event={"ID":"8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc","Type":"ContainerDied","Data":"232c195ae8e933b3cb505d576137359f0bcb74a0ac20e890097ce0b76ba47fcc"} Feb 17 17:34:18 crc kubenswrapper[4874]: I0217 17:34:18.317299 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5pj5" event={"ID":"8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc","Type":"ContainerStarted","Data":"92b33b99ec144d4d3fc6bc6e542e8de3a797ce5bde8afc844dff358cc255093b"} Feb 17 17:34:18 crc kubenswrapper[4874]: I0217 17:34:18.339953 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f5pj5" podStartSLOduration=3.9246291429999998 podStartE2EDuration="7.339932135s" podCreationTimestamp="2026-02-17 17:34:11 +0000 UTC" firstStartedPulling="2026-02-17 17:34:14.275695518 +0000 UTC m=+5464.570084079" lastFinishedPulling="2026-02-17 17:34:17.69099851 +0000 UTC m=+5467.985387071" observedRunningTime="2026-02-17 17:34:18.334169732 +0000 UTC m=+5468.628558313" watchObservedRunningTime="2026-02-17 17:34:18.339932135 +0000 UTC m=+5468.634320706" Feb 17 17:34:19 crc kubenswrapper[4874]: I0217 17:34:19.133690 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e199hntd_5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218/util/0.log" Feb 17 17:34:19 crc kubenswrapper[4874]: I0217 17:34:19.286674 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e199hntd_5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218/util/0.log" Feb 17 17:34:19 crc 
kubenswrapper[4874]: I0217 17:34:19.334567 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e199hntd_5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218/pull/0.log" Feb 17 17:34:19 crc kubenswrapper[4874]: I0217 17:34:19.347197 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e199hntd_5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218/pull/0.log" Feb 17 17:34:19 crc kubenswrapper[4874]: E0217 17:34:19.459997 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:34:19 crc kubenswrapper[4874]: I0217 17:34:19.615646 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e199hntd_5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218/util/0.log" Feb 17 17:34:19 crc kubenswrapper[4874]: I0217 17:34:19.616265 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e199hntd_5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218/pull/0.log" Feb 17 17:34:19 crc kubenswrapper[4874]: I0217 17:34:19.745307 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e199hntd_5c2f1cb3-dfb2-4a9a-b7be-3ddfa0095218/extract/0.log" Feb 17 17:34:19 crc kubenswrapper[4874]: I0217 17:34:19.864827 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dfls5_da9d156e-7c39-4ea0-80a3-3046c65ec615/util/0.log" Feb 17 17:34:20 crc 
kubenswrapper[4874]: I0217 17:34:20.006696 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dfls5_da9d156e-7c39-4ea0-80a3-3046c65ec615/util/0.log" Feb 17 17:34:20 crc kubenswrapper[4874]: I0217 17:34:20.027024 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dfls5_da9d156e-7c39-4ea0-80a3-3046c65ec615/pull/0.log" Feb 17 17:34:20 crc kubenswrapper[4874]: I0217 17:34:20.032492 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dfls5_da9d156e-7c39-4ea0-80a3-3046c65ec615/pull/0.log" Feb 17 17:34:20 crc kubenswrapper[4874]: I0217 17:34:20.178069 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dfls5_da9d156e-7c39-4ea0-80a3-3046c65ec615/pull/0.log" Feb 17 17:34:20 crc kubenswrapper[4874]: I0217 17:34:20.210642 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dfls5_da9d156e-7c39-4ea0-80a3-3046c65ec615/extract/0.log" Feb 17 17:34:20 crc kubenswrapper[4874]: I0217 17:34:20.211903 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dfls5_da9d156e-7c39-4ea0-80a3-3046c65ec615/util/0.log" Feb 17 17:34:20 crc kubenswrapper[4874]: I0217 17:34:20.796342 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m52bb_14fe6365-4102-4b73-a3ee-c2722b3317e0/util/0.log" Feb 17 17:34:20 crc kubenswrapper[4874]: I0217 17:34:20.914776 4874 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m52bb_14fe6365-4102-4b73-a3ee-c2722b3317e0/util/0.log" Feb 17 17:34:20 crc kubenswrapper[4874]: I0217 17:34:20.940381 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m52bb_14fe6365-4102-4b73-a3ee-c2722b3317e0/pull/0.log" Feb 17 17:34:20 crc kubenswrapper[4874]: I0217 17:34:20.963701 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m52bb_14fe6365-4102-4b73-a3ee-c2722b3317e0/pull/0.log" Feb 17 17:34:21 crc kubenswrapper[4874]: I0217 17:34:21.139736 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m52bb_14fe6365-4102-4b73-a3ee-c2722b3317e0/util/0.log" Feb 17 17:34:21 crc kubenswrapper[4874]: I0217 17:34:21.158862 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m52bb_14fe6365-4102-4b73-a3ee-c2722b3317e0/extract/0.log" Feb 17 17:34:21 crc kubenswrapper[4874]: I0217 17:34:21.175783 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m52bb_14fe6365-4102-4b73-a3ee-c2722b3317e0/pull/0.log" Feb 17 17:34:21 crc kubenswrapper[4874]: I0217 17:34:21.310408 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-95q59_fdb34b07-ca7a-4ccd-8e89-8ca0a1c51ee1/extract-utilities/0.log" Feb 17 17:34:21 crc kubenswrapper[4874]: I0217 17:34:21.501140 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-95q59_fdb34b07-ca7a-4ccd-8e89-8ca0a1c51ee1/extract-utilities/0.log" Feb 17 17:34:21 crc kubenswrapper[4874]: I0217 
17:34:21.547465 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-95q59_fdb34b07-ca7a-4ccd-8e89-8ca0a1c51ee1/extract-content/0.log" Feb 17 17:34:21 crc kubenswrapper[4874]: I0217 17:34:21.549163 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-95q59_fdb34b07-ca7a-4ccd-8e89-8ca0a1c51ee1/extract-content/0.log" Feb 17 17:34:21 crc kubenswrapper[4874]: I0217 17:34:21.729847 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-95q59_fdb34b07-ca7a-4ccd-8e89-8ca0a1c51ee1/extract-content/0.log" Feb 17 17:34:21 crc kubenswrapper[4874]: I0217 17:34:21.801135 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-95q59_fdb34b07-ca7a-4ccd-8e89-8ca0a1c51ee1/extract-utilities/0.log" Feb 17 17:34:21 crc kubenswrapper[4874]: I0217 17:34:21.989574 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-f5pj5_8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc/extract-utilities/0.log" Feb 17 17:34:22 crc kubenswrapper[4874]: I0217 17:34:22.227719 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-f5pj5_8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc/extract-content/0.log" Feb 17 17:34:22 crc kubenswrapper[4874]: I0217 17:34:22.240059 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-f5pj5_8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc/extract-utilities/0.log" Feb 17 17:34:22 crc kubenswrapper[4874]: I0217 17:34:22.267302 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-f5pj5_8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc/extract-content/0.log" Feb 17 17:34:22 crc kubenswrapper[4874]: I0217 17:34:22.457373 4874 scope.go:117] "RemoveContainer" 
containerID="3d9d0d53ffed45cc67b402a7bca9a663e3eb985cbaeceb3013558f63c7a901f3" Feb 17 17:34:22 crc kubenswrapper[4874]: E0217 17:34:22.457614 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:34:22 crc kubenswrapper[4874]: I0217 17:34:22.569730 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-95q59_fdb34b07-ca7a-4ccd-8e89-8ca0a1c51ee1/registry-server/0.log" Feb 17 17:34:22 crc kubenswrapper[4874]: I0217 17:34:22.627653 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f5pj5" Feb 17 17:34:22 crc kubenswrapper[4874]: I0217 17:34:22.627691 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f5pj5" Feb 17 17:34:23 crc kubenswrapper[4874]: I0217 17:34:23.183427 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f5pj5" Feb 17 17:34:23 crc kubenswrapper[4874]: I0217 17:34:23.184494 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-f5pj5_8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc/extract-content/0.log" Feb 17 17:34:23 crc kubenswrapper[4874]: I0217 17:34:23.234473 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-f5pj5_8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc/registry-server/0.log" Feb 17 17:34:23 crc kubenswrapper[4874]: I0217 17:34:23.242224 4874 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-f5pj5_8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc/extract-utilities/0.log" Feb 17 17:34:23 crc kubenswrapper[4874]: I0217 17:34:23.301780 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7h5mh_053b3c4e-8d22-4a31-ba82-2c00f2bcf76f/extract-utilities/0.log" Feb 17 17:34:23 crc kubenswrapper[4874]: I0217 17:34:23.431798 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f5pj5" Feb 17 17:34:23 crc kubenswrapper[4874]: I0217 17:34:23.482906 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7h5mh_053b3c4e-8d22-4a31-ba82-2c00f2bcf76f/extract-utilities/0.log" Feb 17 17:34:23 crc kubenswrapper[4874]: I0217 17:34:23.496065 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7h5mh_053b3c4e-8d22-4a31-ba82-2c00f2bcf76f/extract-content/0.log" Feb 17 17:34:23 crc kubenswrapper[4874]: I0217 17:34:23.498904 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f5pj5"] Feb 17 17:34:23 crc kubenswrapper[4874]: I0217 17:34:23.502262 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7h5mh_053b3c4e-8d22-4a31-ba82-2c00f2bcf76f/extract-content/0.log" Feb 17 17:34:23 crc kubenswrapper[4874]: I0217 17:34:23.670659 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7h5mh_053b3c4e-8d22-4a31-ba82-2c00f2bcf76f/extract-utilities/0.log" Feb 17 17:34:23 crc kubenswrapper[4874]: I0217 17:34:23.686663 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7h5mh_053b3c4e-8d22-4a31-ba82-2c00f2bcf76f/extract-content/0.log" Feb 17 17:34:23 crc kubenswrapper[4874]: I0217 17:34:23.784225 4874 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989v6pvf_f088c918-cdda-43a2-aae0-3910c4f0e2b3/util/0.log" Feb 17 17:34:23 crc kubenswrapper[4874]: I0217 17:34:23.965933 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989v6pvf_f088c918-cdda-43a2-aae0-3910c4f0e2b3/util/0.log" Feb 17 17:34:24 crc kubenswrapper[4874]: I0217 17:34:24.010118 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989v6pvf_f088c918-cdda-43a2-aae0-3910c4f0e2b3/pull/0.log" Feb 17 17:34:24 crc kubenswrapper[4874]: I0217 17:34:24.027093 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989v6pvf_f088c918-cdda-43a2-aae0-3910c4f0e2b3/pull/0.log" Feb 17 17:34:24 crc kubenswrapper[4874]: I0217 17:34:24.164421 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989v6pvf_f088c918-cdda-43a2-aae0-3910c4f0e2b3/util/0.log" Feb 17 17:34:24 crc kubenswrapper[4874]: I0217 17:34:24.226145 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989v6pvf_f088c918-cdda-43a2-aae0-3910c4f0e2b3/extract/0.log" Feb 17 17:34:24 crc kubenswrapper[4874]: I0217 17:34:24.252544 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989v6pvf_f088c918-cdda-43a2-aae0-3910c4f0e2b3/pull/0.log" Feb 17 17:34:24 crc kubenswrapper[4874]: I0217 17:34:24.375628 4874 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecabtphm_614d03d4-1cdd-46f3-99fa-c6e4ec0bc851/util/0.log" Feb 17 17:34:24 crc kubenswrapper[4874]: I0217 17:34:24.586710 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecabtphm_614d03d4-1cdd-46f3-99fa-c6e4ec0bc851/pull/0.log" Feb 17 17:34:24 crc kubenswrapper[4874]: I0217 17:34:24.598955 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7h5mh_053b3c4e-8d22-4a31-ba82-2c00f2bcf76f/registry-server/0.log" Feb 17 17:34:24 crc kubenswrapper[4874]: I0217 17:34:24.607258 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecabtphm_614d03d4-1cdd-46f3-99fa-c6e4ec0bc851/util/0.log" Feb 17 17:34:24 crc kubenswrapper[4874]: I0217 17:34:24.621595 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecabtphm_614d03d4-1cdd-46f3-99fa-c6e4ec0bc851/pull/0.log" Feb 17 17:34:24 crc kubenswrapper[4874]: I0217 17:34:24.746745 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecabtphm_614d03d4-1cdd-46f3-99fa-c6e4ec0bc851/util/0.log" Feb 17 17:34:24 crc kubenswrapper[4874]: I0217 17:34:24.762387 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecabtphm_614d03d4-1cdd-46f3-99fa-c6e4ec0bc851/pull/0.log" Feb 17 17:34:24 crc kubenswrapper[4874]: I0217 17:34:24.819469 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecabtphm_614d03d4-1cdd-46f3-99fa-c6e4ec0bc851/extract/0.log" Feb 17 17:34:24 crc 
kubenswrapper[4874]: I0217 17:34:24.820795 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-k2hdj_47fbef15-6f0f-42c9-89d2-b68a0bc8eb57/marketplace-operator/0.log" Feb 17 17:34:24 crc kubenswrapper[4874]: I0217 17:34:24.969549 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v9f4j_5d972eec-e9fa-4a61-bfca-998ada5663cd/extract-utilities/0.log" Feb 17 17:34:25 crc kubenswrapper[4874]: I0217 17:34:25.164460 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v9f4j_5d972eec-e9fa-4a61-bfca-998ada5663cd/extract-content/0.log" Feb 17 17:34:25 crc kubenswrapper[4874]: I0217 17:34:25.177533 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v9f4j_5d972eec-e9fa-4a61-bfca-998ada5663cd/extract-utilities/0.log" Feb 17 17:34:25 crc kubenswrapper[4874]: I0217 17:34:25.179861 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v9f4j_5d972eec-e9fa-4a61-bfca-998ada5663cd/extract-content/0.log" Feb 17 17:34:25 crc kubenswrapper[4874]: I0217 17:34:25.358476 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v9f4j_5d972eec-e9fa-4a61-bfca-998ada5663cd/extract-utilities/0.log" Feb 17 17:34:25 crc kubenswrapper[4874]: I0217 17:34:25.393505 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f5pj5" podUID="8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc" containerName="registry-server" containerID="cri-o://92b33b99ec144d4d3fc6bc6e542e8de3a797ce5bde8afc844dff358cc255093b" gracePeriod=2 Feb 17 17:34:25 crc kubenswrapper[4874]: I0217 17:34:25.401671 4874 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-v9f4j_5d972eec-e9fa-4a61-bfca-998ada5663cd/extract-content/0.log" Feb 17 17:34:25 crc kubenswrapper[4874]: I0217 17:34:25.412389 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6j6n7_4018f0d2-92f6-4fb2-9055-09a94ebd95a2/extract-utilities/0.log" Feb 17 17:34:25 crc kubenswrapper[4874]: I0217 17:34:25.520772 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v9f4j_5d972eec-e9fa-4a61-bfca-998ada5663cd/registry-server/0.log" Feb 17 17:34:25 crc kubenswrapper[4874]: I0217 17:34:25.641646 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6j6n7_4018f0d2-92f6-4fb2-9055-09a94ebd95a2/extract-utilities/0.log" Feb 17 17:34:25 crc kubenswrapper[4874]: I0217 17:34:25.706449 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6j6n7_4018f0d2-92f6-4fb2-9055-09a94ebd95a2/extract-content/0.log" Feb 17 17:34:25 crc kubenswrapper[4874]: I0217 17:34:25.706630 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6j6n7_4018f0d2-92f6-4fb2-9055-09a94ebd95a2/extract-content/0.log" Feb 17 17:34:25 crc kubenswrapper[4874]: I0217 17:34:25.853202 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6j6n7_4018f0d2-92f6-4fb2-9055-09a94ebd95a2/extract-utilities/0.log" Feb 17 17:34:25 crc kubenswrapper[4874]: I0217 17:34:25.905112 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6j6n7_4018f0d2-92f6-4fb2-9055-09a94ebd95a2/extract-content/0.log" Feb 17 17:34:26 crc kubenswrapper[4874]: I0217 17:34:26.009958 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f5pj5" Feb 17 17:34:26 crc kubenswrapper[4874]: I0217 17:34:26.138236 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc-catalog-content\") pod \"8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc\" (UID: \"8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc\") " Feb 17 17:34:26 crc kubenswrapper[4874]: I0217 17:34:26.138316 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc-utilities\") pod \"8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc\" (UID: \"8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc\") " Feb 17 17:34:26 crc kubenswrapper[4874]: I0217 17:34:26.138389 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lgzs\" (UniqueName: \"kubernetes.io/projected/8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc-kube-api-access-8lgzs\") pod \"8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc\" (UID: \"8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc\") " Feb 17 17:34:26 crc kubenswrapper[4874]: I0217 17:34:26.144340 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc-utilities" (OuterVolumeSpecName: "utilities") pod "8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc" (UID: "8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:34:26 crc kubenswrapper[4874]: I0217 17:34:26.157361 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc-kube-api-access-8lgzs" (OuterVolumeSpecName: "kube-api-access-8lgzs") pod "8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc" (UID: "8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc"). InnerVolumeSpecName "kube-api-access-8lgzs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:34:26 crc kubenswrapper[4874]: I0217 17:34:26.234306 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc" (UID: "8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:34:26 crc kubenswrapper[4874]: I0217 17:34:26.241198 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:34:26 crc kubenswrapper[4874]: I0217 17:34:26.241244 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lgzs\" (UniqueName: \"kubernetes.io/projected/8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc-kube-api-access-8lgzs\") on node \"crc\" DevicePath \"\"" Feb 17 17:34:26 crc kubenswrapper[4874]: I0217 17:34:26.241258 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:34:26 crc kubenswrapper[4874]: I0217 17:34:26.404999 4874 generic.go:334] "Generic (PLEG): container finished" podID="8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc" containerID="92b33b99ec144d4d3fc6bc6e542e8de3a797ce5bde8afc844dff358cc255093b" exitCode=0 Feb 17 17:34:26 crc kubenswrapper[4874]: I0217 17:34:26.405037 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5pj5" event={"ID":"8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc","Type":"ContainerDied","Data":"92b33b99ec144d4d3fc6bc6e542e8de3a797ce5bde8afc844dff358cc255093b"} Feb 17 17:34:26 crc kubenswrapper[4874]: I0217 17:34:26.405062 4874 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-f5pj5" event={"ID":"8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc","Type":"ContainerDied","Data":"dc55a15df2aef502f53c2d924bb1b9755458ce17b7bf9bb3680c58bfc89a4b35"} Feb 17 17:34:26 crc kubenswrapper[4874]: I0217 17:34:26.405092 4874 scope.go:117] "RemoveContainer" containerID="92b33b99ec144d4d3fc6bc6e542e8de3a797ce5bde8afc844dff358cc255093b" Feb 17 17:34:26 crc kubenswrapper[4874]: I0217 17:34:26.405125 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f5pj5" Feb 17 17:34:26 crc kubenswrapper[4874]: I0217 17:34:26.439042 4874 scope.go:117] "RemoveContainer" containerID="232c195ae8e933b3cb505d576137359f0bcb74a0ac20e890097ce0b76ba47fcc" Feb 17 17:34:26 crc kubenswrapper[4874]: I0217 17:34:26.445335 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f5pj5"] Feb 17 17:34:26 crc kubenswrapper[4874]: I0217 17:34:26.460141 4874 scope.go:117] "RemoveContainer" containerID="f453fc770752fc551b468d45152415c70b6ec1a6182390413eb8c0ce10ac2dbe" Feb 17 17:34:26 crc kubenswrapper[4874]: I0217 17:34:26.483477 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f5pj5"] Feb 17 17:34:26 crc kubenswrapper[4874]: I0217 17:34:26.517460 4874 scope.go:117] "RemoveContainer" containerID="92b33b99ec144d4d3fc6bc6e542e8de3a797ce5bde8afc844dff358cc255093b" Feb 17 17:34:26 crc kubenswrapper[4874]: E0217 17:34:26.519691 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92b33b99ec144d4d3fc6bc6e542e8de3a797ce5bde8afc844dff358cc255093b\": container with ID starting with 92b33b99ec144d4d3fc6bc6e542e8de3a797ce5bde8afc844dff358cc255093b not found: ID does not exist" containerID="92b33b99ec144d4d3fc6bc6e542e8de3a797ce5bde8afc844dff358cc255093b" Feb 17 17:34:26 crc kubenswrapper[4874]: I0217 
17:34:26.519767 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92b33b99ec144d4d3fc6bc6e542e8de3a797ce5bde8afc844dff358cc255093b"} err="failed to get container status \"92b33b99ec144d4d3fc6bc6e542e8de3a797ce5bde8afc844dff358cc255093b\": rpc error: code = NotFound desc = could not find container \"92b33b99ec144d4d3fc6bc6e542e8de3a797ce5bde8afc844dff358cc255093b\": container with ID starting with 92b33b99ec144d4d3fc6bc6e542e8de3a797ce5bde8afc844dff358cc255093b not found: ID does not exist" Feb 17 17:34:26 crc kubenswrapper[4874]: I0217 17:34:26.519805 4874 scope.go:117] "RemoveContainer" containerID="232c195ae8e933b3cb505d576137359f0bcb74a0ac20e890097ce0b76ba47fcc" Feb 17 17:34:26 crc kubenswrapper[4874]: E0217 17:34:26.520197 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"232c195ae8e933b3cb505d576137359f0bcb74a0ac20e890097ce0b76ba47fcc\": container with ID starting with 232c195ae8e933b3cb505d576137359f0bcb74a0ac20e890097ce0b76ba47fcc not found: ID does not exist" containerID="232c195ae8e933b3cb505d576137359f0bcb74a0ac20e890097ce0b76ba47fcc" Feb 17 17:34:26 crc kubenswrapper[4874]: I0217 17:34:26.520246 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"232c195ae8e933b3cb505d576137359f0bcb74a0ac20e890097ce0b76ba47fcc"} err="failed to get container status \"232c195ae8e933b3cb505d576137359f0bcb74a0ac20e890097ce0b76ba47fcc\": rpc error: code = NotFound desc = could not find container \"232c195ae8e933b3cb505d576137359f0bcb74a0ac20e890097ce0b76ba47fcc\": container with ID starting with 232c195ae8e933b3cb505d576137359f0bcb74a0ac20e890097ce0b76ba47fcc not found: ID does not exist" Feb 17 17:34:26 crc kubenswrapper[4874]: I0217 17:34:26.520260 4874 scope.go:117] "RemoveContainer" containerID="f453fc770752fc551b468d45152415c70b6ec1a6182390413eb8c0ce10ac2dbe" Feb 17 17:34:26 crc 
kubenswrapper[4874]: E0217 17:34:26.520669 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f453fc770752fc551b468d45152415c70b6ec1a6182390413eb8c0ce10ac2dbe\": container with ID starting with f453fc770752fc551b468d45152415c70b6ec1a6182390413eb8c0ce10ac2dbe not found: ID does not exist" containerID="f453fc770752fc551b468d45152415c70b6ec1a6182390413eb8c0ce10ac2dbe" Feb 17 17:34:26 crc kubenswrapper[4874]: I0217 17:34:26.520692 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f453fc770752fc551b468d45152415c70b6ec1a6182390413eb8c0ce10ac2dbe"} err="failed to get container status \"f453fc770752fc551b468d45152415c70b6ec1a6182390413eb8c0ce10ac2dbe\": rpc error: code = NotFound desc = could not find container \"f453fc770752fc551b468d45152415c70b6ec1a6182390413eb8c0ce10ac2dbe\": container with ID starting with f453fc770752fc551b468d45152415c70b6ec1a6182390413eb8c0ce10ac2dbe not found: ID does not exist" Feb 17 17:34:26 crc kubenswrapper[4874]: I0217 17:34:26.561358 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6j6n7_4018f0d2-92f6-4fb2-9055-09a94ebd95a2/registry-server/0.log" Feb 17 17:34:28 crc kubenswrapper[4874]: I0217 17:34:28.471400 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc" path="/var/lib/kubelet/pods/8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc/volumes" Feb 17 17:34:29 crc kubenswrapper[4874]: E0217 17:34:29.459797 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:34:31 crc kubenswrapper[4874]: E0217 17:34:31.461446 4874 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:34:34 crc kubenswrapper[4874]: I0217 17:34:34.458266 4874 scope.go:117] "RemoveContainer" containerID="3d9d0d53ffed45cc67b402a7bca9a663e3eb985cbaeceb3013558f63c7a901f3" Feb 17 17:34:34 crc kubenswrapper[4874]: E0217 17:34:34.459040 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:34:40 crc kubenswrapper[4874]: I0217 17:34:40.141465 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-fjkwc_cf7f0be2-b792-4603-a97c-53a2f335acee/prometheus-operator/0.log" Feb 17 17:34:40 crc kubenswrapper[4874]: I0217 17:34:40.176545 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-658d76db8d-jld5z_7a893bee-81e5-480e-8414-43a823e768fd/prometheus-operator-admission-webhook/0.log" Feb 17 17:34:40 crc kubenswrapper[4874]: I0217 17:34:40.188625 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-658d76db8d-nnvzg_5178b00a-11f3-48c6-96be-459a7b26be82/prometheus-operator-admission-webhook/0.log" Feb 17 17:34:40 crc kubenswrapper[4874]: I0217 17:34:40.355966 4874 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-2b8tl_660c5439-82eb-4696-9df3-7968e680b5a9/operator/0.log" Feb 17 17:34:40 crc kubenswrapper[4874]: I0217 17:34:40.403550 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-mpn47_4771c857-23aa-4647-a63d-d7a1977ffaa4/observability-ui-dashboards/0.log" Feb 17 17:34:40 crc kubenswrapper[4874]: I0217 17:34:40.409515 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-b988z_a3d284b8-a322-4ce7-9a33-c82f3adafeb1/perses-operator/0.log" Feb 17 17:34:42 crc kubenswrapper[4874]: E0217 17:34:42.460147 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:34:43 crc kubenswrapper[4874]: E0217 17:34:43.459746 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:34:49 crc kubenswrapper[4874]: I0217 17:34:49.458037 4874 scope.go:117] "RemoveContainer" containerID="3d9d0d53ffed45cc67b402a7bca9a663e3eb985cbaeceb3013558f63c7a901f3" Feb 17 17:34:49 crc kubenswrapper[4874]: E0217 17:34:49.460574 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:34:54 crc kubenswrapper[4874]: E0217 17:34:54.458656 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:34:55 crc kubenswrapper[4874]: I0217 17:34:55.788368 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-745c8c7958-q4zx9_e55e7660-9281-484b-b0b8-a39236b8e692/kube-rbac-proxy/0.log" Feb 17 17:34:55 crc kubenswrapper[4874]: I0217 17:34:55.933258 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-745c8c7958-q4zx9_e55e7660-9281-484b-b0b8-a39236b8e692/manager/0.log" Feb 17 17:34:57 crc kubenswrapper[4874]: E0217 17:34:57.460992 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:35:00 crc kubenswrapper[4874]: I0217 17:35:00.053191 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q6pmc"] Feb 17 17:35:00 crc kubenswrapper[4874]: E0217 17:35:00.053878 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc" containerName="extract-content" Feb 17 17:35:00 crc kubenswrapper[4874]: I0217 17:35:00.053892 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc" 
containerName="extract-content" Feb 17 17:35:00 crc kubenswrapper[4874]: E0217 17:35:00.053905 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc" containerName="extract-utilities" Feb 17 17:35:00 crc kubenswrapper[4874]: I0217 17:35:00.053911 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc" containerName="extract-utilities" Feb 17 17:35:00 crc kubenswrapper[4874]: E0217 17:35:00.053967 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc" containerName="registry-server" Feb 17 17:35:00 crc kubenswrapper[4874]: I0217 17:35:00.053973 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc" containerName="registry-server" Feb 17 17:35:00 crc kubenswrapper[4874]: I0217 17:35:00.054185 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b8ed584-79ce-4fa1-b3ad-fc9e1ae0e5fc" containerName="registry-server" Feb 17 17:35:00 crc kubenswrapper[4874]: I0217 17:35:00.056231 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q6pmc" Feb 17 17:35:00 crc kubenswrapper[4874]: I0217 17:35:00.074703 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q6pmc"] Feb 17 17:35:00 crc kubenswrapper[4874]: I0217 17:35:00.091139 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz4nw\" (UniqueName: \"kubernetes.io/projected/3574bd58-50c8-45d4-b420-a1a0340d6e85-kube-api-access-zz4nw\") pod \"community-operators-q6pmc\" (UID: \"3574bd58-50c8-45d4-b420-a1a0340d6e85\") " pod="openshift-marketplace/community-operators-q6pmc" Feb 17 17:35:00 crc kubenswrapper[4874]: I0217 17:35:00.091226 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3574bd58-50c8-45d4-b420-a1a0340d6e85-utilities\") pod \"community-operators-q6pmc\" (UID: \"3574bd58-50c8-45d4-b420-a1a0340d6e85\") " pod="openshift-marketplace/community-operators-q6pmc" Feb 17 17:35:00 crc kubenswrapper[4874]: I0217 17:35:00.091296 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3574bd58-50c8-45d4-b420-a1a0340d6e85-catalog-content\") pod \"community-operators-q6pmc\" (UID: \"3574bd58-50c8-45d4-b420-a1a0340d6e85\") " pod="openshift-marketplace/community-operators-q6pmc" Feb 17 17:35:00 crc kubenswrapper[4874]: I0217 17:35:00.193291 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz4nw\" (UniqueName: \"kubernetes.io/projected/3574bd58-50c8-45d4-b420-a1a0340d6e85-kube-api-access-zz4nw\") pod \"community-operators-q6pmc\" (UID: \"3574bd58-50c8-45d4-b420-a1a0340d6e85\") " pod="openshift-marketplace/community-operators-q6pmc" Feb 17 17:35:00 crc kubenswrapper[4874]: I0217 17:35:00.194286 4874 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3574bd58-50c8-45d4-b420-a1a0340d6e85-utilities\") pod \"community-operators-q6pmc\" (UID: \"3574bd58-50c8-45d4-b420-a1a0340d6e85\") " pod="openshift-marketplace/community-operators-q6pmc" Feb 17 17:35:00 crc kubenswrapper[4874]: I0217 17:35:00.194753 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3574bd58-50c8-45d4-b420-a1a0340d6e85-catalog-content\") pod \"community-operators-q6pmc\" (UID: \"3574bd58-50c8-45d4-b420-a1a0340d6e85\") " pod="openshift-marketplace/community-operators-q6pmc" Feb 17 17:35:00 crc kubenswrapper[4874]: I0217 17:35:00.194835 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3574bd58-50c8-45d4-b420-a1a0340d6e85-utilities\") pod \"community-operators-q6pmc\" (UID: \"3574bd58-50c8-45d4-b420-a1a0340d6e85\") " pod="openshift-marketplace/community-operators-q6pmc" Feb 17 17:35:00 crc kubenswrapper[4874]: I0217 17:35:00.195124 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3574bd58-50c8-45d4-b420-a1a0340d6e85-catalog-content\") pod \"community-operators-q6pmc\" (UID: \"3574bd58-50c8-45d4-b420-a1a0340d6e85\") " pod="openshift-marketplace/community-operators-q6pmc" Feb 17 17:35:00 crc kubenswrapper[4874]: I0217 17:35:00.212977 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz4nw\" (UniqueName: \"kubernetes.io/projected/3574bd58-50c8-45d4-b420-a1a0340d6e85-kube-api-access-zz4nw\") pod \"community-operators-q6pmc\" (UID: \"3574bd58-50c8-45d4-b420-a1a0340d6e85\") " pod="openshift-marketplace/community-operators-q6pmc" Feb 17 17:35:00 crc kubenswrapper[4874]: I0217 17:35:00.387179 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q6pmc" Feb 17 17:35:01 crc kubenswrapper[4874]: I0217 17:35:01.050604 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q6pmc"] Feb 17 17:35:01 crc kubenswrapper[4874]: I0217 17:35:01.831949 4874 generic.go:334] "Generic (PLEG): container finished" podID="3574bd58-50c8-45d4-b420-a1a0340d6e85" containerID="5a191059434325a4ea4b7e0c70d2fe70b2eaf5b0e999ccb65cd5e00ac56b79cb" exitCode=0 Feb 17 17:35:01 crc kubenswrapper[4874]: I0217 17:35:01.832241 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6pmc" event={"ID":"3574bd58-50c8-45d4-b420-a1a0340d6e85","Type":"ContainerDied","Data":"5a191059434325a4ea4b7e0c70d2fe70b2eaf5b0e999ccb65cd5e00ac56b79cb"} Feb 17 17:35:01 crc kubenswrapper[4874]: I0217 17:35:01.832270 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6pmc" event={"ID":"3574bd58-50c8-45d4-b420-a1a0340d6e85","Type":"ContainerStarted","Data":"131d4eef3fc817bf47b5d44afc1c5850dbefa51dfd088e241baf47befc8411cf"} Feb 17 17:35:03 crc kubenswrapper[4874]: I0217 17:35:03.457624 4874 scope.go:117] "RemoveContainer" containerID="3d9d0d53ffed45cc67b402a7bca9a663e3eb985cbaeceb3013558f63c7a901f3" Feb 17 17:35:03 crc kubenswrapper[4874]: E0217 17:35:03.458321 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:35:04 crc kubenswrapper[4874]: I0217 17:35:04.878900 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6pmc" 
event={"ID":"3574bd58-50c8-45d4-b420-a1a0340d6e85","Type":"ContainerStarted","Data":"5ac93a1b114952802060f34f867be045583693cb9d8d6a61222dc3aad584a4cd"} Feb 17 17:35:06 crc kubenswrapper[4874]: I0217 17:35:06.911234 4874 generic.go:334] "Generic (PLEG): container finished" podID="3574bd58-50c8-45d4-b420-a1a0340d6e85" containerID="5ac93a1b114952802060f34f867be045583693cb9d8d6a61222dc3aad584a4cd" exitCode=0 Feb 17 17:35:06 crc kubenswrapper[4874]: I0217 17:35:06.911311 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6pmc" event={"ID":"3574bd58-50c8-45d4-b420-a1a0340d6e85","Type":"ContainerDied","Data":"5ac93a1b114952802060f34f867be045583693cb9d8d6a61222dc3aad584a4cd"} Feb 17 17:35:06 crc kubenswrapper[4874]: I0217 17:35:06.915598 4874 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 17:35:07 crc kubenswrapper[4874]: E0217 17:35:07.459846 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:35:07 crc kubenswrapper[4874]: I0217 17:35:07.950042 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6pmc" event={"ID":"3574bd58-50c8-45d4-b420-a1a0340d6e85","Type":"ContainerStarted","Data":"a44a5913f405ae6fbceff78da876d53416a2cc42bfcd34521411ab52181635cd"} Feb 17 17:35:07 crc kubenswrapper[4874]: I0217 17:35:07.984599 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q6pmc" podStartSLOduration=2.524341276 podStartE2EDuration="7.984580738s" podCreationTimestamp="2026-02-17 17:35:00 +0000 UTC" firstStartedPulling="2026-02-17 17:35:01.835152705 +0000 UTC 
m=+5512.129541266" lastFinishedPulling="2026-02-17 17:35:07.295392167 +0000 UTC m=+5517.589780728" observedRunningTime="2026-02-17 17:35:07.975887343 +0000 UTC m=+5518.270275904" watchObservedRunningTime="2026-02-17 17:35:07.984580738 +0000 UTC m=+5518.278969299" Feb 17 17:35:10 crc kubenswrapper[4874]: I0217 17:35:10.387915 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q6pmc" Feb 17 17:35:10 crc kubenswrapper[4874]: I0217 17:35:10.388290 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q6pmc" Feb 17 17:35:11 crc kubenswrapper[4874]: I0217 17:35:11.492717 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-q6pmc" podUID="3574bd58-50c8-45d4-b420-a1a0340d6e85" containerName="registry-server" probeResult="failure" output=< Feb 17 17:35:11 crc kubenswrapper[4874]: timeout: failed to connect service ":50051" within 1s Feb 17 17:35:11 crc kubenswrapper[4874]: > Feb 17 17:35:11 crc kubenswrapper[4874]: E0217 17:35:11.581981 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:35:11 crc kubenswrapper[4874]: E0217 17:35:11.582348 4874 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:35:11 crc kubenswrapper[4874]: E0217 17:35:11.582515 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n699h646hf7h59dhdch5d8h679h9dhdch5c7hd5h5bch655h5bfh674h596h5d6h64dh65bh694h67fh66dh5bdhd7h568h697h58bh5b4h59fh694h584h656q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubP
ath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6zkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(cc29c300-b515-47d8-9326-1839ed7772b4): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 17 17:35:11 crc kubenswrapper[4874]: E0217 17:35:11.587173 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:35:17 crc kubenswrapper[4874]: I0217 17:35:17.458458 4874 scope.go:117] "RemoveContainer" containerID="3d9d0d53ffed45cc67b402a7bca9a663e3eb985cbaeceb3013558f63c7a901f3" Feb 17 17:35:17 crc kubenswrapper[4874]: E0217 17:35:17.459265 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:35:20 crc kubenswrapper[4874]: E0217 17:35:20.470748 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:35:20 crc kubenswrapper[4874]: I0217 17:35:20.478996 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q6pmc" Feb 17 17:35:20 crc kubenswrapper[4874]: I0217 17:35:20.538839 4874 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q6pmc" Feb 17 17:35:20 crc kubenswrapper[4874]: I0217 17:35:20.719704 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q6pmc"] Feb 17 17:35:22 crc kubenswrapper[4874]: I0217 17:35:22.102337 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q6pmc" podUID="3574bd58-50c8-45d4-b420-a1a0340d6e85" containerName="registry-server" containerID="cri-o://a44a5913f405ae6fbceff78da876d53416a2cc42bfcd34521411ab52181635cd" gracePeriod=2 Feb 17 17:35:22 crc kubenswrapper[4874]: E0217 17:35:22.462986 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:35:22 crc kubenswrapper[4874]: I0217 17:35:22.746136 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q6pmc" Feb 17 17:35:22 crc kubenswrapper[4874]: I0217 17:35:22.828563 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3574bd58-50c8-45d4-b420-a1a0340d6e85-catalog-content\") pod \"3574bd58-50c8-45d4-b420-a1a0340d6e85\" (UID: \"3574bd58-50c8-45d4-b420-a1a0340d6e85\") " Feb 17 17:35:22 crc kubenswrapper[4874]: I0217 17:35:22.828761 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zz4nw\" (UniqueName: \"kubernetes.io/projected/3574bd58-50c8-45d4-b420-a1a0340d6e85-kube-api-access-zz4nw\") pod \"3574bd58-50c8-45d4-b420-a1a0340d6e85\" (UID: \"3574bd58-50c8-45d4-b420-a1a0340d6e85\") " Feb 17 17:35:22 crc kubenswrapper[4874]: I0217 17:35:22.829090 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3574bd58-50c8-45d4-b420-a1a0340d6e85-utilities\") pod \"3574bd58-50c8-45d4-b420-a1a0340d6e85\" (UID: \"3574bd58-50c8-45d4-b420-a1a0340d6e85\") " Feb 17 17:35:22 crc kubenswrapper[4874]: I0217 17:35:22.831471 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3574bd58-50c8-45d4-b420-a1a0340d6e85-utilities" (OuterVolumeSpecName: "utilities") pod "3574bd58-50c8-45d4-b420-a1a0340d6e85" (UID: "3574bd58-50c8-45d4-b420-a1a0340d6e85"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:35:22 crc kubenswrapper[4874]: I0217 17:35:22.843650 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3574bd58-50c8-45d4-b420-a1a0340d6e85-kube-api-access-zz4nw" (OuterVolumeSpecName: "kube-api-access-zz4nw") pod "3574bd58-50c8-45d4-b420-a1a0340d6e85" (UID: "3574bd58-50c8-45d4-b420-a1a0340d6e85"). InnerVolumeSpecName "kube-api-access-zz4nw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:35:22 crc kubenswrapper[4874]: I0217 17:35:22.902471 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3574bd58-50c8-45d4-b420-a1a0340d6e85-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3574bd58-50c8-45d4-b420-a1a0340d6e85" (UID: "3574bd58-50c8-45d4-b420-a1a0340d6e85"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:35:22 crc kubenswrapper[4874]: I0217 17:35:22.931548 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3574bd58-50c8-45d4-b420-a1a0340d6e85-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:35:22 crc kubenswrapper[4874]: I0217 17:35:22.931574 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3574bd58-50c8-45d4-b420-a1a0340d6e85-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:35:22 crc kubenswrapper[4874]: I0217 17:35:22.931585 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zz4nw\" (UniqueName: \"kubernetes.io/projected/3574bd58-50c8-45d4-b420-a1a0340d6e85-kube-api-access-zz4nw\") on node \"crc\" DevicePath \"\"" Feb 17 17:35:23 crc kubenswrapper[4874]: I0217 17:35:23.115697 4874 generic.go:334] "Generic (PLEG): container finished" podID="3574bd58-50c8-45d4-b420-a1a0340d6e85" containerID="a44a5913f405ae6fbceff78da876d53416a2cc42bfcd34521411ab52181635cd" exitCode=0 Feb 17 17:35:23 crc kubenswrapper[4874]: I0217 17:35:23.115739 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6pmc" event={"ID":"3574bd58-50c8-45d4-b420-a1a0340d6e85","Type":"ContainerDied","Data":"a44a5913f405ae6fbceff78da876d53416a2cc42bfcd34521411ab52181635cd"} Feb 17 17:35:23 crc kubenswrapper[4874]: I0217 17:35:23.115744 4874 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-q6pmc" Feb 17 17:35:23 crc kubenswrapper[4874]: I0217 17:35:23.115764 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6pmc" event={"ID":"3574bd58-50c8-45d4-b420-a1a0340d6e85","Type":"ContainerDied","Data":"131d4eef3fc817bf47b5d44afc1c5850dbefa51dfd088e241baf47befc8411cf"} Feb 17 17:35:23 crc kubenswrapper[4874]: I0217 17:35:23.115782 4874 scope.go:117] "RemoveContainer" containerID="a44a5913f405ae6fbceff78da876d53416a2cc42bfcd34521411ab52181635cd" Feb 17 17:35:23 crc kubenswrapper[4874]: I0217 17:35:23.136395 4874 scope.go:117] "RemoveContainer" containerID="5ac93a1b114952802060f34f867be045583693cb9d8d6a61222dc3aad584a4cd" Feb 17 17:35:23 crc kubenswrapper[4874]: I0217 17:35:23.156944 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q6pmc"] Feb 17 17:35:23 crc kubenswrapper[4874]: I0217 17:35:23.176068 4874 scope.go:117] "RemoveContainer" containerID="5a191059434325a4ea4b7e0c70d2fe70b2eaf5b0e999ccb65cd5e00ac56b79cb" Feb 17 17:35:23 crc kubenswrapper[4874]: I0217 17:35:23.180904 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q6pmc"] Feb 17 17:35:23 crc kubenswrapper[4874]: I0217 17:35:23.259549 4874 scope.go:117] "RemoveContainer" containerID="a44a5913f405ae6fbceff78da876d53416a2cc42bfcd34521411ab52181635cd" Feb 17 17:35:23 crc kubenswrapper[4874]: E0217 17:35:23.263570 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a44a5913f405ae6fbceff78da876d53416a2cc42bfcd34521411ab52181635cd\": container with ID starting with a44a5913f405ae6fbceff78da876d53416a2cc42bfcd34521411ab52181635cd not found: ID does not exist" containerID="a44a5913f405ae6fbceff78da876d53416a2cc42bfcd34521411ab52181635cd" Feb 17 17:35:23 crc kubenswrapper[4874]: I0217 17:35:23.263623 
4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a44a5913f405ae6fbceff78da876d53416a2cc42bfcd34521411ab52181635cd"} err="failed to get container status \"a44a5913f405ae6fbceff78da876d53416a2cc42bfcd34521411ab52181635cd\": rpc error: code = NotFound desc = could not find container \"a44a5913f405ae6fbceff78da876d53416a2cc42bfcd34521411ab52181635cd\": container with ID starting with a44a5913f405ae6fbceff78da876d53416a2cc42bfcd34521411ab52181635cd not found: ID does not exist" Feb 17 17:35:23 crc kubenswrapper[4874]: I0217 17:35:23.263654 4874 scope.go:117] "RemoveContainer" containerID="5ac93a1b114952802060f34f867be045583693cb9d8d6a61222dc3aad584a4cd" Feb 17 17:35:23 crc kubenswrapper[4874]: E0217 17:35:23.270268 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ac93a1b114952802060f34f867be045583693cb9d8d6a61222dc3aad584a4cd\": container with ID starting with 5ac93a1b114952802060f34f867be045583693cb9d8d6a61222dc3aad584a4cd not found: ID does not exist" containerID="5ac93a1b114952802060f34f867be045583693cb9d8d6a61222dc3aad584a4cd" Feb 17 17:35:23 crc kubenswrapper[4874]: I0217 17:35:23.270318 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ac93a1b114952802060f34f867be045583693cb9d8d6a61222dc3aad584a4cd"} err="failed to get container status \"5ac93a1b114952802060f34f867be045583693cb9d8d6a61222dc3aad584a4cd\": rpc error: code = NotFound desc = could not find container \"5ac93a1b114952802060f34f867be045583693cb9d8d6a61222dc3aad584a4cd\": container with ID starting with 5ac93a1b114952802060f34f867be045583693cb9d8d6a61222dc3aad584a4cd not found: ID does not exist" Feb 17 17:35:23 crc kubenswrapper[4874]: I0217 17:35:23.270347 4874 scope.go:117] "RemoveContainer" containerID="5a191059434325a4ea4b7e0c70d2fe70b2eaf5b0e999ccb65cd5e00ac56b79cb" Feb 17 17:35:23 crc kubenswrapper[4874]: E0217 
17:35:23.271711 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a191059434325a4ea4b7e0c70d2fe70b2eaf5b0e999ccb65cd5e00ac56b79cb\": container with ID starting with 5a191059434325a4ea4b7e0c70d2fe70b2eaf5b0e999ccb65cd5e00ac56b79cb not found: ID does not exist" containerID="5a191059434325a4ea4b7e0c70d2fe70b2eaf5b0e999ccb65cd5e00ac56b79cb" Feb 17 17:35:23 crc kubenswrapper[4874]: I0217 17:35:23.271751 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a191059434325a4ea4b7e0c70d2fe70b2eaf5b0e999ccb65cd5e00ac56b79cb"} err="failed to get container status \"5a191059434325a4ea4b7e0c70d2fe70b2eaf5b0e999ccb65cd5e00ac56b79cb\": rpc error: code = NotFound desc = could not find container \"5a191059434325a4ea4b7e0c70d2fe70b2eaf5b0e999ccb65cd5e00ac56b79cb\": container with ID starting with 5a191059434325a4ea4b7e0c70d2fe70b2eaf5b0e999ccb65cd5e00ac56b79cb not found: ID does not exist" Feb 17 17:35:24 crc kubenswrapper[4874]: I0217 17:35:24.476879 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3574bd58-50c8-45d4-b420-a1a0340d6e85" path="/var/lib/kubelet/pods/3574bd58-50c8-45d4-b420-a1a0340d6e85/volumes" Feb 17 17:35:32 crc kubenswrapper[4874]: I0217 17:35:32.457955 4874 scope.go:117] "RemoveContainer" containerID="3d9d0d53ffed45cc67b402a7bca9a663e3eb985cbaeceb3013558f63c7a901f3" Feb 17 17:35:32 crc kubenswrapper[4874]: E0217 17:35:32.459310 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:35:34 crc kubenswrapper[4874]: E0217 17:35:34.610196 
4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 17:35:34 crc kubenswrapper[4874]: E0217 17:35:34.610723 4874 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 17:35:34 crc kubenswrapper[4874]: E0217 17:35:34.610832 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtgnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-ddhb8_openstack(122736d5-78f5-42dc-b6ab-343724bac19d): ErrImagePull: initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:35:34 crc kubenswrapper[4874]: E0217 17:35:34.612129 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:35:37 crc kubenswrapper[4874]: E0217 17:35:37.461746 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:35:44 crc kubenswrapper[4874]: I0217 17:35:44.457428 4874 scope.go:117] "RemoveContainer" containerID="3d9d0d53ffed45cc67b402a7bca9a663e3eb985cbaeceb3013558f63c7a901f3" Feb 17 17:35:44 crc kubenswrapper[4874]: E0217 17:35:44.458445 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:35:47 crc 
kubenswrapper[4874]: E0217 17:35:47.460158 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:35:48 crc kubenswrapper[4874]: E0217 17:35:48.459686 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:35:55 crc kubenswrapper[4874]: I0217 17:35:55.458045 4874 scope.go:117] "RemoveContainer" containerID="3d9d0d53ffed45cc67b402a7bca9a663e3eb985cbaeceb3013558f63c7a901f3" Feb 17 17:35:55 crc kubenswrapper[4874]: E0217 17:35:55.461368 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cccdg_openshift-machine-config-operator(75d87243-c32f-4eb1-9049-24409fc6ea39)\"" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" Feb 17 17:36:01 crc kubenswrapper[4874]: E0217 17:36:01.459801 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:36:02 crc kubenswrapper[4874]: E0217 17:36:02.460193 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:36:07 crc kubenswrapper[4874]: I0217 17:36:07.457204 4874 scope.go:117] "RemoveContainer" containerID="3d9d0d53ffed45cc67b402a7bca9a663e3eb985cbaeceb3013558f63c7a901f3" Feb 17 17:36:08 crc kubenswrapper[4874]: I0217 17:36:08.653199 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerStarted","Data":"2d3ed24c26aeb10b7df1ae1cc45d2e6b6ed8fdd81200374be7c87aa62624e4c6"} Feb 17 17:36:13 crc kubenswrapper[4874]: E0217 17:36:13.462393 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:36:16 crc kubenswrapper[4874]: E0217 17:36:16.467861 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:36:24 crc kubenswrapper[4874]: E0217 17:36:24.464350 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:36:28 crc 
kubenswrapper[4874]: E0217 17:36:28.462290 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:36:39 crc kubenswrapper[4874]: E0217 17:36:39.461404 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:36:42 crc kubenswrapper[4874]: E0217 17:36:42.461212 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:36:51 crc kubenswrapper[4874]: E0217 17:36:51.459781 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:36:53 crc kubenswrapper[4874]: I0217 17:36:53.203783 4874 generic.go:334] "Generic (PLEG): container finished" podID="1a6b8617-c698-42b3-9ba1-329f44aab8aa" containerID="98e1677a11fed94ec851d62d6afd1c6a1d8036a400b262f62abe3bdbc663c619" exitCode=0 Feb 17 17:36:53 crc kubenswrapper[4874]: I0217 17:36:53.204068 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-must-gather-tr7wf/must-gather-9kwsh" event={"ID":"1a6b8617-c698-42b3-9ba1-329f44aab8aa","Type":"ContainerDied","Data":"98e1677a11fed94ec851d62d6afd1c6a1d8036a400b262f62abe3bdbc663c619"} Feb 17 17:36:53 crc kubenswrapper[4874]: I0217 17:36:53.204805 4874 scope.go:117] "RemoveContainer" containerID="98e1677a11fed94ec851d62d6afd1c6a1d8036a400b262f62abe3bdbc663c619" Feb 17 17:36:53 crc kubenswrapper[4874]: I0217 17:36:53.385370 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tr7wf_must-gather-9kwsh_1a6b8617-c698-42b3-9ba1-329f44aab8aa/gather/0.log" Feb 17 17:36:53 crc kubenswrapper[4874]: E0217 17:36:53.459179 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:37:00 crc kubenswrapper[4874]: I0217 17:37:00.846316 4874 trace.go:236] Trace[405890871]: "Calculate volume metrics of prometheus-metric-storage-db for pod openstack/prometheus-metric-storage-0" (17-Feb-2026 17:36:59.248) (total time: 1597ms): Feb 17 17:37:00 crc kubenswrapper[4874]: Trace[405890871]: [1.597680335s] [1.597680335s] END Feb 17 17:37:01 crc kubenswrapper[4874]: I0217 17:37:01.460423 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tr7wf/must-gather-9kwsh"] Feb 17 17:37:01 crc kubenswrapper[4874]: I0217 17:37:01.461022 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-tr7wf/must-gather-9kwsh" podUID="1a6b8617-c698-42b3-9ba1-329f44aab8aa" containerName="copy" containerID="cri-o://3e1bdc5edd484ef9598bf0b3597bd0f09ad8d37e83c70e7eaaefe02f55717d02" gracePeriod=2 Feb 17 17:37:01 crc kubenswrapper[4874]: I0217 17:37:01.478875 4874 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-must-gather-tr7wf/must-gather-9kwsh"] Feb 17 17:37:01 crc kubenswrapper[4874]: I0217 17:37:01.948924 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tr7wf_must-gather-9kwsh_1a6b8617-c698-42b3-9ba1-329f44aab8aa/copy/0.log" Feb 17 17:37:01 crc kubenswrapper[4874]: I0217 17:37:01.949770 4874 generic.go:334] "Generic (PLEG): container finished" podID="1a6b8617-c698-42b3-9ba1-329f44aab8aa" containerID="3e1bdc5edd484ef9598bf0b3597bd0f09ad8d37e83c70e7eaaefe02f55717d02" exitCode=143 Feb 17 17:37:02 crc kubenswrapper[4874]: I0217 17:37:02.158430 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tr7wf_must-gather-9kwsh_1a6b8617-c698-42b3-9ba1-329f44aab8aa/copy/0.log" Feb 17 17:37:02 crc kubenswrapper[4874]: I0217 17:37:02.158960 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tr7wf/must-gather-9kwsh" Feb 17 17:37:02 crc kubenswrapper[4874]: I0217 17:37:02.264043 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1a6b8617-c698-42b3-9ba1-329f44aab8aa-must-gather-output\") pod \"1a6b8617-c698-42b3-9ba1-329f44aab8aa\" (UID: \"1a6b8617-c698-42b3-9ba1-329f44aab8aa\") " Feb 17 17:37:02 crc kubenswrapper[4874]: I0217 17:37:02.264414 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59dz7\" (UniqueName: \"kubernetes.io/projected/1a6b8617-c698-42b3-9ba1-329f44aab8aa-kube-api-access-59dz7\") pod \"1a6b8617-c698-42b3-9ba1-329f44aab8aa\" (UID: \"1a6b8617-c698-42b3-9ba1-329f44aab8aa\") " Feb 17 17:37:02 crc kubenswrapper[4874]: I0217 17:37:02.271174 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a6b8617-c698-42b3-9ba1-329f44aab8aa-kube-api-access-59dz7" (OuterVolumeSpecName: "kube-api-access-59dz7") pod 
"1a6b8617-c698-42b3-9ba1-329f44aab8aa" (UID: "1a6b8617-c698-42b3-9ba1-329f44aab8aa"). InnerVolumeSpecName "kube-api-access-59dz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:37:02 crc kubenswrapper[4874]: I0217 17:37:02.370536 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59dz7\" (UniqueName: \"kubernetes.io/projected/1a6b8617-c698-42b3-9ba1-329f44aab8aa-kube-api-access-59dz7\") on node \"crc\" DevicePath \"\"" Feb 17 17:37:02 crc kubenswrapper[4874]: I0217 17:37:02.445500 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a6b8617-c698-42b3-9ba1-329f44aab8aa-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "1a6b8617-c698-42b3-9ba1-329f44aab8aa" (UID: "1a6b8617-c698-42b3-9ba1-329f44aab8aa"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:37:02 crc kubenswrapper[4874]: E0217 17:37:02.459741 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:37:02 crc kubenswrapper[4874]: I0217 17:37:02.471849 4874 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1a6b8617-c698-42b3-9ba1-329f44aab8aa-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 17 17:37:02 crc kubenswrapper[4874]: I0217 17:37:02.473401 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a6b8617-c698-42b3-9ba1-329f44aab8aa" path="/var/lib/kubelet/pods/1a6b8617-c698-42b3-9ba1-329f44aab8aa/volumes" Feb 17 17:37:03 crc kubenswrapper[4874]: I0217 17:37:03.000889 4874 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-tr7wf_must-gather-9kwsh_1a6b8617-c698-42b3-9ba1-329f44aab8aa/copy/0.log" Feb 17 17:37:03 crc kubenswrapper[4874]: I0217 17:37:03.001291 4874 scope.go:117] "RemoveContainer" containerID="3e1bdc5edd484ef9598bf0b3597bd0f09ad8d37e83c70e7eaaefe02f55717d02" Feb 17 17:37:03 crc kubenswrapper[4874]: I0217 17:37:03.001714 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tr7wf/must-gather-9kwsh" Feb 17 17:37:03 crc kubenswrapper[4874]: I0217 17:37:03.041147 4874 scope.go:117] "RemoveContainer" containerID="98e1677a11fed94ec851d62d6afd1c6a1d8036a400b262f62abe3bdbc663c619" Feb 17 17:37:06 crc kubenswrapper[4874]: E0217 17:37:06.460620 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:37:14 crc kubenswrapper[4874]: E0217 17:37:14.459755 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:37:19 crc kubenswrapper[4874]: E0217 17:37:19.459587 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:37:26 crc kubenswrapper[4874]: E0217 17:37:26.460656 4874 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:37:30 crc kubenswrapper[4874]: E0217 17:37:30.467737 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:37:39 crc kubenswrapper[4874]: E0217 17:37:39.459056 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:37:45 crc kubenswrapper[4874]: E0217 17:37:45.459152 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:37:50 crc kubenswrapper[4874]: E0217 17:37:50.469979 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:37:58 crc kubenswrapper[4874]: E0217 17:37:58.460262 4874 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:38:02 crc kubenswrapper[4874]: E0217 17:38:02.459224 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:38:11 crc kubenswrapper[4874]: E0217 17:38:11.459949 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:38:14 crc kubenswrapper[4874]: E0217 17:38:14.459527 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:38:24 crc kubenswrapper[4874]: E0217 17:38:24.460331 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:38:27 crc kubenswrapper[4874]: I0217 17:38:27.724554 4874 
patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:38:27 crc kubenswrapper[4874]: I0217 17:38:27.725100 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:38:28 crc kubenswrapper[4874]: E0217 17:38:28.461137 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:38:35 crc kubenswrapper[4874]: E0217 17:38:35.459859 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:38:43 crc kubenswrapper[4874]: E0217 17:38:43.460063 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:38:46 crc kubenswrapper[4874]: E0217 17:38:46.460777 4874 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:38:57 crc kubenswrapper[4874]: I0217 17:38:57.724994 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:38:57 crc kubenswrapper[4874]: I0217 17:38:57.725442 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:38:58 crc kubenswrapper[4874]: E0217 17:38:58.460214 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:39:02 crc kubenswrapper[4874]: E0217 17:39:01.459475 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:39:13 crc kubenswrapper[4874]: E0217 17:39:13.459583 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:39:13 crc kubenswrapper[4874]: E0217 17:39:13.459833 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:39:24 crc kubenswrapper[4874]: E0217 17:39:24.461233 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:39:25 crc kubenswrapper[4874]: E0217 17:39:25.460007 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:39:27 crc kubenswrapper[4874]: I0217 17:39:27.725110 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:39:27 crc kubenswrapper[4874]: I0217 17:39:27.725570 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" 
podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:39:27 crc kubenswrapper[4874]: I0217 17:39:27.725613 4874 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" Feb 17 17:39:27 crc kubenswrapper[4874]: I0217 17:39:27.726524 4874 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2d3ed24c26aeb10b7df1ae1cc45d2e6b6ed8fdd81200374be7c87aa62624e4c6"} pod="openshift-machine-config-operator/machine-config-daemon-cccdg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 17:39:27 crc kubenswrapper[4874]: I0217 17:39:27.726595 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" containerID="cri-o://2d3ed24c26aeb10b7df1ae1cc45d2e6b6ed8fdd81200374be7c87aa62624e4c6" gracePeriod=600 Feb 17 17:39:28 crc kubenswrapper[4874]: I0217 17:39:28.687782 4874 generic.go:334] "Generic (PLEG): container finished" podID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerID="2d3ed24c26aeb10b7df1ae1cc45d2e6b6ed8fdd81200374be7c87aa62624e4c6" exitCode=0 Feb 17 17:39:28 crc kubenswrapper[4874]: I0217 17:39:28.687846 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerDied","Data":"2d3ed24c26aeb10b7df1ae1cc45d2e6b6ed8fdd81200374be7c87aa62624e4c6"} Feb 17 17:39:28 crc kubenswrapper[4874]: I0217 17:39:28.688161 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-cccdg" event={"ID":"75d87243-c32f-4eb1-9049-24409fc6ea39","Type":"ContainerStarted","Data":"598de6f45c96fc9b429d7d381c4f517c3155816c5bc331cb5f61a43b257bd4a3"} Feb 17 17:39:28 crc kubenswrapper[4874]: I0217 17:39:28.688186 4874 scope.go:117] "RemoveContainer" containerID="3d9d0d53ffed45cc67b402a7bca9a663e3eb985cbaeceb3013558f63c7a901f3" Feb 17 17:39:36 crc kubenswrapper[4874]: E0217 17:39:36.460301 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:39:37 crc kubenswrapper[4874]: E0217 17:39:37.459209 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:39:49 crc kubenswrapper[4874]: E0217 17:39:49.461779 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:39:49 crc kubenswrapper[4874]: E0217 17:39:49.465005 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" 
podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:40:01 crc kubenswrapper[4874]: I0217 17:40:01.215375 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cxn82"] Feb 17 17:40:01 crc kubenswrapper[4874]: E0217 17:40:01.216383 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a6b8617-c698-42b3-9ba1-329f44aab8aa" containerName="copy" Feb 17 17:40:01 crc kubenswrapper[4874]: I0217 17:40:01.216397 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a6b8617-c698-42b3-9ba1-329f44aab8aa" containerName="copy" Feb 17 17:40:01 crc kubenswrapper[4874]: E0217 17:40:01.216416 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3574bd58-50c8-45d4-b420-a1a0340d6e85" containerName="extract-content" Feb 17 17:40:01 crc kubenswrapper[4874]: I0217 17:40:01.216422 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="3574bd58-50c8-45d4-b420-a1a0340d6e85" containerName="extract-content" Feb 17 17:40:01 crc kubenswrapper[4874]: E0217 17:40:01.216445 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3574bd58-50c8-45d4-b420-a1a0340d6e85" containerName="extract-utilities" Feb 17 17:40:01 crc kubenswrapper[4874]: I0217 17:40:01.216452 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="3574bd58-50c8-45d4-b420-a1a0340d6e85" containerName="extract-utilities" Feb 17 17:40:01 crc kubenswrapper[4874]: E0217 17:40:01.216495 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a6b8617-c698-42b3-9ba1-329f44aab8aa" containerName="gather" Feb 17 17:40:01 crc kubenswrapper[4874]: I0217 17:40:01.216504 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a6b8617-c698-42b3-9ba1-329f44aab8aa" containerName="gather" Feb 17 17:40:01 crc kubenswrapper[4874]: E0217 17:40:01.216516 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3574bd58-50c8-45d4-b420-a1a0340d6e85" containerName="registry-server" Feb 17 17:40:01 crc 
kubenswrapper[4874]: I0217 17:40:01.216524 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="3574bd58-50c8-45d4-b420-a1a0340d6e85" containerName="registry-server" Feb 17 17:40:01 crc kubenswrapper[4874]: I0217 17:40:01.216745 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="3574bd58-50c8-45d4-b420-a1a0340d6e85" containerName="registry-server" Feb 17 17:40:01 crc kubenswrapper[4874]: I0217 17:40:01.216759 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a6b8617-c698-42b3-9ba1-329f44aab8aa" containerName="gather" Feb 17 17:40:01 crc kubenswrapper[4874]: I0217 17:40:01.216768 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a6b8617-c698-42b3-9ba1-329f44aab8aa" containerName="copy" Feb 17 17:40:01 crc kubenswrapper[4874]: I0217 17:40:01.218377 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cxn82" Feb 17 17:40:01 crc kubenswrapper[4874]: I0217 17:40:01.247539 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cxn82"] Feb 17 17:40:01 crc kubenswrapper[4874]: I0217 17:40:01.284553 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhncf\" (UniqueName: \"kubernetes.io/projected/79e40f47-e740-4de6-8395-f0851626ae63-kube-api-access-jhncf\") pod \"redhat-operators-cxn82\" (UID: \"79e40f47-e740-4de6-8395-f0851626ae63\") " pod="openshift-marketplace/redhat-operators-cxn82" Feb 17 17:40:01 crc kubenswrapper[4874]: I0217 17:40:01.285864 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e40f47-e740-4de6-8395-f0851626ae63-utilities\") pod \"redhat-operators-cxn82\" (UID: \"79e40f47-e740-4de6-8395-f0851626ae63\") " pod="openshift-marketplace/redhat-operators-cxn82" Feb 17 17:40:01 crc kubenswrapper[4874]: I0217 
17:40:01.286040 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79e40f47-e740-4de6-8395-f0851626ae63-catalog-content\") pod \"redhat-operators-cxn82\" (UID: \"79e40f47-e740-4de6-8395-f0851626ae63\") " pod="openshift-marketplace/redhat-operators-cxn82" Feb 17 17:40:01 crc kubenswrapper[4874]: I0217 17:40:01.388504 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79e40f47-e740-4de6-8395-f0851626ae63-catalog-content\") pod \"redhat-operators-cxn82\" (UID: \"79e40f47-e740-4de6-8395-f0851626ae63\") " pod="openshift-marketplace/redhat-operators-cxn82" Feb 17 17:40:01 crc kubenswrapper[4874]: I0217 17:40:01.389339 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhncf\" (UniqueName: \"kubernetes.io/projected/79e40f47-e740-4de6-8395-f0851626ae63-kube-api-access-jhncf\") pod \"redhat-operators-cxn82\" (UID: \"79e40f47-e740-4de6-8395-f0851626ae63\") " pod="openshift-marketplace/redhat-operators-cxn82" Feb 17 17:40:01 crc kubenswrapper[4874]: I0217 17:40:01.390174 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e40f47-e740-4de6-8395-f0851626ae63-utilities\") pod \"redhat-operators-cxn82\" (UID: \"79e40f47-e740-4de6-8395-f0851626ae63\") " pod="openshift-marketplace/redhat-operators-cxn82" Feb 17 17:40:01 crc kubenswrapper[4874]: I0217 17:40:01.389138 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79e40f47-e740-4de6-8395-f0851626ae63-catalog-content\") pod \"redhat-operators-cxn82\" (UID: \"79e40f47-e740-4de6-8395-f0851626ae63\") " pod="openshift-marketplace/redhat-operators-cxn82" Feb 17 17:40:01 crc kubenswrapper[4874]: I0217 17:40:01.390469 4874 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e40f47-e740-4de6-8395-f0851626ae63-utilities\") pod \"redhat-operators-cxn82\" (UID: \"79e40f47-e740-4de6-8395-f0851626ae63\") " pod="openshift-marketplace/redhat-operators-cxn82" Feb 17 17:40:01 crc kubenswrapper[4874]: I0217 17:40:01.418166 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhncf\" (UniqueName: \"kubernetes.io/projected/79e40f47-e740-4de6-8395-f0851626ae63-kube-api-access-jhncf\") pod \"redhat-operators-cxn82\" (UID: \"79e40f47-e740-4de6-8395-f0851626ae63\") " pod="openshift-marketplace/redhat-operators-cxn82" Feb 17 17:40:01 crc kubenswrapper[4874]: E0217 17:40:01.461557 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:40:01 crc kubenswrapper[4874]: I0217 17:40:01.551133 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cxn82" Feb 17 17:40:02 crc kubenswrapper[4874]: I0217 17:40:02.047779 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cxn82"] Feb 17 17:40:02 crc kubenswrapper[4874]: I0217 17:40:02.074733 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cxn82" event={"ID":"79e40f47-e740-4de6-8395-f0851626ae63","Type":"ContainerStarted","Data":"4b5f6226a4120954cb7c0e1e6e80d3cccd63909ca42b0d132b37058763ed5aa1"} Feb 17 17:40:03 crc kubenswrapper[4874]: I0217 17:40:03.094153 4874 generic.go:334] "Generic (PLEG): container finished" podID="79e40f47-e740-4de6-8395-f0851626ae63" containerID="e1a27b90253be035774e24ea207e4857b19caa1720857b661abcc112d027a813" exitCode=0 Feb 17 17:40:03 crc kubenswrapper[4874]: I0217 17:40:03.094456 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cxn82" event={"ID":"79e40f47-e740-4de6-8395-f0851626ae63","Type":"ContainerDied","Data":"e1a27b90253be035774e24ea207e4857b19caa1720857b661abcc112d027a813"} Feb 17 17:40:03 crc kubenswrapper[4874]: E0217 17:40:03.458643 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:40:04 crc kubenswrapper[4874]: I0217 17:40:04.105686 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cxn82" event={"ID":"79e40f47-e740-4de6-8395-f0851626ae63","Type":"ContainerStarted","Data":"394fb2e9081f9fb7d1ea8791805a742fcac258fd289c44850e0c0a84184ffd20"} Feb 17 17:40:09 crc kubenswrapper[4874]: I0217 17:40:09.153209 4874 generic.go:334] "Generic (PLEG): container finished" 
podID="79e40f47-e740-4de6-8395-f0851626ae63" containerID="394fb2e9081f9fb7d1ea8791805a742fcac258fd289c44850e0c0a84184ffd20" exitCode=0 Feb 17 17:40:09 crc kubenswrapper[4874]: I0217 17:40:09.153305 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cxn82" event={"ID":"79e40f47-e740-4de6-8395-f0851626ae63","Type":"ContainerDied","Data":"394fb2e9081f9fb7d1ea8791805a742fcac258fd289c44850e0c0a84184ffd20"} Feb 17 17:40:09 crc kubenswrapper[4874]: I0217 17:40:09.156849 4874 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 17:40:10 crc kubenswrapper[4874]: I0217 17:40:10.167487 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cxn82" event={"ID":"79e40f47-e740-4de6-8395-f0851626ae63","Type":"ContainerStarted","Data":"d3d406d7350952ff353ddcfbd41a02d0cec29189f12c38bfa3e9a1550b695d6d"} Feb 17 17:40:10 crc kubenswrapper[4874]: I0217 17:40:10.197939 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cxn82" podStartSLOduration=2.503466106 podStartE2EDuration="9.197913143s" podCreationTimestamp="2026-02-17 17:40:01 +0000 UTC" firstStartedPulling="2026-02-17 17:40:03.096830896 +0000 UTC m=+5813.391219457" lastFinishedPulling="2026-02-17 17:40:09.791277923 +0000 UTC m=+5820.085666494" observedRunningTime="2026-02-17 17:40:10.187919776 +0000 UTC m=+5820.482308347" watchObservedRunningTime="2026-02-17 17:40:10.197913143 +0000 UTC m=+5820.492301724" Feb 17 17:40:11 crc kubenswrapper[4874]: I0217 17:40:11.552320 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cxn82" Feb 17 17:40:11 crc kubenswrapper[4874]: I0217 17:40:11.552372 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cxn82" Feb 17 17:40:12 crc kubenswrapper[4874]: I0217 
17:40:12.613947 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cxn82" podUID="79e40f47-e740-4de6-8395-f0851626ae63" containerName="registry-server" probeResult="failure" output=< Feb 17 17:40:12 crc kubenswrapper[4874]: timeout: failed to connect service ":50051" within 1s Feb 17 17:40:12 crc kubenswrapper[4874]: > Feb 17 17:40:13 crc kubenswrapper[4874]: E0217 17:40:13.641164 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:40:13 crc kubenswrapper[4874]: E0217 17:40:13.641326 4874 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:40:13 crc kubenswrapper[4874]: E0217 17:40:13.641499 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n699h646hf7h59dhdch5d8h679h9dhdch5c7hd5h5bch655h5bfh674h596h5d6h64dh65bh694h67fh66dh5bdhd7h568h697h58bh5b4h59fh694h584h656q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6zkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(cc29c300-b515-47d8-9326-1839ed7772b4): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:40:13 crc kubenswrapper[4874]: E0217 17:40:13.642933 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:40:17 crc kubenswrapper[4874]: E0217 17:40:17.462253 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:40:21 crc kubenswrapper[4874]: I0217 17:40:21.606552 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cxn82" Feb 17 17:40:21 crc kubenswrapper[4874]: I0217 17:40:21.684214 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cxn82" Feb 17 17:40:21 crc kubenswrapper[4874]: I0217 17:40:21.865803 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cxn82"] Feb 17 17:40:23 crc kubenswrapper[4874]: I0217 17:40:23.319283 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cxn82" podUID="79e40f47-e740-4de6-8395-f0851626ae63" containerName="registry-server" containerID="cri-o://d3d406d7350952ff353ddcfbd41a02d0cec29189f12c38bfa3e9a1550b695d6d" gracePeriod=2 Feb 17 17:40:23 crc kubenswrapper[4874]: I0217 17:40:23.947487 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cxn82" Feb 17 17:40:24 crc kubenswrapper[4874]: I0217 17:40:24.048414 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e40f47-e740-4de6-8395-f0851626ae63-utilities\") pod \"79e40f47-e740-4de6-8395-f0851626ae63\" (UID: \"79e40f47-e740-4de6-8395-f0851626ae63\") " Feb 17 17:40:24 crc kubenswrapper[4874]: I0217 17:40:24.048985 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhncf\" (UniqueName: \"kubernetes.io/projected/79e40f47-e740-4de6-8395-f0851626ae63-kube-api-access-jhncf\") pod \"79e40f47-e740-4de6-8395-f0851626ae63\" (UID: \"79e40f47-e740-4de6-8395-f0851626ae63\") " Feb 17 17:40:24 crc kubenswrapper[4874]: I0217 17:40:24.049061 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79e40f47-e740-4de6-8395-f0851626ae63-catalog-content\") pod \"79e40f47-e740-4de6-8395-f0851626ae63\" (UID: \"79e40f47-e740-4de6-8395-f0851626ae63\") " Feb 17 17:40:24 crc kubenswrapper[4874]: I0217 17:40:24.049336 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79e40f47-e740-4de6-8395-f0851626ae63-utilities" (OuterVolumeSpecName: "utilities") pod "79e40f47-e740-4de6-8395-f0851626ae63" (UID: "79e40f47-e740-4de6-8395-f0851626ae63"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:40:24 crc kubenswrapper[4874]: I0217 17:40:24.049727 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e40f47-e740-4de6-8395-f0851626ae63-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:40:24 crc kubenswrapper[4874]: I0217 17:40:24.055825 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79e40f47-e740-4de6-8395-f0851626ae63-kube-api-access-jhncf" (OuterVolumeSpecName: "kube-api-access-jhncf") pod "79e40f47-e740-4de6-8395-f0851626ae63" (UID: "79e40f47-e740-4de6-8395-f0851626ae63"). InnerVolumeSpecName "kube-api-access-jhncf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:40:24 crc kubenswrapper[4874]: I0217 17:40:24.151504 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhncf\" (UniqueName: \"kubernetes.io/projected/79e40f47-e740-4de6-8395-f0851626ae63-kube-api-access-jhncf\") on node \"crc\" DevicePath \"\"" Feb 17 17:40:24 crc kubenswrapper[4874]: I0217 17:40:24.179053 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79e40f47-e740-4de6-8395-f0851626ae63-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "79e40f47-e740-4de6-8395-f0851626ae63" (UID: "79e40f47-e740-4de6-8395-f0851626ae63"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:40:24 crc kubenswrapper[4874]: I0217 17:40:24.255772 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79e40f47-e740-4de6-8395-f0851626ae63-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:40:24 crc kubenswrapper[4874]: I0217 17:40:24.333833 4874 generic.go:334] "Generic (PLEG): container finished" podID="79e40f47-e740-4de6-8395-f0851626ae63" containerID="d3d406d7350952ff353ddcfbd41a02d0cec29189f12c38bfa3e9a1550b695d6d" exitCode=0 Feb 17 17:40:24 crc kubenswrapper[4874]: I0217 17:40:24.333897 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cxn82" event={"ID":"79e40f47-e740-4de6-8395-f0851626ae63","Type":"ContainerDied","Data":"d3d406d7350952ff353ddcfbd41a02d0cec29189f12c38bfa3e9a1550b695d6d"} Feb 17 17:40:24 crc kubenswrapper[4874]: I0217 17:40:24.333932 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cxn82" event={"ID":"79e40f47-e740-4de6-8395-f0851626ae63","Type":"ContainerDied","Data":"4b5f6226a4120954cb7c0e1e6e80d3cccd63909ca42b0d132b37058763ed5aa1"} Feb 17 17:40:24 crc kubenswrapper[4874]: I0217 17:40:24.333928 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cxn82" Feb 17 17:40:24 crc kubenswrapper[4874]: I0217 17:40:24.333958 4874 scope.go:117] "RemoveContainer" containerID="d3d406d7350952ff353ddcfbd41a02d0cec29189f12c38bfa3e9a1550b695d6d" Feb 17 17:40:24 crc kubenswrapper[4874]: I0217 17:40:24.356392 4874 scope.go:117] "RemoveContainer" containerID="394fb2e9081f9fb7d1ea8791805a742fcac258fd289c44850e0c0a84184ffd20" Feb 17 17:40:24 crc kubenswrapper[4874]: I0217 17:40:24.377414 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cxn82"] Feb 17 17:40:24 crc kubenswrapper[4874]: I0217 17:40:24.388462 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cxn82"] Feb 17 17:40:24 crc kubenswrapper[4874]: I0217 17:40:24.400306 4874 scope.go:117] "RemoveContainer" containerID="e1a27b90253be035774e24ea207e4857b19caa1720857b661abcc112d027a813" Feb 17 17:40:24 crc kubenswrapper[4874]: I0217 17:40:24.454307 4874 scope.go:117] "RemoveContainer" containerID="d3d406d7350952ff353ddcfbd41a02d0cec29189f12c38bfa3e9a1550b695d6d" Feb 17 17:40:24 crc kubenswrapper[4874]: E0217 17:40:24.454796 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3d406d7350952ff353ddcfbd41a02d0cec29189f12c38bfa3e9a1550b695d6d\": container with ID starting with d3d406d7350952ff353ddcfbd41a02d0cec29189f12c38bfa3e9a1550b695d6d not found: ID does not exist" containerID="d3d406d7350952ff353ddcfbd41a02d0cec29189f12c38bfa3e9a1550b695d6d" Feb 17 17:40:24 crc kubenswrapper[4874]: I0217 17:40:24.454833 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3d406d7350952ff353ddcfbd41a02d0cec29189f12c38bfa3e9a1550b695d6d"} err="failed to get container status \"d3d406d7350952ff353ddcfbd41a02d0cec29189f12c38bfa3e9a1550b695d6d\": rpc error: code = NotFound desc = could not find container 
\"d3d406d7350952ff353ddcfbd41a02d0cec29189f12c38bfa3e9a1550b695d6d\": container with ID starting with d3d406d7350952ff353ddcfbd41a02d0cec29189f12c38bfa3e9a1550b695d6d not found: ID does not exist" Feb 17 17:40:24 crc kubenswrapper[4874]: I0217 17:40:24.454860 4874 scope.go:117] "RemoveContainer" containerID="394fb2e9081f9fb7d1ea8791805a742fcac258fd289c44850e0c0a84184ffd20" Feb 17 17:40:24 crc kubenswrapper[4874]: E0217 17:40:24.455275 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"394fb2e9081f9fb7d1ea8791805a742fcac258fd289c44850e0c0a84184ffd20\": container with ID starting with 394fb2e9081f9fb7d1ea8791805a742fcac258fd289c44850e0c0a84184ffd20 not found: ID does not exist" containerID="394fb2e9081f9fb7d1ea8791805a742fcac258fd289c44850e0c0a84184ffd20" Feb 17 17:40:24 crc kubenswrapper[4874]: I0217 17:40:24.455319 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"394fb2e9081f9fb7d1ea8791805a742fcac258fd289c44850e0c0a84184ffd20"} err="failed to get container status \"394fb2e9081f9fb7d1ea8791805a742fcac258fd289c44850e0c0a84184ffd20\": rpc error: code = NotFound desc = could not find container \"394fb2e9081f9fb7d1ea8791805a742fcac258fd289c44850e0c0a84184ffd20\": container with ID starting with 394fb2e9081f9fb7d1ea8791805a742fcac258fd289c44850e0c0a84184ffd20 not found: ID does not exist" Feb 17 17:40:24 crc kubenswrapper[4874]: I0217 17:40:24.455349 4874 scope.go:117] "RemoveContainer" containerID="e1a27b90253be035774e24ea207e4857b19caa1720857b661abcc112d027a813" Feb 17 17:40:24 crc kubenswrapper[4874]: E0217 17:40:24.455633 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1a27b90253be035774e24ea207e4857b19caa1720857b661abcc112d027a813\": container with ID starting with e1a27b90253be035774e24ea207e4857b19caa1720857b661abcc112d027a813 not found: ID does not exist" 
containerID="e1a27b90253be035774e24ea207e4857b19caa1720857b661abcc112d027a813" Feb 17 17:40:24 crc kubenswrapper[4874]: I0217 17:40:24.455665 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1a27b90253be035774e24ea207e4857b19caa1720857b661abcc112d027a813"} err="failed to get container status \"e1a27b90253be035774e24ea207e4857b19caa1720857b661abcc112d027a813\": rpc error: code = NotFound desc = could not find container \"e1a27b90253be035774e24ea207e4857b19caa1720857b661abcc112d027a813\": container with ID starting with e1a27b90253be035774e24ea207e4857b19caa1720857b661abcc112d027a813 not found: ID does not exist" Feb 17 17:40:24 crc kubenswrapper[4874]: I0217 17:40:24.473842 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79e40f47-e740-4de6-8395-f0851626ae63" path="/var/lib/kubelet/pods/79e40f47-e740-4de6-8395-f0851626ae63/volumes" Feb 17 17:40:25 crc kubenswrapper[4874]: E0217 17:40:25.463340 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:40:28 crc kubenswrapper[4874]: E0217 17:40:28.459830 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:40:40 crc kubenswrapper[4874]: E0217 17:40:40.469195 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:40:42 crc kubenswrapper[4874]: E0217 17:40:42.593313 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 17:40:42 crc kubenswrapper[4874]: E0217 17:40:42.593650 4874 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 17:40:42 crc kubenswrapper[4874]: E0217 17:40:42.593800 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtgnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-ddhb8_openstack(122736d5-78f5-42dc-b6ab-343724bac19d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:40:42 crc kubenswrapper[4874]: E0217 17:40:42.595042 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:40:49 crc kubenswrapper[4874]: I0217 17:40:49.792402 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l6kpc"] Feb 17 17:40:49 crc kubenswrapper[4874]: E0217 17:40:49.793549 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e40f47-e740-4de6-8395-f0851626ae63" containerName="extract-utilities" Feb 17 17:40:49 crc kubenswrapper[4874]: I0217 17:40:49.793567 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e40f47-e740-4de6-8395-f0851626ae63" containerName="extract-utilities" Feb 17 17:40:49 crc kubenswrapper[4874]: E0217 17:40:49.793627 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e40f47-e740-4de6-8395-f0851626ae63" containerName="extract-content" Feb 17 17:40:49 crc kubenswrapper[4874]: I0217 17:40:49.793635 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e40f47-e740-4de6-8395-f0851626ae63" containerName="extract-content" Feb 17 17:40:49 crc kubenswrapper[4874]: E0217 17:40:49.793648 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e40f47-e740-4de6-8395-f0851626ae63" containerName="registry-server" Feb 17 17:40:49 crc kubenswrapper[4874]: I0217 17:40:49.793655 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e40f47-e740-4de6-8395-f0851626ae63" containerName="registry-server" Feb 17 17:40:49 crc kubenswrapper[4874]: I0217 17:40:49.793919 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="79e40f47-e740-4de6-8395-f0851626ae63" containerName="registry-server" Feb 17 17:40:49 crc kubenswrapper[4874]: I0217 17:40:49.796172 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l6kpc" Feb 17 17:40:49 crc kubenswrapper[4874]: I0217 17:40:49.811303 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l6kpc"] Feb 17 17:40:49 crc kubenswrapper[4874]: I0217 17:40:49.937697 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8v7d\" (UniqueName: \"kubernetes.io/projected/584cad6b-ec14-4692-bf03-abfaef78adb1-kube-api-access-k8v7d\") pod \"redhat-marketplace-l6kpc\" (UID: \"584cad6b-ec14-4692-bf03-abfaef78adb1\") " pod="openshift-marketplace/redhat-marketplace-l6kpc" Feb 17 17:40:49 crc kubenswrapper[4874]: I0217 17:40:49.938063 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584cad6b-ec14-4692-bf03-abfaef78adb1-catalog-content\") pod \"redhat-marketplace-l6kpc\" (UID: \"584cad6b-ec14-4692-bf03-abfaef78adb1\") " pod="openshift-marketplace/redhat-marketplace-l6kpc" Feb 17 17:40:49 crc kubenswrapper[4874]: I0217 17:40:49.938233 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584cad6b-ec14-4692-bf03-abfaef78adb1-utilities\") pod \"redhat-marketplace-l6kpc\" (UID: \"584cad6b-ec14-4692-bf03-abfaef78adb1\") " pod="openshift-marketplace/redhat-marketplace-l6kpc" Feb 17 17:40:50 crc kubenswrapper[4874]: I0217 17:40:50.040877 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584cad6b-ec14-4692-bf03-abfaef78adb1-utilities\") pod \"redhat-marketplace-l6kpc\" (UID: \"584cad6b-ec14-4692-bf03-abfaef78adb1\") " pod="openshift-marketplace/redhat-marketplace-l6kpc" Feb 17 17:40:50 crc kubenswrapper[4874]: I0217 17:40:50.041028 4874 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-k8v7d\" (UniqueName: \"kubernetes.io/projected/584cad6b-ec14-4692-bf03-abfaef78adb1-kube-api-access-k8v7d\") pod \"redhat-marketplace-l6kpc\" (UID: \"584cad6b-ec14-4692-bf03-abfaef78adb1\") " pod="openshift-marketplace/redhat-marketplace-l6kpc" Feb 17 17:40:50 crc kubenswrapper[4874]: I0217 17:40:50.041060 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584cad6b-ec14-4692-bf03-abfaef78adb1-catalog-content\") pod \"redhat-marketplace-l6kpc\" (UID: \"584cad6b-ec14-4692-bf03-abfaef78adb1\") " pod="openshift-marketplace/redhat-marketplace-l6kpc" Feb 17 17:40:50 crc kubenswrapper[4874]: I0217 17:40:50.041525 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584cad6b-ec14-4692-bf03-abfaef78adb1-catalog-content\") pod \"redhat-marketplace-l6kpc\" (UID: \"584cad6b-ec14-4692-bf03-abfaef78adb1\") " pod="openshift-marketplace/redhat-marketplace-l6kpc" Feb 17 17:40:50 crc kubenswrapper[4874]: I0217 17:40:50.041740 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584cad6b-ec14-4692-bf03-abfaef78adb1-utilities\") pod \"redhat-marketplace-l6kpc\" (UID: \"584cad6b-ec14-4692-bf03-abfaef78adb1\") " pod="openshift-marketplace/redhat-marketplace-l6kpc" Feb 17 17:40:50 crc kubenswrapper[4874]: I0217 17:40:50.074054 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8v7d\" (UniqueName: \"kubernetes.io/projected/584cad6b-ec14-4692-bf03-abfaef78adb1-kube-api-access-k8v7d\") pod \"redhat-marketplace-l6kpc\" (UID: \"584cad6b-ec14-4692-bf03-abfaef78adb1\") " pod="openshift-marketplace/redhat-marketplace-l6kpc" Feb 17 17:40:50 crc kubenswrapper[4874]: I0217 17:40:50.122062 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l6kpc" Feb 17 17:40:50 crc kubenswrapper[4874]: I0217 17:40:50.711463 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l6kpc"] Feb 17 17:40:51 crc kubenswrapper[4874]: I0217 17:40:51.636365 4874 generic.go:334] "Generic (PLEG): container finished" podID="584cad6b-ec14-4692-bf03-abfaef78adb1" containerID="1390b72b3f07144cd5ab4c1903845e876dc54797d03d696c9a512371083f04a0" exitCode=0 Feb 17 17:40:51 crc kubenswrapper[4874]: I0217 17:40:51.636561 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l6kpc" event={"ID":"584cad6b-ec14-4692-bf03-abfaef78adb1","Type":"ContainerDied","Data":"1390b72b3f07144cd5ab4c1903845e876dc54797d03d696c9a512371083f04a0"} Feb 17 17:40:51 crc kubenswrapper[4874]: I0217 17:40:51.636658 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l6kpc" event={"ID":"584cad6b-ec14-4692-bf03-abfaef78adb1","Type":"ContainerStarted","Data":"8cee96b3581fd41565e3ab73dd811ed7df494af69970d8cf373fcf05dea5ce11"} Feb 17 17:40:52 crc kubenswrapper[4874]: I0217 17:40:52.648448 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l6kpc" event={"ID":"584cad6b-ec14-4692-bf03-abfaef78adb1","Type":"ContainerStarted","Data":"d805e0ef8a29c62163368313f177d6669acbdce257db01e7674f32988cf9c02c"} Feb 17 17:40:53 crc kubenswrapper[4874]: E0217 17:40:53.460390 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:40:53 crc kubenswrapper[4874]: I0217 17:40:53.660488 4874 generic.go:334] "Generic (PLEG): container 
finished" podID="584cad6b-ec14-4692-bf03-abfaef78adb1" containerID="d805e0ef8a29c62163368313f177d6669acbdce257db01e7674f32988cf9c02c" exitCode=0 Feb 17 17:40:53 crc kubenswrapper[4874]: I0217 17:40:53.660665 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l6kpc" event={"ID":"584cad6b-ec14-4692-bf03-abfaef78adb1","Type":"ContainerDied","Data":"d805e0ef8a29c62163368313f177d6669acbdce257db01e7674f32988cf9c02c"} Feb 17 17:40:54 crc kubenswrapper[4874]: E0217 17:40:54.459990 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:40:54 crc kubenswrapper[4874]: I0217 17:40:54.674142 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l6kpc" event={"ID":"584cad6b-ec14-4692-bf03-abfaef78adb1","Type":"ContainerStarted","Data":"10f7cada2cf1d6e1693a134585c4943eff62186d8a2ad57b46e7ba970bbcfaa5"} Feb 17 17:40:54 crc kubenswrapper[4874]: I0217 17:40:54.707978 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l6kpc" podStartSLOduration=3.298825747 podStartE2EDuration="5.707957407s" podCreationTimestamp="2026-02-17 17:40:49 +0000 UTC" firstStartedPulling="2026-02-17 17:40:51.639210009 +0000 UTC m=+5861.933598610" lastFinishedPulling="2026-02-17 17:40:54.048341699 +0000 UTC m=+5864.342730270" observedRunningTime="2026-02-17 17:40:54.696063663 +0000 UTC m=+5864.990452224" watchObservedRunningTime="2026-02-17 17:40:54.707957407 +0000 UTC m=+5865.002345968" Feb 17 17:41:00 crc kubenswrapper[4874]: I0217 17:41:00.123118 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-l6kpc" Feb 17 17:41:00 crc kubenswrapper[4874]: I0217 17:41:00.123553 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l6kpc" Feb 17 17:41:00 crc kubenswrapper[4874]: I0217 17:41:00.182974 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l6kpc" Feb 17 17:41:00 crc kubenswrapper[4874]: I0217 17:41:00.808090 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l6kpc" Feb 17 17:41:00 crc kubenswrapper[4874]: I0217 17:41:00.879589 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l6kpc"] Feb 17 17:41:02 crc kubenswrapper[4874]: I0217 17:41:02.769242 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l6kpc" podUID="584cad6b-ec14-4692-bf03-abfaef78adb1" containerName="registry-server" containerID="cri-o://10f7cada2cf1d6e1693a134585c4943eff62186d8a2ad57b46e7ba970bbcfaa5" gracePeriod=2 Feb 17 17:41:03 crc kubenswrapper[4874]: I0217 17:41:03.340087 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l6kpc" Feb 17 17:41:03 crc kubenswrapper[4874]: I0217 17:41:03.538282 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8v7d\" (UniqueName: \"kubernetes.io/projected/584cad6b-ec14-4692-bf03-abfaef78adb1-kube-api-access-k8v7d\") pod \"584cad6b-ec14-4692-bf03-abfaef78adb1\" (UID: \"584cad6b-ec14-4692-bf03-abfaef78adb1\") " Feb 17 17:41:03 crc kubenswrapper[4874]: I0217 17:41:03.538471 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584cad6b-ec14-4692-bf03-abfaef78adb1-utilities\") pod \"584cad6b-ec14-4692-bf03-abfaef78adb1\" (UID: \"584cad6b-ec14-4692-bf03-abfaef78adb1\") " Feb 17 17:41:03 crc kubenswrapper[4874]: I0217 17:41:03.538586 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584cad6b-ec14-4692-bf03-abfaef78adb1-catalog-content\") pod \"584cad6b-ec14-4692-bf03-abfaef78adb1\" (UID: \"584cad6b-ec14-4692-bf03-abfaef78adb1\") " Feb 17 17:41:03 crc kubenswrapper[4874]: I0217 17:41:03.539769 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584cad6b-ec14-4692-bf03-abfaef78adb1-utilities" (OuterVolumeSpecName: "utilities") pod "584cad6b-ec14-4692-bf03-abfaef78adb1" (UID: "584cad6b-ec14-4692-bf03-abfaef78adb1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:41:03 crc kubenswrapper[4874]: I0217 17:41:03.555209 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584cad6b-ec14-4692-bf03-abfaef78adb1-kube-api-access-k8v7d" (OuterVolumeSpecName: "kube-api-access-k8v7d") pod "584cad6b-ec14-4692-bf03-abfaef78adb1" (UID: "584cad6b-ec14-4692-bf03-abfaef78adb1"). InnerVolumeSpecName "kube-api-access-k8v7d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:41:03 crc kubenswrapper[4874]: I0217 17:41:03.589961 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584cad6b-ec14-4692-bf03-abfaef78adb1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584cad6b-ec14-4692-bf03-abfaef78adb1" (UID: "584cad6b-ec14-4692-bf03-abfaef78adb1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:41:03 crc kubenswrapper[4874]: I0217 17:41:03.644649 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584cad6b-ec14-4692-bf03-abfaef78adb1-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:41:03 crc kubenswrapper[4874]: I0217 17:41:03.644725 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584cad6b-ec14-4692-bf03-abfaef78adb1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:41:03 crc kubenswrapper[4874]: I0217 17:41:03.644751 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8v7d\" (UniqueName: \"kubernetes.io/projected/584cad6b-ec14-4692-bf03-abfaef78adb1-kube-api-access-k8v7d\") on node \"crc\" DevicePath \"\"" Feb 17 17:41:03 crc kubenswrapper[4874]: I0217 17:41:03.779708 4874 generic.go:334] "Generic (PLEG): container finished" podID="584cad6b-ec14-4692-bf03-abfaef78adb1" containerID="10f7cada2cf1d6e1693a134585c4943eff62186d8a2ad57b46e7ba970bbcfaa5" exitCode=0 Feb 17 17:41:03 crc kubenswrapper[4874]: I0217 17:41:03.779761 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l6kpc" Feb 17 17:41:03 crc kubenswrapper[4874]: I0217 17:41:03.779759 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l6kpc" event={"ID":"584cad6b-ec14-4692-bf03-abfaef78adb1","Type":"ContainerDied","Data":"10f7cada2cf1d6e1693a134585c4943eff62186d8a2ad57b46e7ba970bbcfaa5"} Feb 17 17:41:03 crc kubenswrapper[4874]: I0217 17:41:03.779936 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l6kpc" event={"ID":"584cad6b-ec14-4692-bf03-abfaef78adb1","Type":"ContainerDied","Data":"8cee96b3581fd41565e3ab73dd811ed7df494af69970d8cf373fcf05dea5ce11"} Feb 17 17:41:03 crc kubenswrapper[4874]: I0217 17:41:03.779961 4874 scope.go:117] "RemoveContainer" containerID="10f7cada2cf1d6e1693a134585c4943eff62186d8a2ad57b46e7ba970bbcfaa5" Feb 17 17:41:03 crc kubenswrapper[4874]: I0217 17:41:03.805382 4874 scope.go:117] "RemoveContainer" containerID="d805e0ef8a29c62163368313f177d6669acbdce257db01e7674f32988cf9c02c" Feb 17 17:41:03 crc kubenswrapper[4874]: I0217 17:41:03.839757 4874 scope.go:117] "RemoveContainer" containerID="1390b72b3f07144cd5ab4c1903845e876dc54797d03d696c9a512371083f04a0" Feb 17 17:41:03 crc kubenswrapper[4874]: I0217 17:41:03.878701 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l6kpc"] Feb 17 17:41:03 crc kubenswrapper[4874]: I0217 17:41:03.903268 4874 scope.go:117] "RemoveContainer" containerID="10f7cada2cf1d6e1693a134585c4943eff62186d8a2ad57b46e7ba970bbcfaa5" Feb 17 17:41:03 crc kubenswrapper[4874]: E0217 17:41:03.903832 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10f7cada2cf1d6e1693a134585c4943eff62186d8a2ad57b46e7ba970bbcfaa5\": container with ID starting with 10f7cada2cf1d6e1693a134585c4943eff62186d8a2ad57b46e7ba970bbcfaa5 not found: ID does not exist" 
containerID="10f7cada2cf1d6e1693a134585c4943eff62186d8a2ad57b46e7ba970bbcfaa5" Feb 17 17:41:03 crc kubenswrapper[4874]: I0217 17:41:03.903877 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10f7cada2cf1d6e1693a134585c4943eff62186d8a2ad57b46e7ba970bbcfaa5"} err="failed to get container status \"10f7cada2cf1d6e1693a134585c4943eff62186d8a2ad57b46e7ba970bbcfaa5\": rpc error: code = NotFound desc = could not find container \"10f7cada2cf1d6e1693a134585c4943eff62186d8a2ad57b46e7ba970bbcfaa5\": container with ID starting with 10f7cada2cf1d6e1693a134585c4943eff62186d8a2ad57b46e7ba970bbcfaa5 not found: ID does not exist" Feb 17 17:41:03 crc kubenswrapper[4874]: I0217 17:41:03.903905 4874 scope.go:117] "RemoveContainer" containerID="d805e0ef8a29c62163368313f177d6669acbdce257db01e7674f32988cf9c02c" Feb 17 17:41:03 crc kubenswrapper[4874]: E0217 17:41:03.904229 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d805e0ef8a29c62163368313f177d6669acbdce257db01e7674f32988cf9c02c\": container with ID starting with d805e0ef8a29c62163368313f177d6669acbdce257db01e7674f32988cf9c02c not found: ID does not exist" containerID="d805e0ef8a29c62163368313f177d6669acbdce257db01e7674f32988cf9c02c" Feb 17 17:41:03 crc kubenswrapper[4874]: I0217 17:41:03.904262 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d805e0ef8a29c62163368313f177d6669acbdce257db01e7674f32988cf9c02c"} err="failed to get container status \"d805e0ef8a29c62163368313f177d6669acbdce257db01e7674f32988cf9c02c\": rpc error: code = NotFound desc = could not find container \"d805e0ef8a29c62163368313f177d6669acbdce257db01e7674f32988cf9c02c\": container with ID starting with d805e0ef8a29c62163368313f177d6669acbdce257db01e7674f32988cf9c02c not found: ID does not exist" Feb 17 17:41:03 crc kubenswrapper[4874]: I0217 17:41:03.904288 4874 scope.go:117] 
"RemoveContainer" containerID="1390b72b3f07144cd5ab4c1903845e876dc54797d03d696c9a512371083f04a0" Feb 17 17:41:03 crc kubenswrapper[4874]: E0217 17:41:03.904547 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1390b72b3f07144cd5ab4c1903845e876dc54797d03d696c9a512371083f04a0\": container with ID starting with 1390b72b3f07144cd5ab4c1903845e876dc54797d03d696c9a512371083f04a0 not found: ID does not exist" containerID="1390b72b3f07144cd5ab4c1903845e876dc54797d03d696c9a512371083f04a0" Feb 17 17:41:03 crc kubenswrapper[4874]: I0217 17:41:03.904601 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1390b72b3f07144cd5ab4c1903845e876dc54797d03d696c9a512371083f04a0"} err="failed to get container status \"1390b72b3f07144cd5ab4c1903845e876dc54797d03d696c9a512371083f04a0\": rpc error: code = NotFound desc = could not find container \"1390b72b3f07144cd5ab4c1903845e876dc54797d03d696c9a512371083f04a0\": container with ID starting with 1390b72b3f07144cd5ab4c1903845e876dc54797d03d696c9a512371083f04a0 not found: ID does not exist" Feb 17 17:41:03 crc kubenswrapper[4874]: I0217 17:41:03.905775 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l6kpc"] Feb 17 17:41:04 crc kubenswrapper[4874]: I0217 17:41:04.470907 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584cad6b-ec14-4692-bf03-abfaef78adb1" path="/var/lib/kubelet/pods/584cad6b-ec14-4692-bf03-abfaef78adb1/volumes" Feb 17 17:41:07 crc kubenswrapper[4874]: E0217 17:41:07.460607 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 
17:41:08 crc kubenswrapper[4874]: E0217 17:41:08.464429 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:41:20 crc kubenswrapper[4874]: E0217 17:41:20.471694 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:41:20 crc kubenswrapper[4874]: E0217 17:41:20.471703 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:41:34 crc kubenswrapper[4874]: E0217 17:41:34.459907 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:41:35 crc kubenswrapper[4874]: E0217 17:41:35.459987 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" 
Feb 17 17:41:46 crc kubenswrapper[4874]: E0217 17:41:46.460338 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:41:48 crc kubenswrapper[4874]: E0217 17:41:48.460526 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:41:57 crc kubenswrapper[4874]: I0217 17:41:57.725005 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:41:57 crc kubenswrapper[4874]: I0217 17:41:57.725624 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:41:58 crc kubenswrapper[4874]: E0217 17:41:58.459730 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:42:03 crc 
kubenswrapper[4874]: E0217 17:42:03.460580 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:42:11 crc kubenswrapper[4874]: E0217 17:42:11.459267 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:42:15 crc kubenswrapper[4874]: E0217 17:42:15.459444 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:42:23 crc kubenswrapper[4874]: E0217 17:42:23.460527 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:42:27 crc kubenswrapper[4874]: I0217 17:42:27.724609 4874 patch_prober.go:28] interesting pod/machine-config-daemon-cccdg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:42:27 crc kubenswrapper[4874]: I0217 
17:42:27.725191 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cccdg" podUID="75d87243-c32f-4eb1-9049-24409fc6ea39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:42:30 crc kubenswrapper[4874]: E0217 17:42:30.470732 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:42:36 crc kubenswrapper[4874]: E0217 17:42:36.459761 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4" Feb 17 17:42:43 crc kubenswrapper[4874]: E0217 17:42:43.464804 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-ddhb8" podUID="122736d5-78f5-42dc-b6ab-343724bac19d" Feb 17 17:42:47 crc kubenswrapper[4874]: E0217 17:42:47.459947 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="cc29c300-b515-47d8-9326-1839ed7772b4"